NASA Astrophysics Data System (ADS)
Hsiao, Ming-Chih; Su, Ling-Huey
2018-02-01
This research addresses the problem of scheduling hybrid machine types, in which one type is a two-machine flowshop and the other is a single machine. A job is processed either on the two-machine flowshop or on the single machine. The objective is to determine a production schedule for all jobs so as to minimize the makespan. The problem is NP-hard, since the two-parallel-machines problem has been proved NP-hard. Two simulated annealing (SA) algorithms are developed to solve the problem. A mixed integer programming (MIP) model is developed and used to evaluate the performance of the two SAs. Computational experiments demonstrate the efficiency of the simulated annealing algorithms; their solution quality is also reported.
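Since the abstract gives only the outline of the SA approach, the following is a minimal sketch of how such an algorithm might look, assuming each job carries flowshop times f1, f2 and a single-machine time s; the neighbourhood move, cooling schedule, and Johnson-rule ordering inside the flowshop are illustrative choices, not the authors' exact design.

```python
import math
import random

def makespan(jobs, assign):
    """Makespan of the hybrid system: jobs with assign[id] == 0 run on a
    two-machine flowshop (times f1, f2); the rest run on the single machine."""
    flow = [j for j in jobs if assign[j["id"]] == 0]
    # Johnson's rule gives the optimal order inside the two-machine flowshop.
    flow.sort(key=lambda j: (0, j["f1"]) if j["f1"] <= j["f2"] else (1, -j["f2"]))
    c1 = c2 = 0.0
    for j in flow:
        c1 += j["f1"]
        c2 = max(c1, c2) + j["f2"]
    single = sum(j["s"] for j in jobs if assign[j["id"]] == 1)
    return max(c2, single)

def simulated_annealing(jobs, t0=100.0, cooling=0.95, iters=2000):
    """SA over the job-to-machine-type assignment; a neighbour flips one job."""
    assign = {j["id"]: random.randint(0, 1) for j in jobs}
    best = cur = makespan(jobs, assign)
    best_assign, t = dict(assign), t0
    for _ in range(iters):
        k = random.choice(jobs)["id"]
        assign[k] ^= 1                          # move one job to the other type
        cand = makespan(jobs, assign)
        if cand <= cur or random.random() < math.exp((cur - cand) / t):
            cur = cand
            if cur < best:
                best, best_assign = cur, dict(assign)
        else:
            assign[k] ^= 1                      # reject: undo the move
        t *= cooling
    return best, best_assign
```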
NASA Technical Reports Server (NTRS)
Moore, J. E.
1975-01-01
An enumeration algorithm is presented for solving a scheduling problem similar to the single machine job shop problem with sequence dependent setup times. The scheduling problem differs from the job shop problem in two ways. First, its objective is to select an optimum subset of the available tasks to be performed during a fixed period of time. Secondly, each task scheduled is constrained to occur within its particular scheduling window. The algorithm is currently being used to develop typical observational timelines for a telescope that will be operated in earth orbit. Computational times associated with timeline development are presented.
Scheduling Jobs and a Variable Maintenance on a Single Machine with Common Due-Date Assignment
Wan, Long
2014-01-01
We investigate a common due-date assignment scheduling problem with a variable maintenance on a single machine. The goal is to minimize the total earliness, tardiness, and due-date cost. We derive some properties of an optimal solution for our problem. For a special case with identical jobs we propose an optimal polynomial time algorithm, followed by a numerical example. PMID:25147861
NASA Astrophysics Data System (ADS)
Lu, Yuan-Yuan; Wang, Ji-Bo; Ji, Ping; He, Hongyu
2017-09-01
In this article, single-machine group scheduling with learning effects and convex resource allocation is studied. The goal is to find the optimal job schedule, the optimal group schedule, and resource allocations of jobs and groups. For the problem of minimizing the makespan subject to limited resource availability, it is proved that the problem can be solved in polynomial time under the condition that the setup times of groups are independent. For the general setup times of groups, a heuristic algorithm and a branch-and-bound algorithm are proposed, respectively. Computational experiments show that the performance of the heuristic algorithm is fairly accurate in obtaining near-optimal solutions.
5 CFR 532.279 - Special wage schedules for printing positions.
Code of Federal Regulations, 2011 CFR
2011-01-01
... Opaquer 4 Offset Press Helper 5 Bindery Machine Operator (Helper) 5 Film Assembler-Stripper (Single Flat-Single Color) 5 Platemaker (Single Color) 5 Film Assembler-Stripper (Partial and Composite Flats) 7... Cutter) 8 Bindery Machine Operator (Power Folder) 8 Film Assembler-Stripper (Multiple Flat-Multiple Color...
5 CFR 532.279 - Special wage schedules for printing positions.
Code of Federal Regulations, 2012 CFR
2012-01-01
... Opaquer 4 Offset Press Helper 5 Bindery Machine Operator (Helper) 5 Film Assembler-Stripper (Single Flat-Single Color) 5 Platemaker (Single Color) 5 Film Assembler-Stripper (Partial and Composite Flats) 7... Cutter) 8 Bindery Machine Operator (Power Folder) 8 Film Assembler-Stripper (Multiple Flat-Multiple Color...
Single machine scheduling with slack due dates assignment
NASA Astrophysics Data System (ADS)
Liu, Weiguo; Hu, Xiangpei; Wang, Xuyin
2017-04-01
This paper considers a single machine scheduling problem in which each job is assigned an individual due date based on a common flow allowance (i.e., all jobs have slack due dates). The goal is to find a sequence of the jobs, together with a due date assignment, that minimizes a non-regular criterion comprising the total weighted absolute lateness value and the common flow allowance cost, where the weight is a position-dependent weight. In order to solve this problem, an ? time algorithm is proposed. Some extensions of the problem are also shown.
NASA Astrophysics Data System (ADS)
Birgin, Ernesto G.; Ronconi, Débora P.
2012-10-01
The single machine scheduling problem with a common due date and non-identical ready times for the jobs is examined in this work. Performance is measured by the minimization of the weighted sum of earliness and tardiness penalties of the jobs. Since this problem is NP-hard, the application of constructive heuristics that exploit specific characteristics of the problem to improve their performance is investigated. The proposed approaches are examined through a computational comparative study on a set of 280 benchmark test problems with up to 1000 jobs.
Learning dominance relations in combinatorial search problems
NASA Technical Reports Server (NTRS)
Yu, Chee-Fen; Wah, Benjamin W.
1988-01-01
Dominance relations commonly are used to prune unnecessary nodes in search graphs, but they are problem-dependent and cannot be derived by a general procedure. The authors identify machine learning of dominance relations and the applicable learning mechanisms. A study of learning dominance relations using learning by experimentation is described. This system has been able to learn dominance relations for the 0/1-knapsack problem, an inventory problem, the reliability-by-replication problem, the two-machine flow shop problem, a number of single-machine scheduling problems, and a two-machine scheduling problem. It is considered that the same methodology can be extended to learn dominance relations in general.
Some single-machine scheduling problems with learning effects and two competing agents.
Li, Hongjie; Li, Zeyuan; Yin, Yunqiang
2014-01-01
This study considers a scheduling environment in which there are two agents and a set of jobs, each of which belongs to one of the two agents and its actual processing time is defined as a decreasing linear function of its starting time. Each of the two agents competes to process its respective jobs on a single machine and has its own scheduling objective to optimize. The objective is to assign the jobs so that the resulting schedule performs well with respect to the objectives of both agents. The objective functions addressed in this study include the maximum cost, the total weighted completion time, and the discounted total weighted completion time. We investigate three problems arising from different combinations of the objectives of the two agents. The computational complexity of the problems is discussed and solution algorithms where possible are presented.
NASA Astrophysics Data System (ADS)
Wang, Ji-Bo; Wang, Ming-Zheng; Ji, Ping
2012-05-01
In this article, we consider a single machine scheduling problem with a time-dependent learning effect and deteriorating jobs. By the effects of time-dependent learning and deterioration, we mean that the job processing time is defined by a function of its starting time and total normal processing time of jobs in front of it in the sequence. The objective is to determine an optimal schedule so as to minimize the total completion time. This problem remains open for the case of -1 < a < 0, where a denotes the learning index; we show that an optimal schedule of the problem is V-shaped with respect to job normal processing times. Three heuristic algorithms utilising the V-shaped property are proposed, and computational experiments show that the last heuristic algorithm performs effectively and efficiently in obtaining near-optimal solutions.
ERIC Educational Resources Information Center
Sukwong, Orathai
2013-01-01
Virtualization enables the ability to consolidate multiple servers on a single physical machine, increasing the infrastructure utilization. Maximizing the ratio of server virtual machines (VMs) to physical machines, namely the consolidation ratio, becomes an important goal toward infrastructure cost saving in a cloud. However, the consolidation…
Stochastic scheduling on a repairable manufacturing system
NASA Astrophysics Data System (ADS)
Li, Wei; Cao, Jinhua
1995-08-01
In this paper, we consider some stochastic scheduling problems with a set of stochastic jobs on a manufacturing system with a single machine that is subject to multiple breakdowns and repairs. When the machine processing a job fails, the job processing must restart some time later when the machine is repaired. For this typical manufacturing system, we find the optimal policies that minimize the following objective functions: (1) the weighted sum of the completion times; (2) the weighted number of late jobs having constant due dates; (3) the weighted number of late jobs having exponentially distributed random due dates. These results generalize some previous results.
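For objective (1), the classic single-machine policy is the WSEPT rule (largest weight over expected processing time first); below is a small sketch under the assumption that each job exposes a weight and a mean processing time — the field names are illustrative.

```python
def wsept_order(jobs):
    """Sort jobs in non-increasing order of weight / expected processing time
    (the WSEPT rule), a classic policy for minimizing the expected weighted
    sum of completion times on a single machine."""
    return sorted(jobs, key=lambda j: j["weight"] / j["mean_proc_time"],
                  reverse=True)

# Hypothetical data: (weight, mean processing time)
jobs = [{"id": 1, "weight": 3.0, "mean_proc_time": 2.0},
        {"id": 2, "weight": 1.0, "mean_proc_time": 4.0},
        {"id": 3, "weight": 5.0, "mean_proc_time": 5.0}]
print([j["id"] for j in wsept_order(jobs)])  # -> [1, 3, 2]
```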
A parallel-machine scheduling problem with two competing agents
NASA Astrophysics Data System (ADS)
Lee, Wen-Chiung; Chung, Yu-Hsiang; Wang, Jen-Ya
2017-06-01
Scheduling with two competing agents has become popular in recent years. Most of the research has focused on single-machine problems. This article considers a parallel-machine problem, the objective of which is to minimize the total completion time of jobs from the first agent given that the maximum tardiness of jobs from the second agent cannot exceed an upper bound. The NP-hardness of this problem is also examined. A genetic algorithm equipped with local search is proposed to search for the near-optimal solution. Computational experiments are conducted to evaluate the proposed genetic algorithm.
NASA Astrophysics Data System (ADS)
Yusriski, R.; Sukoyo; Samadhi, T. M. A. A.; Halim, A. H.
2018-03-01
This research deals with a single machine batch scheduling model considering the influence of learning, forgetting, and machine deterioration effects. The objective of the model is to minimize the total inventory holding cost, and the decision variables are the number of batches (N), the batch sizes (Q[i], i = 1, 2, ..., N) and the sequence in which the resulting batches are processed. The parts to be processed are received at the right time and in the right quantities, and all completed parts must be delivered at a common due date. We propose a heuristic procedure based on the Lagrange method to solve the problem. The effectiveness of the procedure is evaluated by comparing the resulting solution to the optimal solution obtained from an enumeration procedure using the integer composition technique; the average effectiveness is 94%.
Assessment of New Load Schedules for the Machine Calibration of a Force Balance
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Gisler, R.; Kew, R.
2015-01-01
New load schedules for the machine calibration of a six-component force balance are currently being developed and evaluated at the NASA Ames Balance Calibration Laboratory. One of the proposed load schedules is discussed in the paper. It has a total of 2082 points that are distributed across 16 load series. Several criteria were applied to define the load schedule. It was decided, for example, to specify the calibration load set in force balance format as this approach greatly simplifies the definition of the lower and upper bounds of the load schedule. In addition, all loads are assumed to be applied in a calibration machine by using the one-factor-at-a-time approach. At first, all single-component loads are applied in six load series. Then, three two-component load series are applied. They consist of the load pairs (N1, N2), (S1, S2), and (RM, AF). Afterwards, four three-component load series are applied. They consist of the combinations (N1, N2, AF), (S1, S2, AF), (N1, N2, RM), and (S1, S2, RM). In the next step, one four-component load series is applied. It is the load combination (N1, N2, S1, S2). Finally, two five-component load series are applied. They are the load combination (N1, N2, S1, S2, AF) and (N1, N2, S1, S2, RM). The maximum difference between loads of two subsequent data points of the load schedule is limited to 33 % of capacity. This constraint helps avoid unwanted load "jumps" in the load schedule that can have a negative impact on the performance of a calibration machine. Only loadings of the single- and two-component load series are loaded to 100 % of capacity. This approach was selected because it keeps the total number of calibration points to a reasonable limit while still allowing for the application of some of the more complex load combinations. Data from two of NASA's force balances is used to illustrate important characteristics of the proposed 2082-point calibration load schedule.
Hybrid Metaheuristics for Solving a Fuzzy Single Batch-Processing Machine Scheduling Problem
Molla-Alizadeh-Zavardehi, S.; Tavakkoli-Moghaddam, R.; Lotfi, F. Hosseinzadeh
2014-01-01
This paper deals with a problem of minimizing total weighted tardiness of jobs in a real-world single batch-processing machine (SBPM) scheduling in the presence of fuzzy due date. In this paper, first a fuzzy mixed integer linear programming model is developed. Then, due to the complexity of the problem, which is NP-hard, we design two hybrid metaheuristics called GA-VNS and VNS-SA applying the advantages of genetic algorithm (GA), variable neighborhood search (VNS), and simulated annealing (SA) frameworks. Besides, we propose three fuzzy earliest due date heuristics to solve the given problem. Through computational experiments with several random test problems, a robust calibration is applied on the parameters. Finally, computational results on different-scale test problems are presented to compare the proposed algorithms. PMID:24883359
Multi-objective group scheduling optimization integrated with preventive maintenance
NASA Astrophysics Data System (ADS)
Liao, Wenzhu; Zhang, Xiufang; Jiang, Min
2017-11-01
This article proposes a single-machine-based integration model to meet the requirements of production scheduling and preventive maintenance in group production. To describe the production of identical/similar and different jobs, this integrated model considers the learning and forgetting effects. Based on machine degradation, the deterioration effect is also considered. Moreover, perfect maintenance and minimal repair are adopted in this integrated model. A multi-objective formulation minimizing total completion time and maintenance cost is adopted to meet the dual requirements of delivery date and cost. Finally, a genetic algorithm is developed to solve this optimization model, and the computational results demonstrate that this integrated model is effective and reliable.
NASA Astrophysics Data System (ADS)
Zhang, Xingong; Yin, Yunqiang; Wu, Chin-Chia
2017-01-01
There is a situation found in many manufacturing systems, such as steel rolling mills, fire fighting or single-server cycle-queues, where a job that is processed later consumes more time than that same job when processed earlier. The research finds that machine maintenance can counteract the worsening of processing conditions. After a maintenance activity, the machine is restored. The maintenance duration is a positive and non-decreasing differentiable convex function of the total processing times of the jobs between maintenance activities. Motivated by this observation, the makespan and the total completion time minimization problems in the scheduling of jobs with non-decreasing rates of job processing time on a single machine are considered in this article. It is shown that both the makespan and the total completion time minimization problems are NP-hard in the strong sense when the number of maintenance activities is arbitrary, while the makespan minimization problem is NP-hard in the ordinary sense when the number of maintenance activities is fixed. If the deterioration rates of the jobs are identical and the maintenance duration is a linear function of the total processing times of the jobs between maintenance activities, then this article shows that the group balance principle is satisfied for the makespan minimization problem. Furthermore, two polynomial-time algorithms are presented for solving the makespan problem and the total completion time problem under identical deterioration rates, respectively.
Techniques for cash management in scheduling manufacturing operations
NASA Astrophysics Data System (ADS)
Morady Gohareh, Mehdy; Shams Gharneh, Naser; Ghasemy Yaghin, Reza
2017-06-01
The objective in traditional scheduling is usually time based. Minimizing the makespan, total flow time, total tardiness costs, etc. are instances of these objectives. In manufacturing, processing each job entails paying a cost and receiving a price. Thus, the objective should include some notion of managing the flow of cash. We have defined two new objectives: maximization of the average and of the minimum available cash. For single machine scheduling, it is demonstrated that scheduling jobs in decreasing order of profit ratios maximizes the former and improves productivity. Moreover, scheduling jobs in increasing order of costs and breaking ties in decreasing order of prices maximizes the latter and creates protection against financial instability.
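A minimal sketch of the first rule, assuming each job's "profit ratio" is its price divided by its cost (the paper may define the ratio differently):

```python
def schedule_by_profit_ratio(jobs):
    """Sequence jobs in decreasing order of profit ratio (assumed here to be
    price/cost); per the abstract, this maximizes average available cash."""
    return sorted(jobs, key=lambda j: j["price"] / j["cost"], reverse=True)

def average_available_cash(seq, cash0=0.0):
    """Cash after each job = cash so far - cost + price; return the average
    cash level over the sequence."""
    cash, levels = cash0, []
    for j in seq:
        cash += j["price"] - j["cost"]
        levels.append(cash)
    return sum(levels) / len(levels)
```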
Code of Federal Regulations, 2011 CFR
2011-07-01
... machine cards not available from Federal Supply Schedule contracts. 101-26.509-2 Section 101-26.509-2... Programs § 101-26.509-2 Requisitioning tabulating machine cards not available from Federal Supply Schedule contracts. (a) Requisitions for tabulating machine cards covered by Federal Supply Schedule contracts which...
Zhao, Chuan-Li; Hsu, Chou-Jung; Hsu, Hua-Feng
2014-01-01
This paper considers single machine scheduling and due date assignment with setup time. The setup time is proportional to the length of the already processed jobs; that is, the setup time is past-sequence-dependent (p-s-d). It is assumed that a job's processing time depends on its position in a sequence. The objective functions include total earliness, the weighted number of tardy jobs, and the cost of due date assignment. We analyze these problems with two different due date assignment methods. We first consider the model with job-dependent position effects. For each case, by converting the problem to a series of assignment problems, we prove that the problems can be solved in O(n^4) time. For the model with job-independent position effects, we prove that the problems can be solved in O(n^3) time by providing a dynamic programming algorithm. PMID:25258727
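One step of the assignment-problem conversion can be sketched as follows: given a precomputed matrix cost[j][r] of the cost of placing job j at position r (which would fold in the position-dependent processing times and the due date assignment method), a linear assignment solver returns the optimal job-to-position mapping. The matrix values below are hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def best_positions(cost):
    """Solve one positional-cost assignment problem: cost[j][r] is the cost
    of placing job j at sequence position r."""
    rows, cols = linear_sum_assignment(cost)
    order = [0] * len(rows)
    for j, r in zip(rows, cols):
        order[r] = j                    # job j occupies position r
    return order, cost[rows, cols].sum()

# Hypothetical 3-job example
cost = np.array([[4.0, 6.0, 9.0],
                 [3.0, 5.0, 7.0],
                 [2.0, 2.5, 3.0]])
order, total = best_positions(cost)
```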
Research on Production Scheduling System with Bottleneck Based on Multi-agent
NASA Astrophysics Data System (ADS)
Zhenqiang, Bao; Weiye, Wang; Peng, Wang; Pan, Quanke
Aimed at the resource capacity imbalance problem in production scheduling systems, this paper uses a multi-agent-based production scheduling system that has been constructed and combines the dynamic and autonomous properties of agents, so that the bottleneck problem in scheduling is solved dynamically. Firstly, this paper uses the Bottleneck Resource Agent to find the bottleneck resource in the production line, analyses the inherent mechanism of the bottleneck, and describes the production scheduling process based on the bottleneck resource. The Bottleneck Decomposition Agent harmonizes the job arrival and transfer times between the Bottleneck Resource Agent and the Non-Bottleneck Resource Agents; therefore, the dynamic scheduling problem is simplified into single-machine scheduling for each resource taking part in the scheduling. Finally, the dynamic real-time scheduling problem is effectively solved in the production scheduling system.
Single product lot-sizing on unrelated parallel machines with non-decreasing processing times
NASA Astrophysics Data System (ADS)
Eremeev, A.; Kovalyov, M.; Kuznetsov, P.
2018-01-01
We consider a problem in which at least a given quantity of a single product has to be partitioned into lots, and lots have to be assigned to unrelated parallel machines for processing. In one version of the problem, the maximum machine completion time should be minimized; in another version, the sum of machine completion times is to be minimized. Machine-dependent lower and upper bounds on the lot size are given. The product is either assumed to be continuously divisible or discrete. The processing time of each machine is defined by an increasing function of the lot volume, given as an oracle. Setup times and costs are assumed to be negligibly small and are therefore not considered. We derive optimal polynomial time algorithms for several special cases of the problem. An NP-hard case is shown to admit a fully polynomial time approximation scheme. An application of the problem to energy-efficient processor scheduling is considered.
NASA Astrophysics Data System (ADS)
Sivarami Reddy, N.; Ramamurthy, D. V., Dr.; Prahlada Rao, K., Dr.
2017-08-01
This article addresses the simultaneous scheduling of machines, AGVs and tools where machines are allowed to share the tools, considering transfer times of jobs and tools between machines, to generate optimal sequences that minimize makespan in a multi-machine Flexible Manufacturing System (FMS). The performance of an FMS is expected to improve by effective utilization of its resources and by proper integration and synchronization of their scheduling. The Symbiotic Organisms Search (SOS) algorithm is a potent tool that has proven itself a strong alternative for solving optimization problems such as scheduling. The proposed SOS algorithm is tested on 22 job sets with makespan as the objective for scheduling of machines and tools where machines are allowed to share tools, without considering transfer times of jobs and tools, and the results are compared with the results of existing methods. The results show that SOS outperforms the existing methods. The same SOS algorithm is then used for simultaneous scheduling of machines, AGVs and tools where machines are allowed to share tools, considering transfer times of jobs and tools, to determine the optimal sequences that minimize makespan.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoginath, Srikanth B; Perumalla, Kalyan S; Henz, Brian J
2012-01-01
In prior work (Yoginath and Perumalla, 2011; Yoginath, Perumalla and Henz, 2012), the motivation, challenges and issues were articulated in favor of virtual time ordering of Virtual Machines (VMs) in network simulations hosted on multi-core machines. Two major components in the overall virtualization challenge are (1) virtual timeline establishment and scheduling of VMs, and (2) virtualization of inter-VM communication. Here, we extend prior work by presenting scaling results for the first component, with experimental results on up to 128 VMs scheduled in virtual time order on a single 12-core host. We also explore the solution space of design alternatives for the second component, and present performance results from a multi-threaded, multi-queue implementation of inter-VM network control for synchronized execution with VM scheduling, incorporated in our NetWarp simulation system.
Yue, Lei; Guan, Zailin; Saif, Ullah; Zhang, Fei; Wang, Hao
2016-01-01
Group scheduling is significant for an efficient and cost-effective production system. However, there exist setup times between the groups, which need to be reduced by sequencing the groups in an efficient way. The current research focuses on a sequence-dependent group scheduling problem with the aim of minimizing the makespan and the total weighted tardiness simultaneously. In most production scheduling problems, the processing time of jobs is assumed to be fixed. However, the actual processing time of jobs may be reduced due to the "learning effect". The integration of sequence-dependent group scheduling with learning effects has rarely been considered in the literature. Therefore, the current research considers a single machine group scheduling problem with sequence-dependent setup times and learning effects simultaneously. A novel hybrid Pareto artificial bee colony algorithm (HPABC) incorporating some steps of the genetic algorithm is proposed for this problem to obtain Pareto solutions. Furthermore, five different sizes of test problems (small, small-medium, medium, large-medium, large) are tested using the proposed HPABC. The Taguchi method is used to tune the effective parameters of the proposed HPABC for each problem category. The performance of HPABC is compared with three well-known multi-objective optimization algorithms: the improved strength Pareto evolutionary algorithm (SPEA2), the non-dominated sorting genetic algorithm II (NSGAII) and the particle swarm optimization algorithm (PSO). Results indicate that HPABC outperforms SPEA2, NSGAII and PSO and gives better Pareto optimal solutions in terms of diversity and quality for almost all the instances of the different problem sizes.
NASA Astrophysics Data System (ADS)
Amallynda, I.; Santosa, B.
2017-11-01
This paper proposes a new generalization of the distributed parallel machine and assembly scheduling problem (DPMASP) with eligibility constraints, referred to as the modified distributed parallel machine and assembly scheduling problem (MDPMASP) with eligibility constraints. Within this generalization, we assume that there is a set of non-identical factories or production lines, each one with a set of unrelated parallel machines with different processing speeds, connected in series to a single assembly machine. A set of different products is manufactured through an assembly program from a set of components (jobs) according to the requested demand. Each product requires several kinds of jobs with different sizes. Besides that, we also consider the multi-objective problem (MOP) of minimizing the mean flow time and the number of tardy products simultaneously. This problem is known to be NP-hard and is important in practice, as these criteria reflect the customer's demand and the manufacturer's perspective. Since this is a realistic and complex problem with a wide range of possible solutions, we propose four simple heuristics and two metaheuristics to solve it. Various parameters of the proposed metaheuristic algorithms are discussed and calibrated by means of the Taguchi technique. All proposed algorithms are tested in Matlab. Our computational experiments indicate that the proposed problem and the proposed algorithms are able to be implemented and can be used to solve moderately-sized instances, giving efficient solutions which are close to optimum in most cases.
NASA Astrophysics Data System (ADS)
Gu, Cunchang; Mu, Yundong
2013-03-01
In this paper, we consider a single machine on-line scheduling problem with special chain precedence constraints and delivery times. All jobs arrive over time. Chain chain_i arrives at time r_i, and it is known in advance that the processing and delivery times of the jobs on a chain satisfy one special condition: if job J_j^(i) is the predecessor of job J_k^(i) on chain_i, then p_j^(i) = p_k^(i) = p >= q_j >= q_k, i = 1, 2, ..., n, where p_j and q_j denote the processing time and the delivery time of job J_j, respectively. Obviously, if an arriving job has no chain precedence, the length of the corresponding chain is 1. The objective is to minimize the time by which all jobs have been delivered. We provide an on-line algorithm with a competitive ratio of √2, and this result is the best possible.
Full glowworm swarm optimization algorithm for whole-set orders scheduling in single machine.
Yu, Zhang; Yang, Xiaomei
2013-01-01
By analyzing the characteristics of the whole-set orders problem and combining it with the theory of glowworm swarm optimization, a new glowworm swarm optimization algorithm for scheduling is proposed. A new hybrid encoding scheme combining two-dimensional encoding and random-key encoding is given. In order to enhance the capability of optimal searching and speed up the convergence rate, a dynamically changing step strategy is integrated into the algorithm. Furthermore, experimental results demonstrate its feasibility and efficiency.
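Random-key encoding, one half of the hybrid scheme, decodes a vector of real-valued keys into a permutation by sorting; a minimal sketch:

```python
import random

def random_key_decode(keys):
    """Decode a random-key chromosome into a job sequence: job indices
    sorted by their key values give the processing order."""
    return sorted(range(len(keys)), key=keys.__getitem__)

keys = [random.random() for _ in range(6)]   # one key per job
sequence = random_key_decode(keys)           # e.g. [3, 0, 5, 1, 4, 2]
```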
Planning for rover opportunistic science
NASA Technical Reports Server (NTRS)
Gaines, Daniel M.; Estlin, Tara; Forest, Fisher; Chouinard, Caroline; Castano, Rebecca; Anderson, Robert C.
2004-01-01
The Mars Exploration Rover Spirit recently set a record for the furthest distance traveled in a single sol on Mars. Future planetary exploration missions are expected to use even longer drives to position rovers in areas of high scientific interest. This increase provides the potential for a large rise in the number of new science collection opportunities as the rover traverses the Martian surface. In this paper, we describe the OASIS system, which provides autonomous capabilities for dynamically identifying and pursuing these science opportunities during long-range traverses. OASIS uses machine learning and planning and scheduling techniques to address this goal. Machine learning techniques are applied to analyze data as it is collected and to quickly determine new science goals and priorities on these goals. Planning and scheduling techniques are used to alter the behavior of the rover so that new science measurements can be performed while still obeying resource and other mission constraints. We introduce OASIS and describe how planning and scheduling algorithms support opportunistic science.
NASA Astrophysics Data System (ADS)
Wang, Li-Chih; Chen, Yin-Yann; Chen, Tzu-Li; Cheng, Chen-Yang; Chang, Chin-Wei
2014-10-01
This paper studies a solar cell industry scheduling problem, which is similar to traditional hybrid flowshop scheduling (HFS). In a typical HFS problem, the allocation of machine resources for each order should be scheduled in advance. However, the challenge in solar cell manufacturing is that the number of machines can be adjusted dynamically to complete the jobs. An optimal production scheduling model is developed to explore these issues, considering practical characteristics such as hybrid flowshop, parallel machine system, dedicated machines, sequence-independent job setup times and sequence-dependent job setup times. The objective of this model is to minimise the makespan and to decide the processing sequence of the orders/lots in each stage, the lot-splitting decisions for the orders and the number of machines used to satisfy the demands in each stage. From the experimental results, lot-splitting has a significant effect on shortening the makespan, and the improvement is influenced by the processing time and the setup time of orders. Therefore, the threshold point to improve the makespan can be identified. In addition, the model also indicates that more lot-splitting approaches, that is, greater flexibility in allocating orders/lots to machines, will result in better scheduling performance.
Characterization of robotics parallel algorithms and mapping onto a reconfigurable SIMD machine
NASA Technical Reports Server (NTRS)
Lee, C. S. G.; Lin, C. T.
1989-01-01
The kinematics, dynamics, Jacobian, and their corresponding inverse computations are six essential problems in the control of robot manipulators. Efficient parallel algorithms for these computations are discussed and analyzed. Their characteristics are identified and a scheme for mapping these algorithms to a reconfigurable parallel architecture is presented. Based on the characteristics, including type of parallelism, degree of parallelism, uniformity of the operations, fundamental operations, data dependencies, and communication requirements, it is shown that most of the algorithms for robotic computations possess highly regular properties and some common structures, especially the linear recursive structure. Moreover, they are well-suited to be implemented on a single-instruction-stream multiple-data-stream (SIMD) computer with a reconfigurable interconnection network. The model of a reconfigurable dual network SIMD machine with internal direct feedback is introduced. A systematic procedure to map these computations to the proposed machine is presented. A new scheduling problem for SIMD machines is investigated and a heuristic algorithm, called neighborhood scheduling, that reorders the processing sequence of subtasks to reduce the communication time is described. Mapping results of a benchmark algorithm are illustrated and discussed.
Efficient Execution of Microscopy Image Analysis on CPU, GPU, and MIC Equipped Cluster Systems
Andrade, G.; Ferreira, R.; Teodoro, George; Rocha, Leonardo; Saltz, Joel H.; Kurc, Tahsin
2015-01-01
High performance computing is experiencing a major paradigm shift with the introduction of accelerators, such as graphics processing units (GPUs) and Intel Xeon Phi (MIC). These processors have made available tremendous computing power at low cost, and are transforming machines into hybrid systems equipped with CPUs and accelerators. Although these systems can deliver very high peak performance, making full use of their resources in real-world applications is a complex problem. Most current applications deployed to these machines are still executed on a single processor, leaving other devices underutilized. In this paper we explore a scenario in which applications are composed of hierarchical data flow tasks which are allocated to nodes of a distributed memory machine in coarse grain, but each of them may be composed of several finer-grain tasks which can be allocated to different devices within the node. We propose and implement novel performance-aware scheduling techniques that can be used to allocate tasks to devices. We evaluate our techniques using a pathology image analysis application used to investigate brain cancer morphology, and our experimental evaluation shows that the proposed scheduling strategies significantly outperform other efficient scheduling techniques, such as Heterogeneous Earliest Finish Time (HEFT), in cooperative executions using CPUs, GPUs, and MICs. We also show experimentally that our strategies are less sensitive to inaccuracy in the scheduling input data and that the performance gains are maintained as the application scales. PMID:26640423
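The paper's schedulers are more elaborate, but the core idea of performance-aware task-to-device assignment can be sketched as a greedy earliest-finish-time rule, with est_time standing in for a per-device performance model (all names here are assumptions, not the paper's API):

```python
def assign_tasks(tasks, devices, est_time):
    """Greedy earliest-finish-time assignment of fine-grain tasks to devices
    (e.g. CPU/GPU/MIC). est_time(task, device) is a performance-model
    estimate; each task goes to the device where it would finish first."""
    ready = {d: 0.0 for d in devices}   # time at which each device frees up
    placement = {}
    for t in tasks:                     # tasks assumed already ordered
        d = min(devices, key=lambda d: ready[d] + est_time(t, d))
        ready[d] += est_time(t, d)
        placement[t] = d
    return placement, max(ready.values())
```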
Two-machine flow shop scheduling integrated with preventive maintenance planning
NASA Astrophysics Data System (ADS)
Wang, Shijin; Liu, Ming
2016-02-01
This paper investigates an integrated optimisation problem of production scheduling and preventive maintenance (PM) in a two-machine flow shop, with the time to failure of each machine subject to a Weibull probability distribution. The objective is to find the optimal job sequence and the optimal PM decisions before each job such that the expected makespan is minimised. To investigate the value of the integrated scheduling solution, computational experiments on small-scale problems with different configurations are conducted with a total enumeration method, and the results are compared with those of scheduling without maintenance but with machine degradation, and of individual job scheduling combined with independent PM planning. Then, for large-scale problems, four genetic algorithm (GA) based heuristics are proposed. The numerical results for several large problem sizes and different configurations indicate the potential benefits of the integrated scheduling solution, and the results also show that the proposed GA-based heuristics are efficient for the integrated problem.
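The expected makespan under Weibull failures can be estimated by Monte Carlo simulation; the sketch below assumes a fixed repair time and preempt-resume behaviour after a failure, which may differ from the paper's repair model.

```python
import numpy as np

rng = np.random.default_rng(0)

def sim_machine_times(proc, shape, scale, repair):
    """Effective durations on one machine: a Weibull(shape, scale) time-to-
    failure clock runs while jobs process; each failure inserts a fixed
    repair and the clock restarts (preempt-resume assumption)."""
    out, remain = [], rng.weibull(shape) * scale
    for p in proc:
        eff, left = 0.0, p
        while left > remain:                 # failure before the job ends
            eff += remain + repair
            left -= remain
            remain = rng.weibull(shape) * scale
        eff += left
        remain -= left
        out.append(eff)
    return out

def expected_makespan(p1, p2, shape, scale, repair, n_sim=2000):
    """Monte Carlo estimate of the expected two-machine flow-shop makespan
    for one fixed job sequence (p1, p2 are per-job nominal times)."""
    acc = 0.0
    for _ in range(n_sim):
        e1 = sim_machine_times(p1, shape, scale, repair)
        e2 = sim_machine_times(p2, shape, scale, repair)
        c1 = c2 = 0.0
        for a, b in zip(e1, e2):
            c1 += a
            c2 = max(c1, c2) + b
        acc += c2
    return acc / n_sim
```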
Due-Window Assignment Scheduling with Variable Job Processing Times
Wu, Yu-Bin
2015-01-01
We consider a common due-window assignment scheduling problem for jobs with variable processing times on a single machine, where the processing time of a job is a function of its position in a sequence (i.e., learning effect) or its starting time (i.e., deteriorating effect). The problem is to determine the optimal due-windows and the processing sequence simultaneously, to minimize a cost function that includes earliness, tardiness, the window location, the window size, and the weighted number of tardy jobs. We prove that the problem can be solved in polynomial time. PMID:25918745
Scheduling job shop - A case study
NASA Astrophysics Data System (ADS)
Abas, M.; Abbas, A.; Khan, W. A.
2016-08-01
Scheduling in a job shop is important for efficient utilization of machines in the manufacturing industry. There are a number of algorithms available for scheduling of jobs, which depend on the machine tools, indirect consumables and the jobs to be processed. In this paper a case study is presented for scheduling of jobs when parts are processed on the available machines. Through time and motion study, setup time and operation time are measured as the total processing time for a variety of products having different manufacturing processes. Based on due dates, different levels of priority are assigned to the jobs, and the jobs are scheduled on the basis of priority. In view of the measured processing times, the times for processing of some new jobs are estimated, and for efficient utilization of the available machines an algorithm is proposed and validated.
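A minimal sketch of the due-date-based priority scheduling described, assuming each job record carries measured setup and operation times, a due date, and an assigned priority level (field names are illustrative):

```python
def priority_schedule(jobs):
    """Sequence jobs by priority level (derived from due dates: earlier due
    date -> higher priority), breaking ties by due date; processing time per
    job is setup time plus operation time, as measured in the time study."""
    for j in jobs:
        j["ptime"] = j["setup"] + j["operation"]
    return sorted(jobs, key=lambda j: (j["priority"], j["due"]))

jobs = [{"id": "A", "setup": 0.5, "operation": 2.0, "due": 10, "priority": 2},
        {"id": "B", "setup": 0.2, "operation": 1.5, "due": 4,  "priority": 1}]
print([j["id"] for j in priority_schedule(jobs)])  # -> ['B', 'A']
```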
Scheduling Jobs with Variable Job Processing Times on Unrelated Parallel Machines
Zhang, Guang-Qian; Wang, Jian-Jun; Liu, Ya-Jing
2014-01-01
m unrelated parallel machine scheduling problems with variable job processing times are considered, where the processing time of a job is a function of its position in a sequence, its starting time, and its resource allocation. The objective is to determine the optimal resource allocation and the optimal schedule to minimize a total cost function that depends on the total completion (waiting) time, the total machine load, the total absolute differences in completion (waiting) times on all machines, and the total resource cost. If the number of machines is a given constant, we propose a polynomial time algorithm to solve the problem. PMID:24982933
Multiple-variable neighbourhood search for the single-machine total weighted tardiness problem
NASA Astrophysics Data System (ADS)
Chung, Tsui-Ping; Fu, Qunjie; Liao, Ching-Jong; Liu, Yi-Ting
2017-07-01
The single-machine total weighted tardiness (SMTWT) problem is a typical discrete combinatorial optimization problem in the scheduling literature. This problem has been proved to be NP-hard and thus provides a challenging area for metaheuristics, especially the variable neighbourhood search algorithm. In this article, a multiple variable neighbourhood search (m-VNS) algorithm with multiple neighbourhood structures is proposed to solve the problem. Special mechanisms named matching and strengthening operations are employed in the algorithm, which has an auto-revising local search procedure to explore the solution space beyond local optimality. Two aspects, searching direction and searching depth, are considered, and neighbourhood structures are systematically exchanged. Experimental results show that the proposed m-VNS algorithm outperforms all the compared algorithms in solving the SMTWT problem.
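The m-VNS mechanisms (matching, strengthening, auto-revising local search) are specific to the paper, but the underlying VNS loop for the SMTWT problem follows a standard pattern: shake in the k-th neighbourhood, apply local search, then move on improvement or widen the neighbourhood. A generic sketch:

```python
import random

def twt(seq, p, d, w):
    """Total weighted tardiness of a job sequence."""
    t = obj = 0
    for j in seq:
        t += p[j]
        obj += w[j] * max(0, t - d[j])
    return obj

def vns(seq, p, d, w, k_max=3, rounds=200):
    """Basic VNS: shake with k random swaps, descend with adjacent swaps,
    then restart at k=1 on improvement or widen the neighbourhood."""
    best, best_obj, k = seq[:], twt(seq, p, d, w), 1
    for _ in range(rounds):
        cand = best[:]
        for _ in range(k):                          # shaking
            i, j = random.sample(range(len(cand)), 2)
            cand[i], cand[j] = cand[j], cand[i]
        improved = True                             # adjacent-swap descent
        while improved:
            improved = False
            cur = twt(cand, p, d, w)
            for i in range(len(cand) - 1):
                cand[i], cand[i + 1] = cand[i + 1], cand[i]
                new = twt(cand, p, d, w)
                if new < cur:
                    cur, improved = new, True
                else:                               # undo non-improving swap
                    cand[i], cand[i + 1] = cand[i + 1], cand[i]
        cand_obj = twt(cand, p, d, w)
        if cand_obj < best_obj:
            best, best_obj, k = cand[:], cand_obj, 1
        else:
            k = k % k_max + 1
    return best, best_obj
```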
Prediction Based Proactive Thermal Virtual Machine Scheduling in Green Clouds
Kinger, Supriya; Kumar, Rajesh; Sharma, Anju
2014-01-01
Cloud computing has rapidly emerged as a widely accepted computing paradigm, but research on Cloud computing is still at an early stage. Cloud computing provides many advanced features but it still has some shortcomings, such as relatively high operating cost and environmental hazards like increasing carbon footprints. These hazards can be reduced to some extent by efficient scheduling of Cloud resources. The working temperature at which a machine is currently running can be taken as a criterion for Virtual Machine (VM) scheduling. This paper proposes a new proactive technique that considers the current and maximum threshold temperatures of Server Machines (SMs) before making scheduling decisions, with the help of a temperature predictor, so that the maximum temperature is never reached. Different workload scenarios have been taken into consideration. The results obtained show that the proposed system is better than existing systems of VM scheduling, which do not consider the current temperature of nodes before making scheduling decisions. Thus, a reduction in the need for cooling systems in a Cloud environment has been obtained and validated. PMID:24737962
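A minimal sketch of proactive, temperature-aware placement, assuming a pluggable predictor that estimates a server machine's temperature if it received the new VM (the linear predictor below is purely illustrative):

```python
def pick_host(vm_load, hosts, predict_temp):
    """Pick the coolest host whose predicted temperature, after accepting the
    VM, stays below its maximum threshold; defer placement if none qualifies."""
    ok = [h for h in hosts if predict_temp(h, vm_load) < h["max_temp"]]
    if not ok:
        return None                       # no safe host right now
    return min(ok, key=lambda h: predict_temp(h, vm_load))

# Hypothetical predictor: temperature rises linearly with added utilization.
def predict_temp(host, extra_load):
    return host["temp"] + 0.5 * extra_load

hosts = [{"id": "sm1", "temp": 55.0, "max_temp": 70.0},
         {"id": "sm2", "temp": 62.0, "max_temp": 70.0}]
print(pick_host(12.0, hosts, predict_temp)["id"])   # -> 'sm1'
```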
On-Line Scheduling of Parallel Machines
1990-11-01
... machine without losing any work; this is referred to as the preemptive model. In contrast to the nonpreemptive model which we have considered in this paper... that there exists no schedule of length d. The 2-relaxed decision procedure is as follows. Put each job into the queue of the slowest machine Mk such... in their queues. If a machine's queue is empty it takes jobs to process from the queue of the first machine that is slower than it and that has a...
Proposed algorithm to improve job shop production scheduling using ant colony optimization method
NASA Astrophysics Data System (ADS)
Pakpahan, Eka KA; Kristina, Sonna; Setiawan, Ari
2017-12-01
This paper deals with the determination of a job shop production schedule in an automated environment. In this particular environment, machines and the material handling system are integrated and controlled by a computer center, where schedules are created and then used to dictate the movement of parts and the operations at each machine. This setting is usually designed to run an unmanned production process for a specified time interval. We consider here parts with various operation requirements. Each operation requires specific cutting tools. These parts are to be scheduled on machines each having identical capability, meaning that each machine is equipped with a similar set of cutting tools and is therefore capable of processing any operation. The availability of a particular machine to process a particular operation is determined by the remaining life time of its cutting tools. We propose an algorithm based on the ant colony optimization method, implemented in Matlab, to generate a production schedule which minimizes the total processing time of the parts (makespan). We test the algorithm on data provided by a real industry, and the process shows a very short computation time. This contributes a lot to the flexibility and timeliness targeted in an automated environment.
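A generic sketch of the pheromone mechanics behind such an approach: tau[i][j] rewards "job j follows job i" transitions found in good schedules, and cost(seq) would evaluate makespan under the tool-life constraints. This is a simplified sequencing illustration, not the paper's full job shop model.

```python
import random

def aco_sequence(n, cost, ants=20, rounds=50, rho=0.1, q=1.0):
    """Ant colony sketch for a sequencing problem: tau[i][j] is the pheromone
    on 'job j follows job i' (index n is a dummy start); cost(seq) is the
    objective to minimize (e.g. makespan under tooling constraints)."""
    tau = [[1.0] * n for _ in range(n + 1)]
    best, best_cost = None, float("inf")
    for _ in range(rounds):
        for _ in range(ants):
            unvisited, prev, seq = set(range(n)), n, []
            while unvisited:                 # build one tour probabilistically
                cands = list(unvisited)
                weights = [tau[prev][j] for j in cands]
                j = random.choices(cands, weights=weights)[0]
                seq.append(j)
                unvisited.remove(j)
                prev = j
            c = cost(seq)
            if c < best_cost:
                best, best_cost = seq, c
        for row in tau:                      # evaporation
            for j in range(n):
                row[j] *= 1.0 - rho
        prev = n                             # reinforce the best tour so far
        for j in best:
            tau[prev][j] += q / best_cost
            prev = j
    return best, best_cost
```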
49 CFR 214.533 - Schedule of repairs subject to availability of parts.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Maintenance Machines and Hi-Rail Vehicles § 214.533 Schedule of repairs subject to availability of parts. (a... 49 Transportation 4 2011-10-01 2011-10-01 false Schedule of repairs subject to availability of... maintenance machine or a hi-rail vehicle by the end of the next business day following the report of the...
49 CFR 214.533 - Schedule of repairs subject to availability of parts.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Maintenance Machines and Hi-Rail Vehicles § 214.533 Schedule of repairs subject to availability of parts. (a... maintenance machine or a hi-rail vehicle by the end of the next business day following the report of the... maintenance machine or hi-rail vehicle within seven calendar days after receiving the necessary part. The...
Approximation algorithms for scheduling unrelated parallel machines with release dates
NASA Astrophysics Data System (ADS)
Avdeenko, T. V.; Mesentsev, Y. A.; Estraykh, I. V.
2017-01-01
In this paper we propose approaches to optimal scheduling of unrelated parallel machines with release dates. One approach is based on a dynamic programming scheme modified with adaptive narrowing of the search domain, ensuring its computational effectiveness. We discuss the complexity of exact schedule synthesis and compare the exact schedules with approximate, close-to-optimal solutions. We also explain how the algorithm works on an example of two unrelated parallel machines and five jobs with release dates. Performance results that show the efficiency of the proposed approach are given.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, M.; Grimshaw, A.
1996-12-31
The Legion project at the University of Virginia is an architecture for designing and building system services that provide the illusion of a single virtual machine to users, a virtual machine that provides secure shared object and shared name spaces, application adjustable fault-tolerance, improved response time, and greater throughput. Legion targets wide area assemblies of workstations, supercomputers, and parallel supercomputers. Legion tackles problems not solved by existing workstation-based parallel processing tools; the system will enable fault-tolerance, wide area parallel processing, inter-operability, heterogeneity, a single global name space, protection, security, efficient scheduling, and comprehensive resource management. This paper describes the core Legion object model, which specifies the composition and functionality of Legion's core objects - those objects that cooperate to create, locate, manage, and remove objects in the Legion system. The object model facilitates a flexible, extensible implementation, provides a single global name space, grants site autonomy to participating organizations, and scales to millions of sites and trillions of objects.
Single-machine group scheduling problems with deteriorating and learning effect
NASA Astrophysics Data System (ADS)
Xingong, Zhang; Yong, Wang; Shikun, Bai
2016-07-01
The concepts of deteriorating jobs and learning effects have been individually studied in many scheduling problems. However, most studies considering deteriorating and learning effects ignore the fact that production efficiency can be increased by grouping various parts and products with similar designs and/or production processes. This phenomenon is known as 'group technology' in the literature. In this paper, a new group scheduling model with deteriorating and learning effects is proposed, where the learning effect depends not only on the job position, but also on the position of the corresponding job group, and the deteriorating effect depends on the starting time of the job. This paper shows that the makespan and total completion time problems remain polynomially solvable under the proposed model. In addition, a polynomial-time optimal algorithm is also presented for minimising the maximum lateness under a certain agreeable restriction.
Bidding-based autonomous process planning and scheduling
NASA Astrophysics Data System (ADS)
Gu, Peihua; Balasubramanian, Sivaram; Norrie, Douglas H.
1995-08-01
Improving productivity through computer integrated manufacturing systems (CIMS) and concurrent engineering requires that the islands of automation in an enterprise be completely integrated. The first step in this direction is to integrate design, process planning, and scheduling. This can be achieved through a bidding-based process planning approach. The product is represented in a STEP model with detailed design and administrative information including design specifications, batch size, and due dates. Upon arrival at the manufacturing facility, the product is registered with the shop floor manager, which is essentially a coordinating agent. The shop floor manager broadcasts the product's requirements to the machines. The shop contains autonomous machines that have knowledge about their functionality, capabilities, tooling, and schedule. Each machine has its own process planner and responds to the product's request in a way that is consistent with its capabilities and capacities. When more than one machine offers certain process(es) for the same requirements, they enter into negotiation. Based on processing time, due date, and cost, one of the machines wins the contract. The successful machine updates its schedule and advises the product to request raw material for processing. The concept was implemented using a multi-agent system with the task decomposition and planning achieved through contract nets. Examples are included to illustrate the approach.
NASA Astrophysics Data System (ADS)
Ausaf, Muhammad Farhan; Gao, Liang; Li, Xinyu
2015-12-01
For increasing the overall performance of modern manufacturing systems, effective integration of process planning and scheduling functions has been an important area of consideration among researchers. Owing to the complexity of handling process planning and scheduling simultaneously, most of the research work has been limited to solving the integrated process planning and scheduling (IPPS) problem for a single objective function. As there are many conflicting objectives when dealing with process planning and scheduling, real world problems cannot be fully captured considering only a single objective for optimization. Therefore considering the multi-objective IPPS (MOIPPS) problem is inevitable. Unfortunately, only a handful of research papers are available on solving the MOIPPS problem. In this paper, an optimization algorithm for solving the MOIPPS problem is presented. The proposed algorithm uses a set of dispatching rules coupled with priority assignment to optimize the IPPS problem for various objectives like makespan, total machine load, total tardiness, etc. A fixed-size external archive coupled with a crowding distance mechanism is used to store and maintain the non-dominated solutions. To compare the results with other algorithms, a C-metric based method has been used. Instances from four recent papers have been solved to demonstrate the effectiveness of the proposed algorithm. The experimental results show that the proposed method is an efficient approach for solving the MOIPPS problem.
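The crowding distance mechanism used to keep the bounded archive diverse is standard (NSGA-II style); a self-contained sketch:

```python
def crowding_distance(front):
    """Crowding distance for a list of objective vectors: boundary points get
    infinite distance, interior points the normalized span of their
    neighbours, summed over objectives."""
    n = len(front)
    dist = [0.0] * n
    if n == 0:
        return dist
    for k in range(len(front[0])):                  # one pass per objective
        order = sorted(range(n), key=lambda i: front[i][k])
        lo, hi = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if hi == lo:
            continue
        for r in range(1, n - 1):
            dist[order[r]] += (front[order[r + 1]][k]
                               - front[order[r - 1]][k]) / (hi - lo)
    return dist

# Points with the largest distance are kept when the archive overflows.
front = [(10, 4), (8, 5), (6, 7), (5, 9)]
print(crowding_distance(front))
```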
NASA Astrophysics Data System (ADS)
Paksi, A. B. N.; Ma'ruf, A.
2016-02-01
In general, both machines and human resources are needed for processing a job on a production floor. However, most classical scheduling problems have ignored the possible constraint caused by the availability of workers and have considered only machines as a limited resource. In addition, along with production technology development, routing flexibility appears as a consequence of high product variety and medium demand for each product. Routing flexibility arises from the capability of machines that offer more than one machining process. This paper presents a method to address a scheduling problem constrained by both machines and workers, considering routing flexibility. Scheduling in a Dual-Resource Constrained shop is categorized as an NP-hard problem that needs long computational time. A meta-heuristic approach based on a Genetic Algorithm is used due to its practical implementation in industry. The developed Genetic Algorithm uses an indirect chromosome representation and a procedure to transform the chromosome into a Gantt chart. Genetic operators, namely selection, elitism, crossover, and mutation, are developed to search for the best fitness value until a steady state condition is achieved. A case study in a manufacturing SME is used, with tardiness minimization as the objective function. The algorithm has shown a 25.6% reduction in tardiness, equal to 43.5 hours.
NASA Astrophysics Data System (ADS)
Setiawan, A.; Wangsaputra, R.; Martawirya, Y. Y.; Halim, A. H.
2016-02-01
This paper deals with Flexible Manufacturing System (FMS) production rescheduling due to unavailability of cutting tools, caused either by cutting tool failure or by reaching the tool life limit. The FMS consists of parallel identical machines integrated with an automatic material handling system, and it runs fully automatically. Each machine has the same cutting tool configuration, consisting of different geometrical cutting tool types in each tool magazine. A job usually takes two stages. Each stage has sequential operations allocated to machines considering the cutting tool life. In a real situation, a cutting tool can fail before the cutting tool life is reached. The objective of this paper is to develop a dynamic scheduling algorithm for when a cutting tool breaks during unmanned operation and a rescheduling is needed. The algorithm consists of four steps. The first step is generating the initial schedule, the second step is determining the cutting tool failure time, the third step is determining the system status at the cutting tool failure time, and the fourth step is rescheduling the unfinished jobs. The approaches used to solve the problem are complete-reactive scheduling and robust-proactive scheduling. The new schedules result in different starting and completion times of each operation compared with the initial schedule.
Secure Autonomous Automated Scheduling (SAAS). Rev. 1.1
NASA Technical Reports Server (NTRS)
Walke, Jon G.; Dikeman, Larry; Sage, Stephen P.; Miller, Eric M.
2010-01-01
This report describes network-centric operations, where a virtual mission operations center autonomously receives sensor triggers and schedules space and ground assets using Internet-based technologies and service-oriented architectures. For proof-of-concept purposes, sensor triggers are received from the United States Geological Survey (USGS) to determine targets for space-based sensors. The Surrey Satellite Technology Limited (SSTL) Disaster Monitoring Constellation satellite, the UK-DMC, is used as the space-based sensor. The UK-DMC's availability is determined via machine-to-machine communications using SSTL's mission planning system. Access to/from the UK-DMC for tasking and sensor data is via SSTL's and Universal Space Network's (USN) ground assets. The availability and scheduling of USN's assets can also be performed autonomously via machine-to-machine communications. All communication, both on the ground and between ground and space, uses open Internet standards.
Scheduling algorithm for flow shop with two batch-processing machines and arbitrary job sizes
NASA Astrophysics Data System (ADS)
Cheng, Bayi; Yang, Shanlin; Hu, Xiaoxuan; Li, Kai
2014-03-01
This article considers the problem of scheduling two batch-processing machines in a flow shop where the jobs have arbitrary sizes and the machines have limited capacity. The jobs are processed in batches, and the total size of the jobs in each batch cannot exceed the machine capacity. Once a batch is being processed, no interruption is allowed until all the jobs in it are completed. The problem of minimising makespan is NP-hard in the strong sense. First, we present a mathematical model of the problem as an integer programme. We show the scale of the feasible solution space and provide optimality properties. Then, we propose a polynomial time algorithm with running time O(n log n). The jobs are first assigned to feasible batches and then scheduled on the machines. For the general case, we prove that the proposed algorithm has a performance guarantee of 4. For the special case where the processing times of each job on the two machines satisfy p1j = a·p2j, an improved performance guarantee holds for a > 0.
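The capacity constraint in the abstract — batches whose total job size may not exceed the machine capacity — can be illustrated with a simple packing step. The first-fit-decreasing rule below is only an assumed stand-in; the article's own O(n log n) batch-assignment rule is not reproduced here.

```python
# Illustrative batching under a capacity limit: total size per batch <= capacity.
# First-fit decreasing is an assumption, not the article's actual rule.

def form_batches(sizes, capacity):
    batches = []  # each entry: [total_size, [job ids]]
    for j, s in sorted(enumerate(sizes), key=lambda t: -t[1]):
        for b in batches:
            if b[0] + s <= capacity:   # job fits in an open batch
                b[0] += s
                b[1].append(j)
                break
        else:                          # no open batch fits: start a new one
            batches.append([s, [j]])
    return [b[1] for b in batches]

print(form_batches([4, 7, 2, 5, 3], capacity=10))  # [[1, 4], [3, 0], [2]]
```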
Manipulating Tabu List to Handle Machine Breakdowns in Job Shop Scheduling Problems
NASA Astrophysics Data System (ADS)
Nababan, Erna Budhiarti; Sitompul, Opim Salim
2011-06-01
Machine breakdowns in a production schedule may occur at random, which makes the well-known hard combinatorial problem of the Job Shop Scheduling Problem (JSSP) even more complex. One of the popular techniques used to solve such combinatorial problems is Tabu Search. In this technique, moves that are not allowed to be revisited are retained in a tabu list in order to avoid regenerating solutions that have been obtained previously. In this paper, we propose an algorithm that employs a second tabu list to keep broken machines, in addition to the tabu list that keeps the moves. The period for which broken machines are kept on the list is categorized using a fuzzy membership function. Our technique is tested on the JSSP benchmark data available in the OR-Library. From the experiments, we found that our algorithm is promising in helping a decision maker handle machine breakdowns.
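A minimal sketch of the two-list idea follows: alongside the usual move tabu list, a second list makes a broken machine tabu, with a tenure scaled by a fuzzy membership value. The triangular membership function, its parameters, and the tenure formula are all illustrative assumptions; the paper's actual membership function is not reproduced here.

```python
# Sketch: a second tabu list for broken machines, with fuzzy-scaled tenure.

def triangular(x, a, b, c):
    """Triangular fuzzy membership value in [0, 1]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def machine_tenure(expected_repair_hours, base_tenure=10):
    # Longer expected repairs keep the machine tabu for more iterations.
    mu = triangular(expected_repair_hours, 0, 4, 8)
    return int(base_tenure * (1 + mu))

move_tabu = {}     # move -> iteration until which the move is tabu
machine_tabu = {}  # machine -> iteration until which the machine is tabu

def on_breakdown(machine, now_iter, expected_repair_hours):
    machine_tabu[machine] = now_iter + machine_tenure(expected_repair_hours)

on_breakdown("M3", now_iter=42, expected_repair_hours=4)
print(machine_tabu)  # {'M3': 62}
```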
A Solution Method of Scheduling Problem with Worker Allocation by a Genetic Algorithm
NASA Astrophysics Data System (ADS)
Osawa, Akira; Ida, Kenichi
In the scheduling problem with worker allocation (SPWA) proposed by Iima et al., every worker has the same skill level on every machine. In the real world, however, each worker has a different skill level on each machine. For that reason, we propose a new model of SPWA in which a worker has a different skill level on each machine. To solve the problem, we propose a new GA for SPWA consisting of three new procedures: shortening idle time, repairing infeasible solutions into feasible ones, and a new selection method for the GA. The effectiveness of the proposed algorithm is demonstrated by numerical experiments on benchmark problems for job-shop scheduling.
Job shop scheduling problem with late work criterion
NASA Astrophysics Data System (ADS)
Piroozfard, Hamed; Wong, Kuan Yew
2015-05-01
Scheduling is a key task in many industries, spanning project-based scheduling, crew scheduling, flight scheduling, machine scheduling, etc. In the machine scheduling area, job shop scheduling problems are considered important and highly complex; they are characterized as NP-hard. This paper addresses job shop scheduling problems with the late work criterion and non-preemptive jobs. Late work is a fairly new objective function: it is a qualitative measure concerned with the late parts of jobs, unlike classical objective functions, which are quantitative measures. In this work, simulated annealing is presented to solve the scheduling problem. An operation-based representation is used to encode the solution, and a neighbourhood search structure is employed to search for new solutions. The case studies are Lawrence instances taken from the Operations Research Library. Computational results of this probabilistic meta-heuristic algorithm are compared with a conventional genetic algorithm, and conclusions are drawn about the algorithm and the problem.
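As a worked illustration of the criterion: a job's late work is the portion of its processing that falls after its due date, capped at the full processing time. Under the common notation Y_j = min(p_j, max(0, C_j − d_j)), the total late work is the sum over jobs; the snippet below simply evaluates this quantity for a given schedule.

```python
# Worked example of the late work criterion:
#   Y_j = min(p_j, max(0, C_j - d_j)),  total late work = sum of Y_j.

def total_late_work(jobs):
    """jobs: list of (processing_time p, completion_time C, due_date d)."""
    return sum(min(p, max(0, C - d)) for p, C, d in jobs)

# First job runs 2 units past its due date, second is entirely late (capped
# at p = 3), third finishes early and contributes nothing.
print(total_late_work([(5, 8, 6), (3, 10, 4), (4, 4, 7)]))  # 2 + 3 + 0 = 5
```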
NASA Astrophysics Data System (ADS)
Sembiring, N.; Nasution, A. H.
2018-02-01
Corrective maintenance, i.e., replacing or repairing a machine component after the machine breaks down, is standard practice in manufacturing companies. It forces the production process to stop: production time decreases while the maintenance team replaces or repairs the damaged component. This paper proposes a preventive maintenance schedule for a critical component of a critical machine in a crude palm oil and kernel company in order to increase maintenance efficiency. Reliability Engineering & Maintenance Value Stream Mapping is used as the method and tool to analyze the reliability of the component and to reduce waste in the process by segregating value-added and non-value-added activities.
Scheduling of flow shop problems on 3 machines in fuzzy environment with double transport facility
NASA Astrophysics Data System (ADS)
Sathish, Shakeela; Ganesan, K.
2016-06-01
Flow shop scheduling is a decision-making problem in the production and manufacturing field that has a significant impact on the performance of an organization. When the machines on which jobs are to be processed are placed at different locations, the transportation time plays a significant role in production. We further consider two transport agents: the first takes a job from the first machine to the second machine and then returns to the first machine, and the second takes a job from the second machine to the third machine and then returns to the second machine. We propose a method to minimize the total makespan without converting the fuzzy processing times into classical numbers, using a new type of fuzzy arithmetic and a fuzzy ranking method. A numerical example is provided to explain the proposed method.
Cache Sharing and Isolation Tradeoffs in Multicore Mixed-Criticality Systems
2015-05-01
Fragmentary excerpt: the report discusses way-based cache partitioning via lockdown registers, illustrated with respect to a quad-core ARM Cortex A9; a cache-partitioning scheme that allows multiple tasks to share the same cache partition on a single processor; and experiments that determined the fraction of task sets schedulable on the target hardware platform, the quad-core ARM Cortex A9 and its LLC.
Magnetospheric MultiScale (MMS) System Manager
NASA Technical Reports Server (NTRS)
Schiff, Conrad; Maher, Francis Alfred; Henely, Sean Philip; Rand, David
2014-01-01
The Magnetospheric MultiScale (MMS) mission is an ambitious NASA space science mission in which 4 spacecraft are flown in tight formation about a highly elliptical orbit. Each spacecraft has multiple instruments that measure particle and field compositions in the Earth's magnetosphere. By controlling the members' relative motion, MMS can distinguish temporal and spatial fluctuations in a way that a single spacecraft cannot. To achieve this control, 2 sets of four maneuvers, distributed evenly across the spacecraft, must be performed approximately every 14 days. Performing a single maneuver on an individual spacecraft is usually labor intensive, and the complexity clearly increases with four. As a result, the MMS flight dynamics team turned to the System Manager to put routine or error-prone activities under machine control, freeing the analysts for activities that require human judgment. The System Manager is an expert system capable of handling the operations activities associated with performing MMS maneuvers. As an expert system, it can work off a known schedule, launching jobs based on a one-time occurrence or on a set recurring schedule. It is also able to detect situational changes and use event-driven programming to change schedules, adapt activities, or call for help.
Permutation flow-shop scheduling problem to optimize a quadratic objective function
NASA Astrophysics Data System (ADS)
Ren, Tao; Zhao, Peng; Zhang, Da; Liu, Bingqian; Yuan, Huawei; Bai, Danyu
2017-09-01
A flow-shop scheduling model enables appropriate sequencing for each job and for processing on a set of machines in compliance with identical processing orders. The objective is to achieve a feasible schedule for optimizing a given criterion. Permutation is a special setting of the model in which the processing order of the jobs on the machines is identical for each subsequent step of processing. This article addresses the permutation flow-shop scheduling problem to minimize the criterion of total weighted quadratic completion time. With a probability hypothesis, the asymptotic optimality of the weighted shortest processing time schedule under a consistency condition (WSPT-CC) is proven for sufficiently large-scale problems. However, the worst case performance ratio of the WSPT-CC schedule is the square of the number of machines in certain situations. A discrete differential evolution algorithm, where a new crossover method with multiple-point insertion is used to improve the final outcome, is presented to obtain high-quality solutions for moderate-scale problems. A sequence-independent lower bound is designed for pruning in a branch-and-bound algorithm for small-scale problems. A set of random experiments demonstrates the performance of the lower bound and the effectiveness of the proposed algorithms.
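The WSPT rule underlying the article's asymptotic result is easy to state concretely: order jobs by non-decreasing p_j / w_j and evaluate the total weighted quadratic completion time Σ_j w_j C_j². The single-machine evaluation below illustrates the rule itself, not the article's flow-shop algorithms.

```python
# Sketch of WSPT sequencing and the total weighted quadratic completion time
# criterion (single-machine illustration).

def wspt_order(jobs):
    """jobs: list of (p_j, w_j); returns job indices in WSPT order."""
    return sorted(range(len(jobs)), key=lambda j: jobs[j][0] / jobs[j][1])

def total_weighted_quadratic_completion(jobs, order):
    t = total = 0
    for j in order:
        p, w = jobs[j]
        t += p                  # completion time C_j on a single machine
        total += w * t ** 2     # weighted quadratic contribution
    return total

jobs = [(4, 1), (2, 3), (5, 2)]          # (processing time, weight)
order = wspt_order(jobs)                  # ratios 4.0, 0.67, 2.5 -> [1, 2, 0]
print(order, total_weighted_quadratic_completion(jobs, order))  # [1, 2, 0] 231
```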
Job shop scheduling model for non-identic machine with fixed delivery time to minimize tardiness
NASA Astrophysics Data System (ADS)
Kusuma, K. K.; Maruf, A.
2016-02-01
Scheduling problems with non-identical machines, low utilization, and fixed delivery times are frequent in the manufacturing industry. This paper proposes a mathematical model to minimize total tardiness for non-identical machines in a job shop environment. The model is categorized as an integer linear programming model and uses a branch-and-bound algorithm as the solution method. Fixed delivery time is the main constraint, and jobs have different processing times. The results of the proposed model show that the utilization of production machines can be increased with minimal tardiness when fixed delivery time is used as a constraint.
NASA Astrophysics Data System (ADS)
Zhadanovsky, Boris; Sinenko, Sergey
2018-03-01
Economic indicators of construction work, particularly in high-rise construction, are directly related to the choice of the optimal number of machines. A shortage of machinery makes it impossible to complete construction and installation work on schedule. The pace of construction and installation work and labor productivity during high-rise construction largely depend on the degree to which the construction project is provided with machines (the level of work mechanization). When calculating the need for machines on construction projects, it is necessary to ensure that work is completed on schedule, the level of complex mechanization is increased, productivity is raised while manual work is reduced, and the machine fleet is better used and maintained. The selection of machines and the determination of their numbers should be carried out using the formulas presented in this work.
A survey of planning and scheduling research at the NASA Ames Research Center
NASA Technical Reports Server (NTRS)
Zweben, Monte
1989-01-01
NASA Ames Research Center has a diverse program in planning and scheduling. Some research projects as well as some applications are highlighted. Topics addressed include machine learning techniques, action representations and constraint-based scheduling systems. The applications discussed are planetary rovers, Hubble Space Telescope scheduling, and Pioneer Venus orbit scheduling.
Efficient Bounding Schemes for the Two-Center Hybrid Flow Shop Scheduling Problem with Removal Times
Hidri, Lotfi; Gharbi, Anis; Louly, Mohamed Aly
2014-01-01
We focus on the two-center hybrid flow shop scheduling problem with identical parallel machines and removal times. A job's removal time is the duration required to remove it from a machine after its processing. The objective is to minimize the maximum completion time (makespan). A heuristic and a lower bound are proposed for this NP-hard problem. These procedures are based on the optimal solution of the parallel machine scheduling problem with release dates and delivery times. The heuristic is composed of two phases: the first is a constructive phase that provides an initial feasible solution, while the second is an improvement phase. Intensive computational experiments confirm the good performance of the proposed procedures. PMID:25610911
Satellite antenna management system and method
NASA Technical Reports Server (NTRS)
Leath, Timothy T (Inventor); Azzolini, John D (Inventor)
1999-01-01
The antenna management system and method allow a satellite to communicate with a ground station either directly or through the intermediary of a second satellite, thus permitting communication even when the satellite is not within range of the ground station. The system and method employ five major software components: the control and initialization module, the command and telemetry handler module, the contact schedule processor module, the contact state machine module, and the telemetry state machine module. The control and initialization module initializes the system and operates the main control cycle, in which the other modules are called. The command and telemetry handler module handles communication to and from the ground station. The contact schedule processor module handles the contact entry schedules to allow scheduling of contacts with the second satellite. The contact and telemetry state machine modules handle the various states of the satellite in beginning, maintaining, and ending contact with the second satellite and in beginning, maintaining, and ending communication with the satellite.
NASA Astrophysics Data System (ADS)
Bai, Danyu
2015-08-01
This paper discusses the flow shop scheduling problem to minimise the total quadratic completion time (TQCT) with release dates in offline and online environments. For this NP-hard problem, the investigation focuses on the performance of two online algorithms based on the Shortest Processing Time among Available jobs rule. Theoretical results indicate the asymptotic optimality of the algorithms as the problem scale becomes sufficiently large. To further enhance the quality of the original solutions, an improvement scheme is provided for these algorithms. A new lower bound with a performance guarantee is provided, and computational experiments show the effectiveness of these heuristics. Moreover, several results for the single-machine TQCT problem with release dates are also obtained in deducing the main theorem.
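The dispatching rule both online algorithms build on can be sketched directly: whenever the machine becomes free, start the shortest job among those already released. The single-machine version below is an illustrative rendering of the rule, not the paper's two flow-shop algorithms.

```python
# Sketch of the Shortest Processing Time among Available jobs (SPTA) rule,
# single-machine illustration with release dates.

import heapq

def spta(jobs):
    """jobs: list of (release_time r_j, processing_time p_j).
    Returns completion times in the order jobs are processed."""
    jobs = sorted(jobs)                   # by release time
    available, t, i, completions = [], 0, 0, []
    while i < len(jobs) or available:
        if not available:                 # idle until the next release
            t = max(t, jobs[i][0])
        while i < len(jobs) and jobs[i][0] <= t:
            heapq.heappush(available, (jobs[i][1], jobs[i][0]))  # key: p_j
            i += 1
        p, _ = heapq.heappop(available)   # shortest available job
        t += p
        completions.append(t)
    return completions

print(spta([(0, 5), (1, 2), (2, 1)]))  # [5, 6, 8]
```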
Optimization-based manufacturing scheduling with multiple resources and setup requirements
NASA Astrophysics Data System (ADS)
Chen, Dong; Luh, Peter B.; Thakur, Lakshman S.; Moreno, Jack, Jr.
1998-10-01
The increasing demand for on-time delivery and low prices forces manufacturers to seek effective schedules that improve the coordination of multiple resources and reduce internal product costs associated with labor, setups, and inventory. This study describes the design and implementation of a scheduling system for J. M. Product Inc., whose manufacturing is characterized by the need to consider machines and operators simultaneously, where an operator may attend several operations at the same time, and by the presence of machines requiring significant setup times. Scheduling problems with these characteristics are typical for many manufacturers, very difficult to handle, and have not been adequately addressed in the literature. In this study, both machines and operators are modeled as resources with finite capacities to obtain efficient coordination between them, and an operator's time can be shared by several operations at once to make full use of the operator. Setups are explicitly modeled following our previous work, with additional penalties on excessive setups to reduce setup costs and avoid possible scrap. An integer formulation with a separable structure is developed to maximize on-time delivery of products, low inventory, and a small number of setups. Within the Lagrangian relaxation framework, the problem is decomposed into individual subproblems that are effectively solved using dynamic programming with additional penalties embedded in the state transitions. A heuristic is then developed to obtain a feasible schedule, following our previous work, with a new mechanism to satisfy operator capacity constraints. The method has been implemented in the object-oriented programming language C++ with a user-friendly interface, and numerical testing shows that it generates high-quality schedules in a timely fashion. Through simultaneous consideration of machines and operators, the two resources are well coordinated to facilitate the smooth flow of parts through the system. The explicit modeling of setups and the associated penalties lets parts with the same setup requirements be clustered together to avoid excessive setups.
NASA Astrophysics Data System (ADS)
Yusriski, R.; Sukoyo; Samadhi, T. M. A. A.; Halim, A. H.
2016-02-01
In the manufacturing industry, several identical parts can be processed in batches, and a setup time is needed between two consecutive batches. Since the processing times of batches are not always fixed during a scheduling period, due to learning and deterioration effects, this research deals with batch scheduling problems under simultaneous learning and deterioration effects. The objective is to minimize total actual flow time, defined as the time interval between the arrival of all parts at the shop and their common due date. The decision variables are the number of batches, the integer batch sizes, and the sequence of the resulting batches. This research proposes a heuristic algorithm based on Lagrange relaxation. The effectiveness of the proposed algorithm is determined by comparing its solutions to the respective optimal solutions obtained from the enumeration method. Numerical experiments show that the average difference between the solutions is 0.05%.
A survey of planning and scheduling research at the NASA Ames Research Center
NASA Technical Reports Server (NTRS)
Zweben, Monte
1988-01-01
NASA Ames Research Center has a diverse program in planning and scheduling. This paper highlights some of our research projects as well as some of our applications. Topics addressed include machine learning techniques, action representations and constraint-based scheduling systems. The applications discussed are planetary rovers, Hubble Space Telescope scheduling, and Pioneer Venus orbit scheduling.
21 CFR 1310.16 - Exemptions for certain scheduled listed chemical products.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 9 2011-04-01 2011-04-01 false Exemptions for certain scheduled listed chemical... RECORDS AND REPORTS OF LISTED CHEMICALS AND CERTAIN MACHINES § 1310.16 Exemptions for certain scheduled listed chemical products. (a) Upon the application of a manufacturer of a scheduled listed chemical...
32 CFR 701.53 - FOIA fee schedule.
Code of Federal Regulations, 2014 CFR
2014-07-01
... human time) and machine time. (1) Human time. Human time is all the time spent by humans performing the...) Machine time. Machine time involves only direct costs of the central processing unit (CPU), input/output... exist to calculate CPU time, no machine costs can be passed on to the requester. When CPU calculations...
32 CFR 701.53 - FOIA fee schedule.
Code of Federal Regulations, 2012 CFR
2012-07-01
... human time) and machine time. (1) Human time. Human time is all the time spent by humans performing the...) Machine time. Machine time involves only direct costs of the central processing unit (CPU), input/output... exist to calculate CPU time, no machine costs can be passed on to the requester. When CPU calculations...
32 CFR 701.53 - FOIA fee schedule.
Code of Federal Regulations, 2013 CFR
2013-07-01
... human time) and machine time. (1) Human time. Human time is all the time spent by humans performing the...) Machine time. Machine time involves only direct costs of the central processing unit (CPU), input/output... exist to calculate CPU time, no machine costs can be passed on to the requester. When CPU calculations...
Constraint-Based Scheduling System
NASA Technical Reports Server (NTRS)
Zweben, Monte; Eskey, Megan; Stock, Todd; Taylor, Will; Kanefsky, Bob; Drascher, Ellen; Deale, Michael; Daun, Brian; Davis, Gene
1995-01-01
Report describes continuing development of software for constraint-based scheduling system implemented eventually on massively parallel computer. Based on machine learning as means of improving scheduling. Designed to learn when to change search strategy by analyzing search progress and learning general conditions under which resource bottleneck occurs.
Car painting process scheduling with harmony search algorithm
NASA Astrophysics Data System (ADS)
Syahputra, M. F.; Maiyasya, A.; Purnamawati, S.; Abdullah, D.; Albra, W.; Heikal, M.; Abdurrahman, A.; Khaddafi, M.
2018-02-01
An automotive painting program paints the car body using robots, making the production system more efficient. The production system becomes more efficient still if attention is paid to the scheduling of car orders, taking the body shape of each car into account. Flow shop scheduling is a scheduling model in which all jobs to be processed flow in the same product direction/path. Scheduling problems arise when there are n jobs to be processed on the machines: it must be specified which job is done first and how jobs are allocated to machines to obtain a scheduled production process. The Harmony Search Algorithm is a music-based metaheuristic optimization algorithm, inspired by the observation that musicians search for perfect harmony; this musical harmony is analogous to finding the optimum in an optimization process. Based on the tests performed, the optimal car sequence with the minimum makespan value was obtained.
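One improvisation step of harmony search can be sketched as follows, assuming a permutation encoding of the car (job) sequence: with probability HMCR a stored harmony is reused, with probability PAR it is pitch-adjusted by a swap, and otherwise a random sequence is played. The permutation-based variant and the parameter values are assumptions, not the paper's exact settings.

```python
# Sketch of one harmony-search improvisation step on a job permutation.

import random

def improvise(harmony_memory, hmcr=0.9, par=0.3):
    n = len(harmony_memory[0])
    if random.random() < hmcr:                 # memory consideration
        new = random.choice(harmony_memory)[:]
        if random.random() < par:              # pitch adjustment: swap two jobs
            i, j = random.sample(range(n), 2)
            new[i], new[j] = new[j], new[i]
        return new
    return random.sample(range(n), n)          # random playing

harmony_memory = [[0, 1, 2, 3], [2, 0, 3, 1], [1, 3, 0, 2]]  # car sequences
print(improvise(harmony_memory))
```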
Code of Federal Regulations, 2011 CFR
2011-07-01
... approved, accepted or certified under Bureau of Mines Schedule 2D, 2E, 2F, or 2G. 18.95 Section 18.95..., accepted or certified under Bureau of Mines Schedule 2D, 2E, 2F, or 2G. Machines for which field approval... 2D, 2E, 2F, or 2G, shall be approved following a determination by the electrical representative that...
Code of Federal Regulations, 2013 CFR
2013-07-01
... approved, accepted or certified under Bureau of Mines Schedule 2D, 2E, 2F, or 2G. 18.95 Section 18.95..., accepted or certified under Bureau of Mines Schedule 2D, 2E, 2F, or 2G. Machines for which field approval... 2D, 2E, 2F, or 2G, shall be approved following a determination by the electrical representative that...
Code of Federal Regulations, 2010 CFR
2010-07-01
... approved, accepted or certified under Bureau of Mines Schedule 2D, 2E, 2F, or 2G. 18.95 Section 18.95..., accepted or certified under Bureau of Mines Schedule 2D, 2E, 2F, or 2G. Machines for which field approval... 2D, 2E, 2F, or 2G, shall be approved following a determination by the electrical representative that...
Code of Federal Regulations, 2014 CFR
2014-07-01
... approved, accepted or certified under Bureau of Mines Schedule 2D, 2E, 2F, or 2G. 18.95 Section 18.95..., accepted or certified under Bureau of Mines Schedule 2D, 2E, 2F, or 2G. Machines for which field approval... 2D, 2E, 2F, or 2G, shall be approved following a determination by the electrical representative that...
Code of Federal Regulations, 2012 CFR
2012-07-01
... approved, accepted or certified under Bureau of Mines Schedule 2D, 2E, 2F, or 2G. 18.95 Section 18.95..., accepted or certified under Bureau of Mines Schedule 2D, 2E, 2F, or 2G. Machines for which field approval... 2D, 2E, 2F, or 2G, shall be approved following a determination by the electrical representative that...
Human-Machine Collaborative Optimization via Apprenticeship Scheduling
2016-09-09
Fragmentary excerpt: the report presents Human-Machine Collaborative Optimization via Apprenticeship Scheduling (COVAS), which performs machine learning using human expert demonstration, in conjunction with optimization, to automatically and efficiently produce optimal solutions to challenging real-world scheduling problems. COVAS first learns a policy from human scheduling demonstration via apprenticeship learning, then uses this initial solution to provide a tight bound on the value of the optimal solution, thereby substantially …
1990-10-01
Fragmentary excerpt: the report notes that scheduling approaches may be shaped by economic, technological, spatial, or logistic concerns, or involve training, man-machine interfaces, or integration into existing systems, and it categorizes scheduling approaches as probabilistic-reasoning, mixed analysis- and simulation-oriented, mixed computation- and communication-oriented, nonpreemptive static priority scheduling base, nonrandomized, preemptive static priority scheduling base, randomized, simulation-oriented, and static scheduling base.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dardalis, Dimitrios
2013-12-31
This report describes the work on converting a 4-cylinder Cummins ISB engine into a single-cylinder Rotating Liner Engine functioning prototype that can be used to measure the friction benefits of rotating the cylinder liner in a high-pressure compression ignition engine. A similar baseline engine was also prepared, and preliminary testing was done. Even though the fabrication of the single-cylinder prototype fell behind schedule due to machine shop delays, the fundamental soundness of the design elements is proven, and the engine has functioned successfully. However, the testing approach for the two engines, as envisioned by the original proposal, proved impossible due to torsional vibration resonance caused by the single active piston. A new approach for proper testing has been proposed.
Construction machine control guidance implementation strategy.
DOT National Transportation Integrated Search
2010-07-01
Machine Controlled Guidance (MCG) technology may be used in roadway and bridge construction to improve construction efficiencies, potentially resulting in reduced project costs and accelerated schedules. The technology utilizes a Global Positioning S...
Eroglu, Duygu Yilmaz; Ozmutlu, H Cenk
2014-01-01
We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job-sequence- and machine-dependent setup times and with the job splitting property. The first contribution of this paper is to introduce novel algorithms which perform splitting and scheduling simultaneously with a variable number of subjobs. We propose a simple chromosome structure constituted by random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when hybrid factors in local search are implemented. We developed algorithms that adapt the results of local search into the genetic algorithm with a minimum of relocation operations on the genes' random key numbers. This is the second contribution of the paper. The third contribution is three new MIP models which perform splitting and scheduling simultaneously. The fourth contribution is the implementation of GAspLAMIP. This implementation lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature, and the results validate the effectiveness of the proposed algorithms. PMID:24977204
NASA Astrophysics Data System (ADS)
Santosa, B.; Siswanto, N.; Fiqihesa
2018-04-01
This paper proposes a discrete Particle Swarm Optimization (PSO) to solve the limited-wait hybrid flowshop scheduling problem with multiple objectives. Flow shop scheduling represents the condition in which several machines are arranged in series and each job must be processed on each machine in the same sequence. The objective functions are minimizing completion time (makespan), total tardiness time, and total machine idle time. Flow shop scheduling models continue to grow to represent real production systems accurately. Since flow shop scheduling is an NP-hard problem, metaheuristics are the most suitable solution methods. One metaheuristic algorithm is Particle Swarm Optimization (PSO), which is based on the behavior of a swarm. Originally, PSO was intended to solve continuous optimization problems; since flow shop scheduling is a discrete optimization problem, we modify PSO to fit the problem using a probability transition matrix mechanism. To handle the multi-objective aspect, we use Pareto optimality (MPSO). MPSO outperforms plain PSO: its solution set has a higher probability of finding the optimal solution and lies closer to it.
NASA Astrophysics Data System (ADS)
Konno, Yohko; Suzuki, Keiji
This paper describes an approach to developing a general-purpose solution algorithm for large-scale problems, using Local Clustering Organization (LCO) as a new solution method for the Job-shop Scheduling Problem (JSP). Building on earlier LCO studies of performance-effective large-scale scheduling, we examine how to solve the JSP stably while inducing better solutions. To improve solution performance, the optimization process of LCO is examined, and the scheduling solution structure is extended to a new structure based on machine division. A solving method that introduces effective local clustering on this structure is proposed as an extended LCO; its algorithm efficiently improves the schedule evaluation by clustering a parallel search that extends over plural machines. Results from applying the extended LCO to problems of various scales show that it minimizes makespan and delivers stable performance.
Meta-RaPS Algorithm for the Aerial Refueling Scheduling Problem
NASA Technical Reports Server (NTRS)
Kaplan, Sezgin; Arin, Arif; Rabadi, Ghaith
2011-01-01
The Aerial Refueling Scheduling Problem (ARSP) can be defined as determining the refueling completion times for each fighter aircraft (job) on multiple tankers (machines). ARSP assumes that jobs have different release times and due dates, and the total weighted tardiness is used to evaluate schedule quality. Therefore, ARSP can be modeled as parallel machine scheduling with release times and due dates to minimize total weighted tardiness. Since ARSP is NP-hard, it is more appropriate to develop approximate or heuristic algorithms that obtain solutions in reasonable computation times. In this paper, the Meta-RaPS-ATC algorithm is implemented to create high-quality solutions. Meta-RaPS (Meta-heuristic for Randomized Priority Search) is a recent and promising metaheuristic that is applied by introducing randomness into a construction heuristic. The Apparent Tardiness Cost (ATC) rule, which performs well on scheduling problems with tardiness objectives, is used to construct initial solutions, which are then improved by an exchange operation. Results are presented for generated instances.
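A standard form of the ATC priority index (the construction rule Meta-RaPS randomizes) is I_j(t) = (w_j / p_j) · exp(−max(0, d_j − p_j − t) / (k·p̄)), where p̄ is the mean processing time and k a look-ahead parameter. The snippet below evaluates it; the parameter value and job data are illustrative.

```python
# Sketch of the Apparent Tardiness Cost (ATC) dispatching index.

import math

def atc_index(job, t, p_bar, k=2.0):
    """I_j(t) = (w/p) * exp(-max(0, d - p - t) / (k * p_bar))."""
    p, w, d = job["p"], job["w"], job["d"]
    slack = max(0.0, d - p - t)
    return (w / p) * math.exp(-slack / (k * p_bar))

jobs = [{"p": 4, "w": 2, "d": 10}, {"p": 3, "w": 1, "d": 6}]
p_bar = sum(j["p"] for j in jobs) / len(jobs)
# At t = 0, dispatch the job with the highest ATC urgency.
print(max(jobs, key=lambda j: atc_index(j, t=0, p_bar=p_bar)))
```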
30 CFR 75.209 - Automated Temporary Roof Support (ATRS) systems.
Code of Federal Regulations, 2012 CFR
2012-07-01
... paragraphs (b) and (c) of this section, an ATRS system shall be used with roof bolting machines and continuous-mining machines with integral roof bolters operated in a working section. The requirements of this paragraph shall be met according to the following schedule: (1) All new machines ordered after March 28...
30 CFR 75.209 - Automated Temporary Roof Support (ATRS) systems.
Code of Federal Regulations, 2013 CFR
2013-07-01
... paragraphs (b) and (c) of this section, an ATRS system shall be used with roof bolting machines and continuous-mining machines with integral roof bolters operated in a working section. The requirements of this paragraph shall be met according to the following schedule: (1) All new machines ordered after March 28...
30 CFR 75.209 - Automated Temporary Roof Support (ATRS) systems.
Code of Federal Regulations, 2014 CFR
2014-07-01
... paragraphs (b) and (c) of this section, an ATRS system shall be used with roof bolting machines and continuous-mining machines with integral roof bolters operated in a working section. The requirements of this paragraph shall be met according to the following schedule: (1) All new machines ordered after March 28...
Scheduling for Parallel Supercomputing: A Historical Perspective of Achievable Utilization
NASA Technical Reports Server (NTRS)
Jones, James Patton; Nitzberg, Bill
1999-01-01
The NAS facility has operated parallel supercomputers for the past 11 years, including the Intel iPSC/860, Intel Paragon, Thinking Machines CM-5, IBM SP-2, and Cray Origin 2000. Across this wide variety of machine architectures, across a span of 10 years, across a large number of different users, and through thousands of minor configuration and policy changes, the utilization of these machines shows three general trends: (1) scheduling using a naive FIFO first-fit policy results in 40-60% utilization, (2) switching to the more sophisticated dynamic backfilling scheduling algorithm improves utilization by about 15 percentage points (yielding about 70% utilization), and (3) reducing the maximum allowable job size further increases utilization. Most surprising is the consistency of these trends. Over the lifetime of the NAS parallel systems, we made hundreds, perhaps thousands, of small changes to hardware, software, and policy, yet, utilization was affected little. In particular these results show that the goal of achieving near 100% utilization while supporting a real parallel supercomputing workload is unrealistic.
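The backfilling step credited with the roughly 15-point utilization gain admits a compact statement: a waiting job may start ahead of the queue head only if it does not delay the head job's earliest reservation. The check below is an EASY-style simplification with assumed data shapes, not the NAS schedulers' actual code.

```python
# Sketch of an EASY-style backfill check against the head job's reservation.

def can_backfill(job, free_nodes, now, head_reservation):
    """job: (nodes_needed, est_runtime).
    head_reservation: (reserved_start, nodes_reserved) for the queue head."""
    nodes_needed, est_runtime = job
    reserved_start, nodes_reserved = head_reservation
    if nodes_needed > free_nodes:
        return False
    # Start now only if we finish before the head job's reserved start, or
    # if we leave enough free nodes for its reservation anyway.
    return (now + est_runtime <= reserved_start
            or nodes_needed <= free_nodes - nodes_reserved)

print(can_backfill((4, 15), free_nodes=6, now=100, head_reservation=(120, 5)))
# True: the 4-node job finishes at 115, before the head job's start at 120.
```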
Heuristic for Critical Machine Based a Lot Streaming for Two-Stage Hybrid Production Environment
NASA Astrophysics Data System (ADS)
Vivek, P.; Saravanan, R.; Chandrasekaran, M.; Pugazhenthi, R.
2017-03-01
Lot streaming in a hybrid flowshop (HFS) is encountered in many real-world problems. This paper deals with a heuristic approach to lot streaming based on critical machine considerations for a two-stage hybrid flowshop. The first stage has two identical parallel machines and the second stage has only one machine; the second-stage machine is considered critical for valid reasons, and such problems are known to be NP-hard. A mathematical model is developed for the selected problem. Simulation modelling and analysis were carried out in Extend V6 software, and a heuristic was developed for obtaining the optimal lot streaming schedule. Eleven cases of lot streaming were considered. The proposed heuristic was verified and validated by real-time simulation experiments: all possible lot streaming strategies, and all possible sequences under each strategy, were simulated and examined. The heuristic yielded the optimal schedule consistently in all eleven cases. A procedure for identifying the best lot streaming strategy is suggested.
NASA Astrophysics Data System (ADS)
Sembiring, N.; Panjaitan, N.; Saragih, A. F.
2018-02-01
PT. XYZ is a manufacturing company that processes fresh fruit bunches (FFB) into Crude Palm Oil (CPO) and Palm Kernel Oil (PKO). PT. XYZ consists of six work stations: the receipt station, sterilizing station, threshing station, pressing station, clarification station, and kernel station. So far, the company still applies a corrective maintenance system for production machines, in which a machine is repaired only after damage occurs. The problem at PT. XYZ is the absence of planned maintenance scheduling, so machines are often damaged, which disrupts the smoothness of production. Another problem addressed in this research is the kernel station environment, which has become inconvenient for operators: unused machines and equipment sit in the production area, the floor is slippery and muddy, fibers are scattered, PPE use is incomplete, and employee discipline is lacking. The most frequently damaged machine is the cake breaker conveyor at the seed processing (kernel) station. The proposed solution is a maintenance schedule plan based on the Reliability Centered Maintenance method together with the application of 5S. Applying Reliability Centered Maintenance identifies four components requiring scheduled (time-directed) treatment: the bearings every 37 days, the gearbox every 97 days, the CBC pins every 35 days, and the conveyor pedals every 32 days. The 5S analysis yields proposed workplace improvements in line with 5S principles: unused goods are moved out of the production area, items are grouped by use, cleaning procedures for the production area are defined, PPE use is inspected, and 5S slogans are posted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Novikov, V.
1991-05-01
The U.S. Army's detailed equipment decontamination process is a stochastic flow shop which has N independent non-identical jobs (vehicles) with overlapping processing times. This flow shop consists of up to six non-identical machines (stations). With the exception of one station, the processing times of the jobs are random variables. Based on an analysis of the processing times, the jobs for the 56 Army heavy division companies were scheduled according to the best shortest expected processing time - longest expected processing time (SEPT-LEPT) sequence. To assist in this scheduling, the Gap Comparison Heuristic was developed to select the best SEPT-LEPT schedule. This schedule was then used in balancing the detailed equipment decon line in order to find the best possible site configuration subject to several constraints. The detailed troop decon line, in which all jobs are independent and identically distributed, was then balanced. Lastly, an NBC decon optimization computer program was developed using the scheduling and line balancing results. This program serves as a prototype module for the ANBACIS automated NBC decision support system. Keywords: decontamination; stochastic flow shop; scheduling; stochastic scheduling; minimization of the makespan; SEPT-LEPT sequences; flow shop line balancing; ANBACIS.
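A SEPT-LEPT sequence can be sketched simply: the early positions run shortest-expected-processing-time first, and the final positions run longest-first. The split point below is a free parameter standing in for whatever the report's Gap Comparison Heuristic selects; the function is illustrative only.

```python
# Sketch of a SEPT-LEPT sequence over expected processing times.

def sept_lept(expected_times, n_lept):
    """SEPT prefix, then the last n_lept jobs in LEPT (longest-first) order."""
    by_ept = sorted(range(len(expected_times)), key=lambda j: expected_times[j])
    if n_lept == 0:
        return by_ept
    head, tail = by_ept[:-n_lept], by_ept[-n_lept:]
    return head + tail[::-1]

print(sept_lept([3.2, 1.5, 4.0, 2.1, 5.7], n_lept=2))  # [1, 3, 0, 4, 2]
```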
Multiagent scheduling method with earliness and tardiness objectives in flexible job shops.
Wu, Zuobao; Weng, Michael X
2005-04-01
Flexible job-shop scheduling problems are an important extension of the classical job-shop scheduling problems and present additional complexity, mainly due to the considerable overlap of capacities among modern machines. Classical scheduling methods are generally incapable of addressing such capacity overlap. We propose a multiagent scheduling method with job earliness and tardiness objectives in a flexible job-shop environment. The earliness and tardiness objectives are consistent with the just-in-time production philosophy, which has attracted significant attention in both industry and the academic community. A new job-routing and sequencing mechanism is proposed. In this mechanism, two kinds of jobs are defined, distinguishing jobs with one operation left from jobs with more than one operation left, and different criteria are proposed to route these two kinds of jobs. Job sequencing makes it possible to hold a job that would otherwise be completed too early. Two heuristic algorithms for job sequencing are developed to deal with the two kinds of jobs. Computational experiments show that the proposed multiagent scheduling method significantly outperforms the existing scheduling methods in the literature. In addition, the proposed method is quite fast: the simulation time to find a complete schedule with over 2000 jobs on ten machines is less than 1.5 min.
Optimized Hypervisor Scheduler for Parallel Discrete Event Simulations on Virtual Machine Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoginath, Srikanth B; Perumalla, Kalyan S
2013-01-01
With the advent of virtual machine (VM)-based platforms for parallel computing, it is now possible to execute parallel discrete event simulations (PDES) over multiple virtual machines, in contrast to executing in native mode directly over hardware as has traditionally been done over the past decades. While mature VM-based parallel systems now offer new, compelling benefits such as serviceability, dynamic reconfigurability and overall cost effectiveness, the runtime performance of parallel applications can be significantly affected. In particular, most VM-based platforms are optimized for general workloads, but PDES execution exhibits unique dynamics significantly different from other workloads. Here we first present results from experiments that highlight the gross deterioration of the runtime performance of VM-based PDES simulations when executed using traditional VM schedulers, quantitatively showing the bad scaling properties of the scheduler as the number of VMs is increased. The mismatch is fundamental in nature in the sense that any fairness-based VM scheduler implementation would exhibit this mismatch with PDES runs. We also present a new scheduler optimized specifically for PDES applications, and describe its design and implementation. Experimental results obtained from running PDES benchmarks (PHOLD and vehicular traffic simulations) over VMs show over an order of magnitude improvement in the run time of the PDES-optimized scheduler relative to the regular VM scheduler, with over 20× reduction in the run time of simulations using up to 64 VMs. The observations and results are timely in the context of emerging systems such as cloud platforms and VM-based high performance computing installations, highlighting to the community the need for PDES-specific support, and the feasibility of significantly reducing the runtime overhead for scalable PDES on VM platforms.
A hybrid dynamic harmony search algorithm for identical parallel machines scheduling
NASA Astrophysics Data System (ADS)
Chen, Jing; Pan, Quan-Ke; Wang, Ling; Li, Jun-Qing
2012-02-01
In this article, a dynamic harmony search (DHS) algorithm is proposed for the identical parallel machines scheduling problem with the objective to minimize makespan. First, an encoding scheme based on a list scheduling rule is developed to convert the continuous harmony vectors to discrete job assignments. Second, the whole harmony memory (HM) is divided into multiple small-sized sub-HMs, and each sub-HM performs evolution independently and exchanges information with others periodically by using a regrouping schedule. Third, a novel improvisation process is applied to generate a new harmony by making use of the information of harmony vectors in each sub-HM. Moreover, a local search strategy is presented and incorporated into the DHS algorithm to find promising solutions. Simulation results show that the hybrid DHS (DHS_LS) is very competitive in comparison to its competitors in terms of mean performance and average computational time.
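The encoding scheme the abstract describes — converting a continuous harmony vector into discrete job assignments via a list scheduling rule — can be sketched as: rank jobs by their harmony values, then assign each in turn to the machine that frees up earliest. The ranking direction and data shapes are assumptions for illustration.

```python
# Sketch: decode a continuous harmony vector into parallel-machine assignments
# with a list scheduling rule; makespan is the largest machine load.

def decode(harmony, proc_times, n_machines):
    order = sorted(range(len(harmony)), key=lambda j: -harmony[j])
    loads = [0.0] * n_machines
    assignment = {}
    for j in order:
        m = loads.index(min(loads))      # earliest-available machine
        assignment[j] = m
        loads[m] += proc_times[j]
    return assignment, max(loads)        # job -> machine, and the makespan

harmony = [0.7, 0.1, 0.9, 0.4]           # one continuous harmony vector
print(decode(harmony, proc_times=[3, 2, 4, 3], n_machines=2))
```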
Machine learning in updating predictive models of planning and scheduling transportation projects
DOT National Transportation Integrated Search
1997-01-01
A method combining machine learning and regression analysis to automatically and intelligently update predictive models used in the Kansas Department of Transportation's (KDOT's) internal management system is presented. The predictive models used...
30 CFR 18.97 - Inspection of machines; minimum requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... all electrical components for materials, workmanship, design, and construction; (2) Examination of all components of the machine which have been approved or certified under Bureau of Mines Schedule 2D, 2E, 2F, or...
30 CFR 18.97 - Inspection of machines; minimum requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... all electrical components for materials, workmanship, design, and construction; (2) Examination of all components of the machine which have been approved or certified under Bureau of Mines Schedule 2D, 2E, 2F, or...
30 CFR 18.97 - Inspection of machines; minimum requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... all electrical components for materials, workmanship, design, and construction; (2) Examination of all components of the machine which have been approved or certified under Bureau of Mines Schedule 2D, 2E, 2F, or...
Virtual Mission Operations of Remote Sensors With Rapid Access To and From Space
NASA Technical Reports Server (NTRS)
Ivancic, William D.; Stewart, Dave; Walke, Jon; Dikeman, Larry; Sage, Steven; Miller, Eric; Northam, James; Jackson, Chris; Taylor, John; Lynch, Scott;
2010-01-01
This paper describes network-centric operations, where a virtual mission operations center autonomously receives sensor triggers, and schedules space and ground assets using Internet-based technologies and service-oriented architectures. For proof-of-concept purposes, sensor triggers are received from the United States Geological Survey (USGS) to determine targets for space-based sensors. The Surrey Satellite Technology Limited (SSTL) Disaster Monitoring Constellation satellite, the United Kingdom Disaster Monitoring Constellation (UK-DMC), is used as the space-based sensor. The UK-DMC's availability is determined via machine-to-machine communications using SSTL's mission planning system. Access to/from the UK-DMC for tasking and sensor data is via SSTL's and Universal Space Network's (USN) ground assets. The availability and scheduling of USN's assets can also be performed autonomously via machine-to-machine communications. All communication, both on the ground and between ground and space, uses open Internet standards.
Research on schedulers for astronomical observatories
NASA Astrophysics Data System (ADS)
Colome, Josep; Colomer, Pau; Guàrdia, Josep; Ribas, Ignasi; Campreciós, Jordi; Coiffard, Thierry; Gesa, Lluis; Martínez, Francesc; Rodler, Florian
2012-09-01
The main task of a scheduler applied to astronomical observatories is the time optimization of the facility and the maximization of the scientific return. Scheduling of astronomical observations is an example of the classical task allocation problem known as the job-shop problem (JSP), where N ideal tasks are assigned to M identical resources while minimizing the total execution time. A problem of higher complexity, called the Flexible-JSP (FJSP), arises when the tasks can be executed by different resources, i.e., by different telescopes; it focuses on determining a routing policy (i.e., which machine to assign to each operation) in addition to the traditional scheduling decisions (i.e., determining the starting time of each operation). In most cases there is no single best approach to the planning problem, and therefore various mathematical algorithms (Genetic Algorithms, Ant Colony Optimization algorithms, Multi-Objective Evolutionary algorithms, etc.) are usually considered to adapt the application to the system configuration and task execution constraints. The scheduling time-cycle is also an important ingredient in determining the best approach. A short-term scheduler, for instance, has to find a good solution with minimum computation time, providing the system with the capability to adapt the selected task to varying execution constraints (i.e., environment conditions). We present in this contribution an analysis of the task allocation problem and the solutions currently in use at different astronomical facilities. We also describe the schedulers for three different projects (CTA, CARMENES and TJO), where the conclusions of this analysis are applied to develop a suitable scheduling routine.
Frutos, M; Méndez, M; Tohmé, F; Broz, D
2013-01-01
Many of the problems that arise in production systems can be handled with multiobjective techniques. One of those problems is that of scheduling operations subject to constraints on the availability of machines and buffer capacity. In this paper we analyze different multiobjective evolutionary algorithms (MOEAs) for this kind of problem. We consider an experimental framework in which we schedule production operations for four real-world job-shop contexts using three algorithms: NSGAII, SPEA2, and IBEA. Using two performance indexes, Hypervolume and R2, we found that SPEA2 and IBEA are the most efficient for the tasks at hand. On the other hand, IBEA seems to be the better choice of tool since it yields more solutions on the approximate Pareto frontier. PMID:24489502
Efficiently Scheduling Multi-core Guest Virtual Machines on Multi-core Hosts in Network Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoginath, Srikanth B; Perumalla, Kalyan S
2011-01-01
Virtual machine (VM)-based simulation is a method used by network simulators to incorporate realistic application behaviors by executing actual VMs as high-fidelity surrogates for simulated end-hosts. A critical requirement in such a method is the simulation time-ordered scheduling and execution of the VMs. Prior approaches such as time dilation are less efficient due to the high degree of multiplexing possible when multiple multi-core VMs are simulated on multi-core host systems. We present a new simulation time-ordered scheduler to efficiently schedule multi-core VMs on multi-core real hosts, with a virtual clock realized on each virtual core. The distinguishing features of our approach are: (1) customizable granularity of the VM scheduling time unit on the simulation time axis, (2) ability to take arbitrary leaps in virtual time by VMs to maximize the utilization of host (real) cores when guest virtual cores idle, and (3) empirically determinable optimality in the tradeoff between total execution (real) time and time-ordering accuracy levels. Experiments show that it is possible to get nearly perfect time-ordered execution, with a slight cost in total run time, relative to optimized non-simulation VM schedulers. Interestingly, with our time-ordered scheduler, it is also possible to reduce the time-ordering error from over 50% of the non-simulation scheduler to less than 1% realized by our scheduler, with almost the same run time efficiency as that of the highly efficient non-simulation VM schedulers.
A Genetic Algorithm for Flow Shop Scheduling with Assembly Operations to Minimize Makespan
NASA Astrophysics Data System (ADS)
Bhongade, A. S.; Khodke, P. M.
2014-04-01
Manufacturing systems in which several parts are processed through machining workstations and later assembled to form final products are common. Though scheduling of such problems is solved using heuristics, available solution approaches can handle only moderate-sized problems due to the large computation time required. In this work, a scheduling approach is developed for flow-shop manufacturing systems having machining workstations followed by assembly workstations. The initial schedule is generated using the disjunctive method, and a genetic algorithm (GA) is then applied to generate schedules for large-sized problems. The GA is found to give near-optimal solutions based on the deviation of makespan from the lower bound. The lower bound on makespan is estimated, and the percent deviation of makespan from the lower bound is used as a performance measure to evaluate the schedules. Computational experiments are conducted on problems developed using a fractional factorial orthogonal array, varying the number of parts per product, the number of products, and the number of workstations (ranging up to 1,520 operations). A statistical analysis indicated the significance of all three factors considered. It is concluded that the GA method can obtain the optimal makespan.
Constraint monitoring in TOSCA
NASA Technical Reports Server (NTRS)
Beck, Howard
1992-01-01
The Job-Shop Scheduling Problem (JSSP) deals with the allocation of resources over time to factory operations. Allocations are subject to various constraints (e.g., production precedence relationships, factory capacity constraints, and limits on the allowable number of machine setups) which must be satisfied for a schedule to be valid. The identification of constraint violations and the monitoring of constraint threats plays a vital role in schedule generation in terms of the following: (1) directing the scheduling process; and (2) informing scheduling decisions. This paper describes a general mechanism for identifying constraint violations and monitoring threats to the satisfaction of constraints throughout schedule generation.
2008-03-01
Fragmentary excerpt: the report covers order fulfillment visibility, Kanban deployment, visual inventory counting, machine and tool labeling, costs, preventive maintenance, computer scheduling versus Kanban, pull versus push systems, flow time efficiencies, back-room scheduling costs, and MRP costs.
An efficient annealing in Boltzmann machine in Hopfield neural network
NASA Astrophysics Data System (ADS)
Kin, Teoh Yeong; Hasan, Suzanawati Abu; Bulot, Norhisam; Ismail, Mohammad Hafiz
2012-09-01
This paper proposes and implements a Boltzmann machine in a Hopfield neural network for logic programming based on an energy minimization system. The temperature scheduling in the Boltzmann machine enhances the performance of logic programming in the Hopfield neural network. The best temperature is determined by observing the ratio of global solutions and the final Hamming distance in computer simulations. The study shows that the Boltzmann machine model is more stable and competent in terms of representing and solving difficult combinatorial problems.
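The mechanism named in the abstract, a temperature schedule governing Boltzmann-style acceptance during energy minimization, can be sketched as follows; the geometric cooling rate, the toy energy function, and the neighborhood move are assumptions for illustration rather than the paper's settings.

```python
# Illustrative sketch of a temperature schedule driving stochastic (Boltzmann)
# acceptance during energy minimization. Cooling rate and energy are assumed.
import math, random

def boltzmann_anneal(energy, neighbor, state, t0=10.0, t_min=1e-3, alpha=0.95):
    t = t0
    while t > t_min:
        candidate = neighbor(state)
        delta = energy(candidate) - energy(state)
        # Accept uphill moves with Boltzmann probability exp(-delta / t).
        if delta <= 0 or random.random() < math.exp(-delta / t):
            state = candidate
        t *= alpha  # geometric cooling schedule (an assumption)
    return state

# Toy usage: minimize a 1-D quadratic "energy".
best = boltzmann_anneal(lambda x: (x - 3) ** 2,
                        lambda x: x + random.uniform(-1, 1), 0.0)
print(round(best, 2))
```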
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1978-06-01
Following a planning period during which the Lawrence Livermore Laboratory and the Department of Defense managing sponsor, the USAF Materials Laboratory, agreed on work statements, the Department of Defense Tri-Service Precision Machine-Tool Program began in February 1978. Milestones scheduled for the first quarter have been met. Tasks and manpower requirements for two basic projects, precision-machining commercialization (PMC) and a machine-tool task force (MTTF), were defined. Progress by PMC includes: (1) documentation of existing precision machine-tool technology by initiation and compilation of a bibliography containing several hundred entries; (2) identification of the problems and needs of precision turning-machine builders and of precision turning-machine users interested in developing high-precision machining capability; and (3) organization of the schedule and content of the first seminar, to be held in October 1978, which will bring together representatives from the machine-tool and optics communities to address the problems and begin the process of high-precision machining commercialization. Progress by MTTF includes: (1) planning for the organization of a team effort of approximately 60 to 80 international experts to contribute in various ways to project objectives, namely, to summarize state-of-the-art cutting-machine-tool technology and to identify areas where future R and D should prove technically and economically profitable; (2) preparation of a comprehensive plan to achieve those objectives; and (3) preliminary arrangements for a plenary session, also in October, when the task force will meet to formalize the details for implementing the plan.
Integration of virtualized worker nodes in standard batch systems
NASA Astrophysics Data System (ADS)
Büge, Volker; Hessling, Hermann; Kemp, Yves; Kunze, Marcel; Oberst, Oliver; Quast, Günter; Scheurer, Armin; Synge, Owen
2010-04-01
Current experiments in HEP only use a limited number of operating system flavours. Their software might only be validated on one single OS platform. Resource providers might have other operating systems of choice for the installation of the batch infrastructure. This is especially the case if a cluster is shared with other communities, or communities that have stricter security requirements. One solution would be to statically divide the cluster into separated sub-clusters. In such a scenario, no opportunistic distribution of the load can be achieved, resulting in a poor overall utilization efficiency. Another approach is to make the batch system aware of virtualization, and to provide each community with its favoured operating system in a virtual machine. Here, the scheduler has full flexibility, resulting in a better overall efficiency of the resources. In our contribution, we present a lightweight concept for the integration of virtual worker nodes into standard batch systems. The virtual machines are started on the worker nodes just before jobs are executed there. No meta-scheduling is introduced. We demonstrate two prototype implementations, one based on the Sun Grid Engine (SGE), the other using Maui/Torque as a batch system. Both solutions support local job as well as Grid job submission. The hypervisors currently used are Xen and KVM, a port to another system is easily envisageable. To better handle different virtual machines on the physical host, the management solution VmImageManager is developed. We will present first experience from running the two prototype implementations. In a last part, we will show the potential future use of this lightweight concept when integrated into high-level (i.e. Grid) work-flows.
Modeling of a production system using the multi-agent approach
NASA Astrophysics Data System (ADS)
Gwiazda, A.; Sękala, A.; Banaś, W.
2017-08-01
A method that allows for the analysis of complex systems is multi-agent simulation. Agent-based modeling and simulation (ABMS) is the modeling of complex systems consisting of independent agents. In a model of a production system, the agents may be the manufactured pieces, set apart from other types of agents such as machine tools, conveyors, or reorientation stands; magazines and buffers are agents as well. More generally, the agents in a model can be single individuals, but collective entities can also be defined as agents, and hierarchical structures are allowed, meaning that a single agent can belong to a certain class. Depending on the needs, an agent may also be a natural or physical resource. From a technical point of view, an agent is a bundle of data and rules describing its behavior in different situations. Agents can be autonomous or non-autonomous in making decisions about the types and sizes of agent classes and the types of connections between elements of the system. Multi-agent modeling is a very flexible technique, and a model created in this convention can be adapted to any research problem analyzed from different points of view. One of the major problems associated with the organization of production is the spatial organization of the production process; it is also important to include optimal scheduling. For this purpose a multi-purpose approach can be used. In this regard, the model of the production process refers to the design and scheduling of a production space for four different elements. The program system was developed in the NetLogo environment, and elements of artificial intelligence were also used. The main agent represents the manufactured pieces that, according to previously assumed rules, generate the technological route and allow the schedule of that line to be printed. Machine lines, reorientation stands, conveyors, and transport devices represent the other types of agents utilized in the described simulation. The article presents the idea of an integrated program approach and shows the resulting production layout as a virtual model developed in the NetLogo multi-agent program environment.
2000-04-01
be an extension of Utah’s nascent Quarks system, oriented to closely coupled cluster environments. However, the grant did not actually begin until ... Intel x86, implemented ten virtual machine monitors and servers, including a virtual memory manager, a checkpointer, a process manager, a file server ... Fluke, we developed a novel hierarchical processor scheduling framework called CPU inheritance scheduling [5]. This is a framework for scheduling
Wave scheduling - Decentralized scheduling of task forces in multicomputers
NASA Technical Reports Server (NTRS)
Van Tilborg, A. M.; Wittie, L. D.
1984-01-01
Decentralized operating systems that control large multicomputers need techniques to schedule competing parallel programs called task forces. Wave scheduling is a probabilistic technique that uses a hierarchical distributed virtual machine to schedule task forces by recursively subdividing and issuing wavefront-like commands to processing elements capable of executing individual tasks. Wave scheduling is highly resistant to processing element failures because it uses many distributed schedulers that dynamically assign scheduling responsibilities among themselves. The scheduling technique is trivially extensible as more processing elements join the host multicomputer. A simple model of scheduling cost is used by every scheduler node to distribute scheduling activity and minimize wasted processing capacity by using perceived workload to vary decentralized scheduling rules. At low to moderate levels of network activity, wave scheduling is only slightly less efficient than a central scheduler in its ability to direct processing elements to accomplish useful work.
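A hedged sketch of the wavefront idea just described: a scheduler node either executes a small task force locally or recursively subdivides it among child schedulers. The tree fan-out, capacity rule, and round-robin split below are illustrative assumptions, not the paper's cost model.

```python
# Sketch of recursive, wavefront-like task-force subdivision over a scheduler
# tree. Capacities, names, and the round-robin split are invented for clarity.
def wave_schedule(node, tasks):
    if len(tasks) <= node["capacity"] or not node["children"]:
        return {node["name"]: tasks}           # leaf: execute tasks here
    assignment = {}
    k = len(node["children"])
    for i, child in enumerate(node["children"]):
        share = tasks[i::k]                    # subdivide the task force
        assignment.update(wave_schedule(child, share))
    return assignment

cluster = {"name": "root", "capacity": 1, "children": [
    {"name": "pe1", "capacity": 2, "children": []},
    {"name": "pe2", "capacity": 2, "children": []},
]}
print(wave_schedule(cluster, ["t1", "t2", "t3", "t4"]))
```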
Analysis of tasks for dynamic man/machine load balancing in advanced helicopters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jorgensen, C.C.
1987-10-01
This report considers task allocation requirements imposed by advanced helicopter designs incorporating mixes of human pilots and intelligent machines. Specifically, it develops an analogy between load balancing using distributed non-homogeneous multiprocessors and human team functions. A taxonomy is presented that can be used to identify task combinations likely to cause overload for dynamic scheduling and process allocation mechanisms. Designer criteria are given for function decomposition, separation of control from data, and communication handling for dynamic tasks. Possible effects of NP-complete scheduling problems are noted, and a class of combinatorial optimization methods is examined.
NASA Astrophysics Data System (ADS)
Mirabi, Mohammad; Fatemi Ghomi, S. M. T.; Jolai, F.
2014-04-01
The flow-shop scheduling problem (FSP) deals with the scheduling of a set of n jobs that visit a set of m machines in the same order. As the FSP is NP-hard, there is no efficient algorithm to reach the optimal solution of the problem. To minimize the holding, delay, and setup costs of large permutation flow-shop scheduling problems with sequence-dependent setup times on each machine, this paper develops a novel hybrid genetic algorithm (HGA) with three genetic operators. The proposed HGA applies a modified approach to generate a pool of initial solutions and uses an improved heuristic, called the iterated swap procedure, to improve the initial solutions. We consider a make-to-order production approach in which some sequences between jobs are treated as tabu based on a maximum allowable setup cost. In addition, the results are compared to some recently developed heuristics, and computational experiments show that the proposed HGA performs very competitively with respect to accuracy and efficiency of solution.
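One plausible reading of the iterated swap procedure named above, sketched under the assumption that it repeatedly applies improving pairwise swaps to a job permutation; the cost function here is invented.

```python
# Hedged sketch of an "iterated swap" style improvement step: try pairwise
# swaps in a job permutation and keep those that reduce cost, until no swap
# improves. An interpretation of the heuristic, not the authors' code.
def iterated_swap(order, cost):
    order = list(order)
    best = cost(order)
    improved = True
    while improved:
        improved = False
        for i in range(len(order) - 1):
            for j in range(i + 1, len(order)):
                order[i], order[j] = order[j], order[i]
                c = cost(order)
                if c < best:
                    best, improved = c, True
                else:
                    order[i], order[j] = order[j], order[i]  # undo the swap
    return order, best

# Toy usage with an invented cost: weighted position penalty.
jobs = [2, 0, 3, 1]
cost = lambda seq: sum((pos + 1) * job for pos, job in enumerate(seq))
print(iterated_swap(jobs, cost))
```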
Multiplexing Low and High QoS Workloads in Virtual Environments
NASA Astrophysics Data System (ADS)
Verboven, Sam; Vanmechelen, Kurt; Broeckhove, Jan
Virtualization technology has introduced new ways for managing IT infrastructure. The flexible deployment of applications through self-contained virtual machine images has removed the barriers for multiplexing, suspending and migrating applications with their entire execution environment, allowing for a more efficient use of the infrastructure. These developments have given rise to an important challenge regarding the optimal scheduling of virtual machine workloads. In this paper, we specifically address the VM scheduling problem in which workloads that require guaranteed levels of CPU performance are mixed with workloads that do not require such guarantees. We introduce a framework to analyze this scheduling problem and evaluate to what extent such mixed service delivery is beneficial for a provider of virtualized IT infrastructure. Traditionally providers offer IT resources under a guaranteed and fixed performance profile, which can lead to underutilization. The findings of our simulation study show that through proper tuning of a limited set of parameters, the proposed scheduling algorithm allows for a significant increase in utilization without sacrificing on performance dependability.
Linux Kernel Co-Scheduling For Bulk Synchronous Parallel Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Terry R
2011-01-01
This paper describes a kernel scheduling algorithm that is based on co-scheduling principles and that is intended for parallel applications running on 1000 cores or more, where inter-node scalability is key. Experimental results for a Linux implementation on a Cray XT5 machine are presented. The results indicate that Linux is a suitable operating system for this new scheduling scheme, and that this design provides a dramatic improvement in scaling performance for synchronizing collective operations at scale.
20 CFR 402.165 - Fee schedule.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 2 2011-04-01 2011-04-01 false Fee schedule. 402.165 Section 402.165 Employees' Benefits SOCIAL SECURITY ADMINISTRATION AVAILABILITY OF INFORMATION AND RECORDS TO THE PUBLIC... costs of operating the machine, plus the actual cost of the materials used, plus charges for the time...
Reactive Scheduling in Multipurpose Batch Plants
NASA Astrophysics Data System (ADS)
Narayani, A.; Shaik, Munawar A.
2010-10-01
Scheduling is an important operation in the process industries for improving resource utilization, resulting in direct economic benefits. It has the two-fold objective of fulfilling customer orders within the specified time and maximizing plant profit. Unexpected disturbances such as machine breakdowns, the arrival of rush orders, and the cancellation of orders affect the schedule of the plant. Reactive scheduling is the generation of a new schedule that deviates minimally from the original schedule despite the occurrence of unexpected events in plant operation. Recently, Shaik & Floudas (2009) proposed a novel unified model for short-term scheduling of multipurpose batch plants using unit-specific event-based continuous time representation. In this paper, we extend the model of Shaik & Floudas (2009) to handle reactive scheduling.
1984-06-29
sheet metal, machined and composite parts and assembling the components into final products ... planning, evaluating, testing, inspecting and ... Research showed that current programs were pursuing the design and demonstration of integrated centers for sheet metal, machining and composite ... determine any metal parts required and to schedule these requirements from the machining center. Figure 3-33, Planned Composite Production, shows
Reasoning about real-time systems with temporal interval logic constraints on multi-state automata
NASA Technical Reports Server (NTRS)
Gabrielian, Armen
1991-01-01
Models of real-time systems using a single paradigm often turn out to be inadequate, whether the paradigm is based on states, rules, event sequences, or logic. A model-based approach to reasoning about real-time systems is presented in which a temporal interval logic called TIL is employed to define constraints on a new type of high level automata. The combination, called hierarchical multi-state (HMS) machines, can be used to model formally a real-time system, a dynamic set of requirements, the environment, heuristic knowledge about planning-related problem solving, and the computational states of the reasoning mechanism. In this framework, mathematical techniques were developed for: (1) proving the correctness of a representation; (2) planning of concurrent tasks to achieve goals; and (3) scheduling of plans to satisfy complex temporal constraints. HMS machines allow reasoning about a real-time system from a model of how truth arises instead of merely depending of what is true in a system.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 41 Public Contracts and Property Management 2 2011-07-01 2007-07-01 true Requisitioning tabulating... Contracts and Property Management Federal Property Management Regulations System FEDERAL PROPERTY MANAGEMENT... electrical and mechanical contact tabulating machines, including aperture cards and copy cards. Federal...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 41 Public Contracts and Property Management 2 2010-07-01 2010-07-01 true Requisitioning tabulating... Contracts and Property Management Federal Property Management Regulations System FEDERAL PROPERTY MANAGEMENT... electrical and mechanical contact tabulating machines, including aperture cards and copy cards. Federal...
NASA Astrophysics Data System (ADS)
Jiang, Fuhong; Zhang, Xingong; Bai, Danyu; Wu, Chin-Chia
2018-04-01
In this article, a competitive two-agent scheduling problem in a two-machine open shop is studied. The objective is to minimize the weighted sum of the makespans of two competitive agents. A complexity proof is presented for minimizing the weighted combination of the makespan of each agent if the weight α belonging to agent B is arbitrary. Furthermore, two pseudo-polynomial-time algorithms using the largest alternate processing time (LAPT) rule are presented. Finally, two approximation algorithms are presented if the weight is equal to one. Additionally, another approximation algorithm is presented if the weight is larger than one.
Run-time scheduling and execution of loops on message passing machines
NASA Technical Reports Server (NTRS)
Crowley, Kay; Saltz, Joel; Mirchandaney, Ravi; Berryman, Harry
1989-01-01
Sparse system solvers and general purpose codes for solving partial differential equations are examples of the many types of problems whose irregularity can result in poor performance on distributed memory machines. Often, the data structures used in these problems are very flexible. Crucial details concerning loop dependences are encoded in these structures rather than being explicitly represented in the program. Good methods for parallelizing and partitioning these types of problems require assignment of computations in rather arbitrary ways. Naive implementations of programs on distributed memory machines requiring general loop partitions can be extremely inefficient. Instead, the scheduling mechanism needs to capture the data reference patterns of the loops in order to partition the problem. First, the indices assigned to each processor must be locally numbered. Next, it is necessary to precompute what information is needed by each processor at various points in the computation. The precomputed information is then used to generate an execution template designed to carry out the computation, communication, and partitioning of data, in an optimized manner. The design is presented for a general preprocessor and schedule executer, the structures of which do not vary, even though the details of the computation and of the type of information are problem dependent.
Run-time scheduling and execution of loops on message passing machines
NASA Technical Reports Server (NTRS)
Saltz, Joel; Crowley, Kathleen; Mirchandaney, Ravi; Berryman, Harry
1990-01-01
Sparse system solvers and general purpose codes for solving partial differential equations are examples of the many types of problems whose irregularity can result in poor performance on distributed memory machines. Often, the data structures used in these problems are very flexible. Crucial details concerning loop dependences are encoded in these structures rather than being explicitly represented in the program. Good methods for parallelizing and partitioning these types of problems require assignment of computations in rather arbitrary ways. Naive implementations of programs on distributed memory machines requiring general loop partitions can be extremely inefficient. Instead, the scheduling mechanism needs to capture the data reference patterns of the loops in order to partition the problem. First, the indices assigned to each processor must be locally numbered. Next, it is necessary to precompute what information is needed by each processor at various points in the computation. The precomputed information is then used to generate an execution template designed to carry out the computation, communication, and partitioning of data, in an optimized manner. The design is presented for a general preprocessor and schedule executer, the structures of which do not vary, even though the details of the computation and of the type of information are problem dependent.
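The preprocessing strategy described above is commonly called the inspector/executor pattern; the sketch below illustrates it under simplifying assumptions (a block distribution and toy fetch/compute callbacks) and is not the authors' preprocessor.

```python
# Hedged sketch of the inspector/executor pattern: a preprocessing pass
# inspects the loop's data references to build a communication schedule,
# which the executor then reuses for every iteration of the loop.
def inspect(indices, owner):
    """Precompute which elements must be fetched from which processor."""
    schedule = {}
    for i in indices:
        schedule.setdefault(owner(i), []).append(i)
    return schedule

def execute(schedule, fetch, compute):
    """Carry out the precomputed communication, then the local computation."""
    halo = {i: fetch(p, i) for p, idxs in schedule.items() for i in idxs}
    return compute(halo)

# Toy usage: indices 0..7 block-distributed over 2 processors (an assumption).
owner = lambda i: i // 4
sched = inspect(range(8), owner)
result = execute(sched, fetch=lambda p, i: i * i, compute=lambda h: sum(h.values()))
print(sched, result)
```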
NASA Astrophysics Data System (ADS)
Sembiring, N.; Ginting, E.; Darnello, T.
2017-12-01
A problem that appears in a company producing refined sugar is that the production floor has not reached the required level of critical machine availability because the machines often suffer breakdowns. This results in sudden losses of production time and production opportunities. The problem can be addressed with reliability engineering methods, in which a statistical approach to historical failure data is used to identify the pattern of its distribution. The method can provide values for the reliability, failure rate, and availability of a machine during the scheduled maintenance interval. Distribution tests on the time-between-failures (MTTF) data show that the flexible hose component follows a lognormal distribution while the teflon cone lifting component follows a Weibull distribution; for the time-to-repair (MTTR) data, the flexible hose component follows an exponential distribution while the teflon cone lifting component follows a Weibull distribution. For the flexible hose component on a replacement schedule of every 720 hours, the obtained reliability is 0.2451 and the availability 0.9960; for the critical teflon cone lifting component on a replacement schedule of every 1944 hours, the reliability is 0.4083 and the availability 0.9927.
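As a worked illustration of the reported quantities: for a Weibull time-to-failure model, reliability at the replacement interval is R(t) = exp(-(t/η)^β), and steady-state availability is MTTF/(MTTF+MTTR). The parameter values below are assumptions, not the paper's fitted ones.

```python
# Worked sketch of the quantities reported above. Only the formulas are
# standard; the Weibull and MTTF/MTTR parameters below are assumed, not fitted.
import math

def weibull_reliability(t, beta, eta):
    """R(t) = exp(-(t/eta)^beta) for a Weibull time-to-failure model."""
    return math.exp(-((t / eta) ** beta))

def availability(mttf, mttr):
    """Steady-state availability = fraction of time the machine is up."""
    return mttf / (mttf + mttr)

# Assumed parameters for a component replaced every 1944 hours.
print(round(weibull_reliability(1944, beta=1.5, eta=2500), 4))
print(round(availability(mttf=1900, mttr=14), 4))
```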
A new scheduling algorithm for parallel sparse LU factorization with static pivoting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grigori, Laura; Li, Xiaoye S.
2002-08-20
In this paper we present a static scheduling algorithm for parallel sparse LU factorization with static pivoting. The algorithm is divided into mapping and scheduling phases, using the symmetric pruned graphs of L' and U to represent dependencies. The scheduling algorithm is designed for driving the parallel execution of the factorization on a distributed-memory architecture. Experimental results and comparisons with SuperLU{_}DIST are reported after applying this algorithm on real world application matrices on an IBM SP RS/6000 distributed memory machine.
Linux Kernel Co-Scheduling and Bulk Synchronous Parallelism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Terry R
2012-01-01
This paper describes a kernel scheduling algorithm that is based on coscheduling principles and that is intended for parallel applications running on 1000 cores or more. Experimental results for a Linux implementation on a Cray XT5 machine are presented. The results indicate that Linux is a suitable operating system for this new scheduling scheme, and that this design provides a dramatic improvement in scaling performance for synchronizing collective operations at scale.
Automated Planning and Scheduling for Space Mission Operations
NASA Technical Reports Server (NTRS)
Chien, Steve; Jonsson, Ari; Knight, Russell
2005-01-01
Research trends: a) finite-capacity scheduling under more complex constraints and increased problem dimensionality (subcontracting, overtime, lot splitting, inventory, etc.); b) integrated planning and scheduling; c) mixed-initiative frameworks; d) management of uncertainty (proactive and reactive); e) autonomous agent architectures and distributed production management; f) integration of machine learning capabilities; g) wider scope of applications: 1) analysis of supplier/buyer protocols and tradeoffs; 2) integration of strategic and tactical decision-making; and 3) enterprise integration.
Outsourcing and scheduling for a two-machine flow shop with release times
NASA Astrophysics Data System (ADS)
Ahmadizar, Fardin; Amiri, Zeinab
2018-03-01
This article addresses a two-machine flow shop scheduling problem where jobs are released intermittently and outsourcing is allowed. The first operations of outsourced jobs are processed by the first subcontractor, they are transported in batches to the second subcontractor for processing their second operations, and finally they are transported back to the manufacturer. The objective is to select a subset of jobs to be outsourced, to schedule both the in-house and the outsourced jobs, and to determine a transportation plan for the outsourced jobs so as to minimize the sum of the makespan and the outsourcing and transportation costs. Two mathematical models of the problem and several necessary optimality conditions are presented. A solution approach is then proposed by incorporating the dominance properties with an ant colony algorithm. Finally, computational experiments are conducted to evaluate the performance of the models and solution approach.
Navy Acquisition: Cost, Schedule, and Performance of New Submarine Combat Systems
1990-01-01
(1985). ... In December 1983 the Navy awarded the International Business Machines ... contracts to the General Electric Company and International Business Machines. In December 1987 the Navy selected General Electric as the prime contractor and International Business Machines as the "follower" contractor. On March 31, 1988, the Navy awarded General Electric a $1.84 billion fixed
Batch Scheduling for Hybrid Assembly Differentiation Flow Shop to Minimize Total Actual Flow Time
NASA Astrophysics Data System (ADS)
Maulidya, R.; Suprayogi; Wangsaputra, R.; Halim, A. H.
2018-03-01
A hybrid assembly differentiation flow shop is a three-stage flow shop consisting of machining, assembly, and differentiation stages and producing different types of products. In the machining stage, parts are processed in batches on different (unrelated) machines. In the assembly stage, the different parts are assembled into an assembly product. Finally, the assembled products are further processed into different types of final products in the differentiation stage. In this paper, we develop a batch scheduling model for a hybrid assembly differentiation flow shop to minimize the total actual flow time, defined as the total time parts spend on the shop floor from their arrival until their due dates. We also propose a heuristic algorithm for solving the problem. The proposed algorithm is tested using a set of hypothetical data. The solution shows that the algorithm can solve the problem effectively.
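Reading the objective as the sum over parts of the time spent in the shop from arrival until due date, a minimal sketch follows; this reading of "total actual flow time" and the data are assumptions based on the abstract's definition.

```python
# Minimal sketch of the objective named above, interpreted as the sum of
# (due date - arrival time) over all parts; interpretation and data assumed.
def total_actual_flow_time(parts):
    """parts: list of (arrival_time, due_date) pairs."""
    return sum(due - arrival for arrival, due in parts)

print(total_actual_flow_time([(0, 10), (2, 9), (5, 20)]))  # -> 32
```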
SLURM: Simple Linux Utility for Resource Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jette, M; Grondona, M
2002-12-19
Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, scheduling and stream copy modules. This paper presents an overview of the SLURM architecture and functionality.
SLURM: Simple Linux Utility for Resource Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jette, M; Grondona, M
2003-04-22
Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, scheduling, and stream copy modules. This paper presents an overview of the SLURM architecture and functionality.
49 CFR 214.531 - Schedule of repairs; general.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Hi-Rail Vehicles § 214.531 Schedule of repairs; general. Except as provided in §§ 214.527(c)(5), 214.529, and 214.533, an on-track roadway maintenance machine or hi-rail vehicle that does not meet all... or hi-rail vehicle shall be placed out of on-track service. ...
Code of Federal Regulations, 2014 CFR
2014-07-01
... Conditioning/Heat Pump Equipment Domestic and commercial air conditioning and refrigeration equipment fall... cooling/heat cycle. 8415.82.00 Other, incorporating a refrigerating unit— Self-contained machines and... refrigerating or freezing equipment, electric or other; heat pumps, other than air conditioning machines of...
Code of Federal Regulations, 2011 CFR
2011-07-01
... Conditioning/Heat Pump Equipment Domestic and commercial air conditioning and refrigeration equipment fall... cooling/heat cycle. 8415.82.00 Other, incorporating a refrigerating unit— Self-contained machines and... refrigerating or freezing equipment, electric or other; heat pumps, other than air conditioning machines of...
Code of Federal Regulations, 2010 CFR
2010-07-01
... Conditioning/Heat Pump Equipment Domestic and commercial air conditioning and refrigeration equipment fall... cooling/heat cycle. 8415.82.00 Other, incorporating a refrigerating unit— Self-contained machines and... refrigerating or freezing equipment, electric or other; heat pumps, other than air conditioning machines of...
Code of Federal Regulations, 2013 CFR
2013-07-01
... Conditioning/Heat Pump Equipment Domestic and commercial air conditioning and refrigeration equipment fall... cooling/heat cycle. 8415.82.00 Other, incorporating a refrigerating unit— Self-contained machines and... refrigerating or freezing equipment, electric or other; heat pumps, other than air conditioning machines of...
Code of Federal Regulations, 2012 CFR
2012-07-01
... Conditioning/Heat Pump Equipment Domestic and commercial air conditioning and refrigeration equipment fall... cooling/heat cycle. 8415.82.00 Other, incorporating a refrigerating unit— Self-contained machines and... refrigerating or freezing equipment, electric or other; heat pumps, other than air conditioning machines of...
Design for multipurpose use: an application of DfE concept in a developing economy
NASA Astrophysics Data System (ADS)
Dunmade, Israel
2004-12-01
Design for Environment (DfE) has been defined as the systematic integration of environmental considerations into product and process design. Material and space can be saved when several functions are integrated into a single product by taking advantage of common components. In this design and development project, a multipurpose thresher was designed based on an integrated concept of design for modularity, disassembly, demanufacturing, and remanufacturing. The machine can be used to thresh various types of farm produce, such as rice, sorghum, cowpea, and rye, by changing the concave and the cylinder (threshing drum). The configuration of the machine enables access to most of the component parts without changing the tools needed for disassembly, because the same type of fastener is used throughout. Furthermore, the functional units (the shelling unit, the separation unit, and the grading unit) were assembled into modules such that only a faulty part needs to be replaced if necessary. The design is simple enough that the operator can make the changes for different uses without difficulty. The machine has been successfully tested with a number of these products and is scheduled for tests with other produce such as corn and peanuts. The modularity of the functional units will facilitate multi-lifecycle use of the machine and/or its component parts. The uniformity of the liaisons and the simplification of the configuration will reduce both disassembly times and maintenance cost. By this integration, the material requirements of four different machines are conserved, the environmental emissions that would be associated with the manufacture, transportation, and disposal of four machines are eliminated, and the capital requirements of farmers for machinery are reduced to about a quarter. Consequently the total lifecycle cost is kept to a minimum while the eco-efficiency is maximized.
Solving the flexible job shop problem by hybrid metaheuristics-based multiagent model
NASA Astrophysics Data System (ADS)
Nouri, Houssem Eddine; Belkahla Driss, Olfa; Ghédira, Khaled
2018-03-01
The flexible job shop scheduling problem (FJSP) is a generalization of the classical job shop scheduling problem that allows an operation to be processed on one machine out of a set of alternative machines. The FJSP is an NP-hard problem consisting of two sub-problems: the assignment problem and the scheduling problem. In this paper, we propose solving the FJSP with a hybrid metaheuristics-based clustered holonic multi-agent model. First, a neighborhood-based genetic algorithm (NGA) is applied by a scheduler agent for a global exploration of the search space. Second, a local search technique is used by a set of cluster agents to guide the search into promising regions of the search space and to improve the quality of the final NGA population. The efficiency of our approach is explained by the flexible selection of the promising parts of the search space by the clustering operator after the genetic algorithm process, and by applying the intensification technique of tabu search, which restarts the search from a set of elite solutions to attain new dominant scheduling solutions. Computational results are presented using four sets of well-known benchmark instances from the literature. New upper bounds are found, showing the effectiveness of the presented approach.
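To illustrate the two sub-problems named above, here is a greedy sketch that assigns each operation to the alternative machine that can finish it first and schedules operations in route order; this earliest-finish rule is a stand-in for illustration, not the paper's NGA/tabu hybrid.

```python
# Hedged sketch of the FJSP's two sub-problems: assignment (pick a machine
# among alternatives) and scheduling (fix start times). Greedy rule assumed.
def greedy_fjsp(jobs):
    """jobs: list of routes; each operation maps machine -> processing time."""
    machine_ready, schedule = {}, []
    for j, route in enumerate(jobs):
        job_ready = 0
        for op in route:
            # Pick the alternative machine that can finish this operation first.
            m = min(op, key=lambda mm: max(machine_ready.get(mm, 0), job_ready) + op[mm])
            start = max(machine_ready.get(m, 0), job_ready)
            finish = start + op[m]
            schedule.append((j, m, start, finish))
            machine_ready[m], job_ready = finish, finish
    return schedule

# Toy usage: 2 jobs, alternative machines per operation (invented data).
jobs = [[{"M1": 3, "M2": 5}, {"M2": 2}],
        [{"M1": 2, "M3": 2}, {"M1": 4, "M3": 3}]]
for j, m, start, finish in greedy_fjsp(jobs):
    print(f"job {j} on {m}: start {start}, finish {finish}")
```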
Design of a Modular E-Core Flux Concentrating Axial Flux Machine: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Husain, Tausif; Sozer, Yilmaz; Husain, Iqbal
2015-08-24
In this paper a novel E-Core axial flux machine is proposed. The machine has a double-stator, single-rotor configuration with flux-concentrating ferrite magnets and pole windings across each leg of an E-Core stator. E-Core stators with the proposed flux-concentrating rotor arrangement result in better magnet utilization and higher torque density. The machine also has a modular structure facilitating simpler construction. This paper presents a single-phase and a three-phase version of the E-Core machine. Case studies for a 1.1-kW, 400-rpm machine for both the single-phase and three-phase axial flux machines are presented. The results are verified through 3D finite element analysis.
The MICRO-BOSS scheduling system: Current status and future efforts
NASA Technical Reports Server (NTRS)
Sadeh, Norman M.
1992-01-01
In this paper, a micro-opportunistic approach to factory scheduling was described that closely monitors the evolution of bottlenecks during the construction of the schedule and continuously redirects search towards the bottleneck that appears to be most critical. This approach differs from earlier opportunistic approaches, as it does not require scheduling large resource subproblems or large job subproblems before revising the current scheduling strategy. This micro-opportunistic approach was implemented in the context of the MICRO-BOSS factory scheduling system. A study comparing MICRO-BOSS against a macro-opportunistic scheduler suggests that the additional flexibility of the micro-opportunistic approach to scheduling generally yields important reductions in both tardiness and inventory. Current research efforts include: adaptation of MICRO-BOSS to deal with sequence-dependent setups and development of micro-opportunistic reactive scheduling techniques that will enable the system to patch the schedule in the presence of contingencies such as machine breakdowns, raw materials arriving late, job cancellations, etc.
Machine Learning Based Online Performance Prediction for Runtime Parallelization and Task Scheduling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, J; Ma, X; Singh, K
2008-10-09
With the emerging many-core paradigm, parallel programming must extend beyond its traditional realm of scientific applications. Converting existing sequential applications as well as developing next-generation software requires assistance from hardware, compilers and runtime systems to exploit parallelism transparently within applications. These systems must decompose applications into tasks that can be executed in parallel and then schedule those tasks to minimize load imbalance. However, many systems lack a priori knowledge about the execution time of all tasks to perform effective load balancing with low scheduling overhead. In this paper, we approach this fundamental problem using machine learning techniques first to generate performance models for all tasks and then applying those models to perform automatic performance prediction across program executions. We also extend an existing scheduling algorithm to use generated task cost estimates for online task partitioning and scheduling. We implement the above techniques in the pR framework, which transparently parallelizes scripts in the popular R language, and evaluate their performance and overhead with both a real-world application and a large number of synthetic representative test scripts. Our experimental results show that our proposed approach significantly improves task partitioning and scheduling, with maximum improvements of 21.8%, 40.3% and 22.1% and average improvements of 15.9%, 16.9% and 4.2% for LMM (a real R application) and synthetic test cases with independent and dependent tasks, respectively.
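A minimal sketch of the pipeline described: fit a per-task cost model from past (size, time) observations, then use its predictions for longest-processing-time (LPT) load balancing. The linear feature, the least-squares fit, and the LPT rule are assumptions standing in for the paper's models and extended scheduler.

```python
# Hedged sketch: learn task cost estimates, then partition tasks across
# workers by predicted cost. Feature choice and LPT rule are assumptions.
import heapq

def fit_linear(samples):
    """Fit time ~ a*size + b by ordinary least squares on (size, time) pairs."""
    n = len(samples)
    sx = sum(s for s, _ in samples); st = sum(t for _, t in samples)
    sxx = sum(s * s for s, _ in samples); sxt = sum(s * t for s, t in samples)
    a = (n * sxt - sx * st) / (n * sxx - sx * sx)
    return a, (st - a * sx) / n

def lpt_schedule(sizes, model, workers):
    """Assign tasks, longest predicted first, to the least-loaded worker."""
    a, b = model
    heap = [(0.0, w) for w in range(workers)]       # (predicted load, worker)
    plan = {w: [] for w in range(workers)}
    for i in sorted(range(len(sizes)), key=lambda i: -(a * sizes[i] + b)):
        load, w = heapq.heappop(heap)
        plan[w].append(i)
        heapq.heappush(heap, (load + a * sizes[i] + b, w))
    return plan

model = fit_linear([(10, 1.1), (20, 2.0), (40, 4.2), (80, 8.1)])  # invented
print(lpt_schedule([10, 80, 40, 20, 40], model, workers=2))
```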
SLURM: Simple Linux Utility for Resource Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jette, M; Dunlap, C; Garlick, J
2002-07-08
Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, scheduling and stream copy modules. The design also includes a scalable, general-purpose communication infrastructure. This paper presents an overview of the SLURM architecture and functionality.
Linux OS Jitter Measurements at Large Node Counts using a BlueGene/L
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Terry R; Tauferner, Mr. Andrew; Inglett, Mr. Todd
2010-01-01
We present experimental results for a coordinated scheduling implementation of the Linux operating system. Results were collected on an IBM Blue Gene/L machine at scales up to 16K nodes. Our results indicate coordinated scheduling was able to provide a dramatic improvement in scaling performance for two applications characterized as bulk synchronous parallel programs.
Real-time Scheduling for GPUs with Applications in Advanced Automotive Systems
2015-01-01
3.7 Architecture of GPU tasklet scheduling infrastructure ... throughput. This disparity is even greater when we consider mobile CPUs, such as those designed by ARM. For instance, the ARM Cortex-A15 series processor as ... stub library that replaces the GPGPU runtime within each virtual machine. The stub library communicates API calls to a GPGPU backend user-space daemon
Autonomous planning and scheduling on the TechSat 21 mission
NASA Technical Reports Server (NTRS)
Sherwood, R.; Chien, S.; Castano, R.; Rabideau, G.
2002-01-01
The Autonomous Sciencecraft Experiment (ASE) will fly onboard the Air Force TechSat 21 constellation of three spacecraft scheduled for launch in 2006. ASE uses onboard continuous planning, robust task and goal-based execution, model-based mode identification and reconfiguration, and onboard machine learning and pattern recognition to radically increase science return by enabling intelligent downlink selection and autonomous retargeting.
Longest jobs first algorithm in solving job shop scheduling using adaptive genetic algorithm (GA)
NASA Astrophysics Data System (ADS)
Alizadeh Sahzabi, Vahid; Karimi, Iman; Alizadeh Sahzabi, Navid; Mamaani Barnaghi, Peiman
2012-01-01
In this paper, a genetic algorithm is used to solve job shop scheduling problems. An example job shop scheduling problem (JSSP) is discussed, and we describe how such problems can be solved by a genetic algorithm. The goal in the JSSP is to achieve the shortest processing time. Furthermore, we propose a method to obtain the best performance in completing all jobs in the shortest time. The method is based on a genetic algorithm (GA), and crossover between parents always follows the rule that the longest process comes first in the job queue. In other words, chromosomes are sorted from the longest process to the shortest, i.e., "longest job first": first, find the machine that accumulates the most processing time while performing all its jobs, which is the bottleneck; second, sort the jobs belonging to that specific machine in descending order. Based on the achieved results, "longest jobs first" is the optimal policy in job shop scheduling problems. In our results, the accuracy grows up to 94.7% for total processing time, and the method improved the accuracy of performing all jobs by 4% in the presented example.
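A minimal sketch of the "longest job first" ordering as described: locate the bottleneck machine by total processing time, then sort its jobs in descending order; the data layout below is an invented assumption.

```python
# Sketch of the ordering rule described above: find the bottleneck machine
# (largest total processing time), then sort its jobs longest-first.
def longest_jobs_first(proc):
    """proc[machine] = {job: processing_time}; returns (bottleneck, job order)."""
    bottleneck = max(proc, key=lambda m: sum(proc[m].values()))
    order = sorted(proc[bottleneck], key=proc[bottleneck].get, reverse=True)
    return bottleneck, order

proc = {"M1": {"J1": 4, "J2": 9}, "M2": {"J1": 7, "J3": 5, "J2": 6}}
print(longest_jobs_first(proc))  # M2 is the bottleneck; J1 is queued first
```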
Scheduling algorithms for automatic control systems for technological processes
NASA Astrophysics Data System (ADS)
Chernigovskiy, A. S.; Tsarev, R. Yu; Kapulin, D. V.
2017-01-01
Wide use of automatic process control systems and of high-performance systems containing a number of computers (processors) gives opportunities for the creation of high-quality and fast production that increases the competitiveness of an enterprise. Exact and fast calculations, control computation, and the processing of big data arrays all require a high level of productivity and, at the same time, minimum times for data handling and result delivery. In order to achieve the best time, it is necessary not only to use computing resources optimally, but also to design and develop the software so that the time gain is maximal. For this purpose, task (job or operation) scheduling techniques for multi-machine/multiprocessor systems are applied. Some basic task scheduling methods for multi-machine process control systems are considered in this paper, their advantages and disadvantages are brought to light, and some considerations on their use in developing software for automatic process control systems are given.
NASA Astrophysics Data System (ADS)
Koten, V. K.; Tanamal, C. E.
2017-03-01
Agricultural products are still processed separately by farmers and by people involved in medium, small, and household industries. Although the power of the prime mover is sufficient, in operation the prime mover is typically used to drive only one of several agricultural product machines. This study attempts to design and construct a multi-output power transmission with a single prime mover: a single construction that allows the prime mover to drive several agricultural product machines, simultaneously or not. The study begins with the determination of the production capacity and the power needed to process the products, the determination and normalization of the power and rotation requirements, the selection of the materials used, the sizing of each machine element, the construction of the machine elements, and their assembly into a multi-output power transmission with a single prime mover for agricultural product machines. The results show that with a normalized input of 4 PK (2984 W), a rotation speed of 2000 rpm, a material strength of 60 kg/mm2, and several operating considerations, the sizes of the machine elements are obtained through calculation. Based on these sizes, the machine elements were made with machine tools and assembled to form the multi-output power transmission with a single prime mover.
Job Shop Scheduling Focusing on Role of Buffer
NASA Astrophysics Data System (ADS)
Hino, Rei; Kusumi, Tetsuya; Yoo, Jae-Kyu; Shimizu, Yoshiaki
A scheduling problem is formulated in order to consistently manage each manufacturing resource, including machine tools, assembly robots, AGVs, storehouses, material shelves, and so on. The manufacturing resources are classified into three types: producer, location, and mover. This paper focuses especially on the role of the buffer, and the differences among these types are analyzed. A unified scheduling formulation is derived from the analytical results based on the resources' roles. Scheduling procedures based on dispatching rules are also proposed in order to numerically evaluate job-shop-type production having finite buffer capacity. The influence of the capacity of bottlenecked production devices and of the buffer on productivity is discussed.
Application of a hybrid generation/utility assessment heuristic to a class of scheduling problems
NASA Technical Reports Server (NTRS)
Heyward, Ann O.
1989-01-01
A two-stage heuristic solution approach for a class of multiobjective, n-job, 1-machine scheduling problems is described. Minimization of job-to-job interference for n jobs is sought. The first stage generates alternative schedule sequences by interchanging pairs of schedule elements. The set of alternative sequences can represent nodes of a decision tree; each node is reached via a decision to interchange job elements. The second stage selects the parent node for the next generation of alternative sequences through automated paired comparison of objective performance for all current nodes. An application of the heuristic approach to communications satellite systems planning is presented.
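The first stage described above, generating alternative sequences by interchanging pairs of schedule elements, can be sketched as follows; each generated child would then be compared on objective performance in the second stage. The data are illustrative.

```python
# Sketch of the first stage: every pairwise interchange of schedule elements
# yields one child node of the decision tree. Selection among children
# (stage two) would compare objective performance; it is not shown here.
from itertools import combinations

def interchange_children(sequence):
    """All sequences reachable from `sequence` by one pairwise interchange."""
    children = []
    for i, j in combinations(range(len(sequence)), 2):
        child = list(sequence)
        child[i], child[j] = child[j], child[i]
        children.append(child)
    return children

for child in interchange_children(["A", "B", "C"]):
    print(child)   # three children, one per element pair
```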
Pavor nocturnus: a complication of single daily tricyclic or neuroleptic dosage.
Flemenbaum, A
1976-05-01
The author tested the hypothesis that a single bedtime dosage schedule of tricyclic or neuroleptic medication produces increased frequency of night terrors by administering a questionnaire to 30 medical patients who were not receiving such medications and 100 psychiatric patients on either multiple- or single-dosage schedules. Psychiatric patients on multiple-dosage schedules reported no more frightening dreams than the medical patients, whereas almost three-fourths of those receiving single bedtime doses had frightening dreams, a significant difference from the medical sample. This preliminary report is presented to call attention to the possible undesirable effects of a single dose schedule.
A user interface for a knowledge-based planning and scheduling system
NASA Technical Reports Server (NTRS)
Mulvehill, Alice M.
1988-01-01
The objective of EMPRESS (Expert Mission Planning and Replanning Scheduling System) is to support the planning and scheduling required to prepare science and application payloads for flight aboard the US Space Shuttle. EMPRESS was designed and implemented in Zetalisp on a 3600 series Symbolics Lisp machine. Initially, EMPRESS was built as a concept demonstration system. The system has since been modified and expanded to ensure that the data have integrity. Issues underlying the design and development of the EMPRESS-I interface, results from a system usability assessment, and consequent modifications are described.
The effect of embedded bonus rounds on slot machine preference.
Belisle, Jordan; Owens, Kelti; Dixon, Mark R; Malkin, Albert; Jordan, Sam D
2017-04-01
Twenty-three university students completed a simulated slot machine task involving the concurrent presentation of two slot machines that varied both in win density and in the inclusion of a bonus round feature, to evaluate the effect of embedded bonus rounds on participant response allocation. The results suggest that participants allocated a greater percentage of responses to machines with embedded bonus rounds across both dense (Bonus: M = 68.4, SD = 19.2; No Bonus: M = 51.2, SD = 9.6) and lean (Bonus: M = 48.8, SD = 9.6; No Bonus: M = 31.6, SD = 19.2) reinforcement schedules, in which the overall reinforcement rate across all machines was held constant. © 2016 Society for the Experimental Analysis of Behavior.
Software For Integer Programming
NASA Technical Reports Server (NTRS)
Fogle, F. R.
1992-01-01
The Improved Exploratory Search Technique for Pure Integer Linear Programming Problems (IESIP) program optimizes an objective function of variables subject to confining functions or constraints, using discrete optimization or integer programming. It enables rapid solution of problems up to 10 variables in size. Integer programming is required for accuracy in modeling systems containing a small number of components, in the distribution of goods, in scheduling operations on machine tools, and in scheduling production in general. Written in Borland's TURBO Pascal.
Job Scheduling with Efficient Resource Monitoring in Cloud Datacenter
Loganathan, Shyamala; Mukherjee, Saswati
2015-01-01
Cloud computing is an on-demand computing model which uses virtualization technology to provide cloud resources to users in the form of virtual machines through the internet. Being an adaptable technology, cloud computing is an excellent alternative for organizations forming their own private clouds. Since the resources are limited in these private clouds, maximizing the utilization of resources and giving guaranteed service to the user are the ultimate goals. For that, efficient scheduling is needed. This research reports an efficient data structure for resource management and a resource scheduling technique in a private cloud environment and discusses a cloud model. The proposed scheduling algorithm considers the types of jobs and the resource availability in its scheduling decision. Finally, we conducted simulations using CloudSim and compared our algorithm with other existing methods, like V-MCT and priority scheduling algorithms. PMID:26473166
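A hedged sketch of a scheduling decision that considers job requirements and current resource availability, as the abstract describes; the first-fit rule, the resource fields, and the host data are illustrative assumptions, not the proposed algorithm.

```python
# Sketch: place a job on a host by checking its requirements against current
# availability. Rule, fields, and data are invented stand-ins for the paper's
# data structure and algorithm.
def schedule_job(job, hosts):
    """job: {'type', 'cpu', 'mem'}; hosts: name -> dict of free resources."""
    # Prefer hosts with more headroom, then take the first that fits.
    for name, free in sorted(hosts.items(), key=lambda h: -min(h[1].values())):
        if free["cpu"] >= job["cpu"] and free["mem"] >= job["mem"]:
            free["cpu"] -= job["cpu"]; free["mem"] -= job["mem"]
            return name
    return None  # no host can run the job now; it would be queued

hosts = {"h1": {"cpu": 4, "mem": 8}, "h2": {"cpu": 8, "mem": 16}}
print(schedule_job({"type": "batch", "cpu": 6, "mem": 8}, hosts))  # -> h2
```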
NASA Technical Reports Server (NTRS)
Borse, John E.; Owens, Christopher C.
1992-01-01
Our research focuses on the problem of recovering from perturbations in large-scale schedules, specifically on the ability of a human-machine partnership to dynamically modify an airline schedule in response to unanticipated disruptions. This task is characterized by massive interdependencies and a large space of possible actions. Our approach is to apply the following: qualitative, knowledge-intensive techniques relying on a memory of stereotypical failures and appropriate recoveries; and quantitative techniques drawn from the Operations Research community's work on scheduling. Our main scientific challenge is to represent schedules, failures, and repairs so as to make both sets of techniques applicable to the same data. This paper outlines ongoing research in which we are cooperating with United Airlines to develop our understanding of the scientific issues underlying the practicalities of dynamic, real-time schedule repair.
Production scheduling with discrete and renewable additional resources
NASA Astrophysics Data System (ADS)
Kalinowski, K.; Grabowik, C.; Paprocka, I.; Kempa, W.
2015-11-01
In this paper an approach to planning additional resources when scheduling operations is discussed. The considered resources are assumed to be discrete and renewable. In most scheduling research, the basic and often only type of resource regarded is the workstation, understood as a machine, a device, or even a separated space on the shop floor. In many cases, during the detailed scheduling of operations, more than one resource may be required for an operation's implementation. Resource requirements for an operation may relate to different resources or to resources of the same type. Additional resources most often refer to human resources, tools, or equipment whose limited availability in the manufacturing system may influence the execution dates of some operations. The paper presents the concept of the division into basic and additional resources and a method for planning them. A situation in which the sets of basic and additional resources are not separable, where the same additional resource may be a basic resource for another operation, is also considered. Scheduling operations that involve a greater number of resources can cause many difficulties, depending on whether a resource is involved for the entire operation time, only in selected parts of the operation (e.g., as auxiliary staff at setup time), or cyclically, e.g., when an operator supports more than one machine or supervises the execution of several operations. For this reason the dates and working times of the resources participating in an operation can differ. The presented issues are crucial when modelling the production scheduling environment and designing structures for scheduling software development.
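A minimal sketch of the feasibility question this raises: an operation may need its workstation for the whole duration but an additional resource (e.g., an operator) only during setup. The interval bookkeeping below is an illustrative assumption, not the paper's method.

```python
# Sketch: check whether an operation can start at time t given that its
# workstation is needed for the full duration and an additional resource
# (operator) only during setup. Data layout and rule are assumptions.
def conflicts(busy, start, end):
    """True if [start, end) overlaps any (s, e) interval already booked."""
    return any(s < end and start < e for s, e in busy)

def can_start(op, t, busy):
    """op: {'dur': d, 'setup': s}; workstation busy for d, operator for s."""
    return (not conflicts(busy["workstation"], t, t + op["dur"])
            and not conflicts(busy["operator"], t, t + op["setup"]))

busy = {"workstation": [(0, 5)], "operator": [(4, 6)]}
op = {"dur": 3, "setup": 1}
print(can_start(op, 5, busy))   # False: operator still busy during setup
print(can_start(op, 6, busy))   # True
```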
Single molecule detection, thermal fluctuation and life
YANAGIDA, Toshio; ISHII, Yoshiharu
2017-01-01
Single molecule detection has contributed to our understanding of the unique mechanisms of life. Unlike artificial man-made machines, biological molecular machines integrate thermal noise rather than avoid it. For example, single molecule detection has demonstrated that myosin motors undergo biased Brownian motion for stepwise movement and that single protein molecules spontaneously change their conformation, for switching interactions with other proteins, in response to thermal fluctuation. Thus, molecular machines have a flexibility and efficiency not seen in artificial machines. PMID:28190869
ERIC Educational Resources Information Center
GLOVER, J.H.
The chief objective of this study of speed-skill acquisition was to find a mathematical model capable of simple graphic interpretation for industrial training and production scheduling at the shop-floor level. Studies of middle skill development in machine and vehicle assembly, aircraft production, spoolmaking, and the machining of parts confirmed…
New Single Piece Blast Hardware design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ulrich, Andri; Steinzig, Michael Louis; Aragon, Daniel Adrian
W, Q and PF engineers and machinists designed and fabricated, on the new Mazak i300, the first Single Piece Blast Hardware (unclassified design shown), reducing fabrication and inspection time by over 50%. The first DU single piece is completed and will be used for Hydro Test 3680. Past hydro tests used a two-piece assembly due to a lack of equipment capable of machining the complex saddle shape in a single piece. The i300 provides turning and milling 5-axis machining on one machine. The milling head on the i300 can machine past 90° relative to the spindle axis. This makes it possible to machine the complex saddle surface on a single piece. Going to a single piece eliminates tolerance problems, such as tilting and eccentricity, that typically occurred when assembling the two pieces together.
NASA Astrophysics Data System (ADS)
Berzano, D.; Blomer, J.; Buncic, P.; Charalampidis, I.; Ganis, G.; Meusel, R.
2015-12-01
During the last years, several Grid computing centres chose virtualization as a better way to manage diverse use cases with self-consistent environments on the same bare infrastructure. The maturity of control interfaces (such as OpenNebula and OpenStack) opened the possibility to easily change the amount of resources assigned to each use case by simply turning virtual machines on and off. Some of those private clouds use, in production, copies of the Virtual Analysis Facility, a fully virtualized and self-contained batch analysis cluster capable of expanding and shrinking automatically upon need; however, resource starvation occurs frequently, as expansion has to compete with other virtual machines running long-living batch jobs. Such batch nodes cannot relinquish their resources in a timely fashion: the more jobs they run, the longer it takes to drain and shut them off, and making one-job virtual machines introduces a non-negligible virtualization overhead. By improving several components of the Virtual Analysis Facility we have realized an experimental "Docked" Analysis Facility for ALICE, which leverages containers instead of virtual machines to provide performance and security isolation. We will present the techniques we have used to address practical problems, such as software provisioning through CVMFS, as well as our considerations on the maturity of containers for High Performance Computing. As the abstraction layer is thinner, our Docked Analysis Facilities may feature more fine-grained sizing, down to single-job node containers; we will show how this approach positively impacts automatic cluster resizing by deploying lightweight pilot containers in place of central queue polls.
Design of a Modular E-Core Flux Concentrating Axial Flux Machine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Husain, Tausif; Sozer, Yilmaz; Husain, Iqbal
2015-09-02
In this paper, a novel E-Core axial flux machine is proposed. The machine has a double-stator, single-rotor configuration with flux-concentrating ferrite magnets and pole windings across each leg of an E-Core stator. E-Core stators with the proposed flux-concentrating rotor arrangement result in better magnet utilization and higher torque density. The machine also has a modular structure facilitating simpler construction. This paper presents a single-phase and a three-phase version of the E-Core machine. A case study of a 1.1 kW, 400 rpm machine is presented for both the single-phase and three-phase axial flux machines. The results are verified through 3D finite element analysis.
SU-F-T-226: QA Management for a Large Institution with Multiple Campuses for FMEA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, G; Chan, M; Lovelock, D
2016-06-15
Purpose: To redesign our radiation therapy QA program with the goal of improving quality, efficiency, and consistency among a growing number of campuses at a large institution. Methods: A QA committee was established with at least one physicist representing each of our six campuses (22 linacs). Weekly meetings were scheduled to advise on and update current procedures, to review end-to-end and other test results, and to prepare composite reports for internal and external audits. QA procedures for treatment and imaging equipment were derived from TG Reports 142 and 66, practice guidelines, and feedback from ACR evaluations. The committee focused on reaching a consensus on a single QA program among all campuses using the same type of equipment and reference data. Since the recommendations for tolerances referenced to baseline data were subject to interpretation in some instances, the committee reviewed the characteristics of all machines and quantified any variations before choosing between treatment planning system data (i.e., treatment planning system commissioning data that is representative of all machines) or machine-specific values (i.e., commissioning data of the individual machines) as baseline data. Results: The configured QA program will be followed strictly by all campuses. An inventory of available equipment has been compiled, and additional equipment acquisitions for the QA program are made as needed. Dosimetric characteristics are evaluated for all machines using the same methods to ensure consistency of beam data where possible. In most cases, baseline data refer to treatment planning system commissioning data, but machine-specific values are used as reference where deemed appropriate. Conclusion: With a uniform QA scheme, variations in QA procedures are kept to a minimum. With a centralized database, data collection and analysis are simplified. This program will facilitate uniformity in patient treatments and the analysis of large amounts of QA data campus-wide, which will ultimately facilitate FMEA.
ERIC Educational Resources Information Center
Kennedy, Mike
2003-01-01
Describes how facilities-management systems use technology to help schools and universities operate their buildings more efficiently, reduce energy consumption, manage inventory more accurately, keep track of supplies and maintenance schedules, and save money. (EV)
LHC Status and Upgrade Challenges
NASA Astrophysics Data System (ADS)
Smith, Jeffrey
2009-11-01
The Large Hadron Collider has had a trying start-up, and a challenging operational future lies ahead. Critical to the machine's performance is controlling a beam of particles whose stored energy is equivalent to 80 kg of TNT. Unavoidable beam losses result in energy deposition throughout the machine, and without adequate protection this power would result in quenching of the superconducting magnets. A brief overview of the machine layout and principles of operation will be reviewed, including a summary of the September 2008 accident. The current status of the LHC, the startup schedule, and upgrade options to achieve the target luminosity will be presented.
Sensibility study in a flexible job shop scheduling problem
NASA Astrophysics Data System (ADS)
Curralo, Ana; Pereira, Ana I.; Barbosa, José; Leitão, Paulo
2013-10-01
This paper assesses the impact of job order on the optimal operation time in a Flexible Job Shop Scheduling Problem. A real assembly cell was studied: the AIP-PRIMECA cell at the Université de Valenciennes et du Hainaut-Cambrésis, in France, which is treated as a Flexible Job Shop problem. The problem consists in finding the schedule of machine operations, taking into account the precedence constraints. The main objective is to minimize the batch makespan, i.e. the finish time of the last operation completed in the schedule. In short, the present study evaluates whether the job order affects the optimal time of the operation schedule. A genetic algorithm was used to solve the optimization problem. As a conclusion, the job order is found to influence the optimal time.
Huang, Jen-Ching; Weng, Yung-Jin
2014-01-01
This study focused on the nanomachining properties and cutting model of single-crystal sapphire during nanomachining. A coated diamond probe is used as the tool, and an atomic force microscope (AFM) serves as the experimental platform for nanomachining. To understand the effect of normal force on single-crystal sapphire machining, this study tested nano-line machining and nano-rectangular pattern machining at different normal forces. In the nano-line machining test, the experimental results showed that as the normal force increased, the groove depth from nano-line machining also increased, with a logarithmic trend. In the nano-rectangular pattern machining test, it was found that as the normal force increases, the groove depth also increases, along with the accumulation of small chips. This paper combined blowing with an air blower, cleaning with an ultrasonic cleaning machine, and scanning the surface topology with a contact-mode probe after nanomachining, and proposed a "criterion of nanomachining cutting model" to determine whether the cutting mode of single-crystal sapphire in nanomachining is a ductile-regime or brittle-regime cutting model. After analysis, when the single-crystal sapphire substrate is processed at small normal forces during nano-line machining, its cutting mode is the ductile-regime cutting model. In nano-rectangular pattern machining, due to the overlap of machined zones, the cutting mode is converted into a brittle-regime cutting model. © 2014 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Jaap, John; Muery, Kim
2000-01-01
Scheduling engines are found at the core of software systems that plan and schedule activities and resources. A Request-Oriented Scheduling Engine (ROSE) is one that processes a single request (adding a task to a timeline) and then waits for another request. For the International Space Station, a robust ROSE-based system would support multiple, simultaneous users, each formulating requests (defining scheduling requirements), submitting these requests via the internet to a single scheduling engine operating on a single timeline, and immediately viewing the resulting timeline. ROSE is significantly different from the engine currently used to schedule Space Station operations. The current engine supports essentially one person at a time, with a pre-defined set of requirements from many payloads, working in either a "batch" scheduling mode or an interactive/manual scheduling mode. A planning and scheduling process that takes advantage of the features of ROSE could produce greater customer satisfaction at reduced cost and reduced flow time. This paper describes a possible ROSE-based scheduling process and identifies the additional software component required to support it. Resulting changes to the management and control of the process are also discussed.
A meta-heuristic method for solving scheduling problem: crow search algorithm
NASA Astrophysics Data System (ADS)
Adhi, Antono; Santosa, Budi; Siswanto, Nurhadi
2018-04-01
Scheduling is one of the most important processes in industry, in both manufacturing and services. Scheduling is the process of assigning resources to perform operations on tasks; resources can be machines, people, tasks, jobs, or operations. Selecting the optimal sequence of jobs from a set of permutations is an essential issue in scheduling research, since the optimal sequence is the optimal solution to the scheduling problem. The scheduling problem becomes NP-hard once the number of jobs in the sequence exceeds what exact algorithms can process. To obtain optimal results, a method capable of solving complex scheduling problems in an acceptable time is needed. Meta-heuristics are methods usually used to solve scheduling problems. The recently published Crow Search Algorithm (CSA) is adopted in this research to solve a scheduling problem. CSA is an evolutionary meta-heuristic based on the behavior of crow flocks. The results of CSA on the scheduling problem are compared with other algorithms; the comparison shows that CSA performs better in terms of solution quality and computation time.
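To make the flock metaphor concrete, here is a minimal sketch of the CSA update (each crow either follows another crow's memorized position or, when that crow is "aware", moves randomly), applied to a toy scheduling objective through random-key decoding. The two-machine makespan objective, the decoding scheme, and all parameter values are illustrative assumptions, not details from this paper.

```python
# Minimal CSA sketch for a permutation scheduling problem via random keys.
# Problem data, parameters, and the makespan objective are assumed for
# illustration only.
import numpy as np

rng = np.random.default_rng(0)

proc = np.array([4, 7, 2, 5, 6, 3])        # hypothetical job processing times

def makespan(keys):
    # Decode random keys into a job sequence, then compute a toy objective:
    # makespan on two identical parallel machines, assigning each job
    # greedily to the machine that frees up first.
    order = np.argsort(keys)
    loads = [0.0, 0.0]
    for j in order:
        i = loads.index(min(loads))
        loads[i] += proc[j]
    return max(loads)

n_crows, n_iter, fl, ap = 20, 200, 2.0, 0.1   # flight length, awareness prob.
pos = rng.random((n_crows, len(proc)))        # crow positions (random keys)
mem = pos.copy()                              # each crow's best-known position
mem_fit = np.array([makespan(x) for x in mem])

for _ in range(n_iter):
    for i in range(n_crows):
        j = rng.integers(n_crows)             # crow i follows a random crow j
        if rng.random() > ap:
            # move toward crow j's memorized position
            new = pos[i] + fl * rng.random() * (mem[j] - pos[i])
        else:
            # crow j is "aware" of being followed: crow i moves randomly
            new = rng.random(len(proc))
        new = np.clip(new, 0.0, 1.0)
        pos[i] = new
        f = makespan(new)
        if f < mem_fit[i]:                    # update memory on improvement
            mem[i], mem_fit[i] = new, f

best = mem[np.argmin(mem_fit)]
print("best sequence:", np.argsort(best), "makespan:", mem_fit.min())
```

Random-key decoding is one common way to let a continuous meta-heuristic like CSA search over permutations; the paper may use a different encoding.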
Spike: Artificial intelligence scheduling for Hubble space telescope
NASA Technical Reports Server (NTRS)
Johnston, Mark; Miller, Glenn; Sponsler, Jeff; Vick, Shon; Jackson, Robert
1990-01-01
Efficient utilization of spacecraft resources is essential, but the accompanying scheduling problems are often computationally intractable and are difficult to approximate because of the presence of numerous interacting constraints. Artificial intelligence techniques were applied to the scheduling of the NASA/ESA Hubble Space Telescope (HST). This presents a particularly challenging problem since a year-long observing program can contain some tens of thousands of exposures which are subject to a large number of scientific, operational, spacecraft, and environmental constraints. New techniques were developed for machine reasoning about scheduling constraints and goals, especially in cases where uncertainty is an important scheduling consideration and where resolving conflicts among competing preferences is essential. These techniques were utilized in a set of workstation-based scheduling tools (Spike) for HST. Graphical displays of activities, constraints, and schedules are an important feature of the system. High-level scheduling strategies using both rule-based and neural network approaches were developed. While the specific constraints implemented are those most relevant to HST, the framework developed is far more general and could easily handle other kinds of scheduling problems. The concept and implementation of the Spike system are described along with some experiments in adapting Spike to other spacecraft scheduling domains.
NASA Astrophysics Data System (ADS)
Sui, Yi; Zheng, Ping; Cheng, Luming; Wang, Weinan; Liu, Jiaqi
2017-05-01
A single-phase axially-magnetized permanent-magnet (PM) oscillating machine, which can be integrated with a free-piston Stirling engine to generate electric power, is investigated for miniature aerospace power sources. The machine structure, operating principle, and detent force characteristic are studied in detail. With the sinusoidal speed characteristic of the mover considered, the proposed machine is designed by 2D finite-element analysis (FEA), and some main structural parameters, such as air gap diameter, dimensions of PMs, pole pitches of both stator and mover, and the pole-pitch combinations, are optimized to improve both the power density and force capability. Compared with three-phase PM linear machines, the proposed single-phase machine features less PM use, simple control, and low controller cost. The power density of the proposed machine is higher than that of the three-phase radially-magnetized PM linear machine, but lower than that of the three-phase axially-magnetized PM linear machine.
Zhou, Yongxia; Yu, Fang; Duong, Timothy
2014-01-01
This study employed graph theory and machine learning analysis of multiparametric MRI data to improve characterization and prediction in autism spectrum disorders (ASD). Data from 127 children with ASD (13.5±6.0 years) and 153 age- and gender-matched typically developing children (14.5±5.7 years) were selected from the multi-center Functional Connectome Project. Regional gray matter volume and cortical thickness increased, whereas white matter volume decreased in ASD compared to controls. Small-world network analysis of quantitative MRI data demonstrated decreased global efficiency based on gray matter cortical thickness but not with functional connectivity MRI (fcMRI) or volumetry. An integrative model of 22 quantitative imaging features was used for classification and prediction of phenotypic features that included the autism diagnostic observation schedule, the revised autism diagnostic interview, and intelligence quotient scores. Among the 22 imaging features, four (caudate volume, caudate-cortical functional connectivity and inferior frontal gyrus functional connectivity) were found to be highly informative, markedly improving classification and prediction accuracy when compared with the single imaging features. This approach could potentially serve as a biomarker in prognosis, diagnosis, and monitoring disease progression.
Tuning Parameters in Heuristics by Using Design of Experiments Methods
NASA Technical Reports Server (NTRS)
Arin, Arif; Rabadi, Ghaith; Unal, Resit
2010-01-01
With the growing complexity of today's large-scale problems, it has become more difficult to find optimal solutions using exact mathematical methods. The need to find near-optimal solutions in an acceptable time frame requires heuristic approaches. In many cases, however, heuristics have several parameters that need to be "tuned" before they can reach good results. The problem then turns into "finding the best parameter setting" for the heuristics to solve the problems efficiently and in a timely manner. The One-Factor-At-a-Time (OFAT) approach to parameter tuning neglects the interactions between parameters. Design of Experiments (DOE) tools can instead be employed to tune the parameters more effectively. In this paper, we seek the best parameter setting for a Genetic Algorithm (GA) to solve the single machine total weighted tardiness problem, in which n jobs must be scheduled on a single machine without preemption and the objective is to minimize the total weighted tardiness. Benchmark instances for the problem are available in the literature. To fine-tune the GA parameters in the most efficient way, we compare multiple DOE models including 2-level (2^k) full factorial design, orthogonal array design, central composite design, D-optimal design, and signal-to-noise (S/N) ratios. In each DOE method, a mathematical model is created using regression analysis and solved to obtain the best parameter setting. After verification runs using the tuned parameter setting, the preliminary results for optimal solutions of multiple instances were found efficiently.
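As a hedged illustration of the DOE idea, the sketch below builds a 2-level full factorial design over three assumed GA parameters, measures a response from a stand-in ga_run function (a fake response surface, since the real response would come from GA runs on benchmark instances), and fits a first-order regression model by least squares. The factor names and levels are hypothetical.

```python
# Sketch: tuning GA parameters with a 2-level (2^k) full factorial design.
# Factors, levels, and ga_run() are illustrative assumptions.
import itertools
import numpy as np

levels = {                      # two levels per GA parameter (coded -1/+1)
    "pop_size":  (50, 200),
    "cx_rate":   (0.6, 0.9),
    "mut_rate":  (0.01, 0.1),
}

def ga_run(pop_size, cx_rate, mut_rate):
    # Placeholder for a real GA solving total weighted tardiness; here we
    # fake a noisy response surface just to make the sketch runnable.
    rng = np.random.default_rng(int(pop_size * 1000 * cx_rate * mut_rate))
    return (1 / pop_size + (0.9 - cx_rate) ** 2 + mut_rate) + rng.normal(0, 0.01)

# Build the 2^k design matrix in coded units and measure the response.
coded = list(itertools.product((-1, 1), repeat=len(levels)))
X, y = [], []
for row in coded:
    actual = [lo if c < 0 else hi
              for c, (lo, hi) in zip(row, levels.values())]
    X.append([1, *row])                      # intercept + main effects
    y.append(ga_run(*actual))

# Least-squares fit of the first-order model y = b0 + sum(bi * xi).
beta, *_ = np.linalg.lstsq(np.array(X, float), np.array(y), rcond=None)
for name, b in zip(["intercept", *levels], beta):
    print(f"{name:10s} effect estimate: {b:+.4f}")
```

The fitted coefficients estimate each parameter's main effect; a full study would add interaction columns and replicate runs before picking the setting.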
Ebrahimi, Ahmad; Kia, Reza; Komijan, Alireza Rashidi
2016-01-01
In this article, a novel integrated mixed-integer nonlinear programming model is presented for designing a cellular manufacturing system (CMS), considering machine layout and part scheduling problems simultaneously as interrelated decisions. The integrated CMS model is formulated to incorporate several design features including part due dates, material handling time, operation sequence, processing time, an intra-cell layout of unequal-area facilities, and part scheduling. The objective function is to minimize makespan, tardiness penalties, and the material handling costs of inter-cell and intra-cell movements. Two numerical examples are solved with the Lingo software to illustrate the results obtained by the incorporated features. In order to assess the effects and importance of integrating machine layout and part scheduling in designing a CMS, sequential and concurrent approaches are investigated and the improvement resulting from the concurrent approach is revealed. Also, due to the NP-hardness of the integrated model, an efficient genetic algorithm is designed. Computational results of this study indicate that the best solutions found by the GA are better than those found by B&B, in much less time, for both the sequential and concurrent approaches. Moreover, comparisons between the objective function values (OFVs) obtained by the sequential and concurrent approaches demonstrate that the OFV improvement is on average around 17% with the GA and 14% with B&B.
Improved and Cost Effective Machining Techniques for Tracked Combat Vehicle Parts
1983-10-01
steel is shown in Figure 7-7 and consists of tempered martensite. Three of the alloys which are used in the gas turbine engine are cast 17-4PH stainless steel, Inconel 718, and Inconel 713. The 17-4PH stainless steel was machined in the solution-treated and aged condition. The microstructure as shown…
Scheduling a Medium-Sized Manufacturing Shop: A Simulation Study
1993-09-01
distinction, elements of work order data include: the minimum machine type required for a work order, as well as the programming, set-up, and machining… prevent this from happening. Such a mechanism could take the form of a reprioritization function that is executed after a specified period of time… system for a very long time unless some mechanism is used to prevent this from happening. The jobs left in the system will be the ones that have very…
Resource Management in Constrained Dynamic Situations
NASA Astrophysics Data System (ADS)
Seok, Jinwoo
Resource management is considered in this dissertation for systems with limited resources, possibly combined with other system constraints, in unpredictably dynamic environments. Resources may represent fuel, power, capabilities, energy, and so on. Resource management is important for many practical systems; usually, resources are limited, and their use must be optimized. Furthermore, systems are often constrained, and constraints must be satisfied for safe operation. Simplistic resource management can result in poor use of resources and failure of the system. Furthermore, many real-world situations involve dynamic environments. Many traditional problems are formulated based on the assumptions of given probabilities or perfect knowledge of future events. However, in many cases, the future is completely unknown, and information on or probabilities about future events are not available. In other words, we operate in unpredictably dynamic situations. Thus, a method is needed to handle dynamic situations without knowledge of the future, but few formal methods have been developed to address them. Hence, the goal is to design resource management methods for constrained systems, with limited resources, in unpredictably dynamic environments. To this end, resource management is organized hierarchically into two levels: 1) planning, and 2) control. In the planning level, the set of tasks to be performed is scheduled based on limited resources to maximize resource usage in unpredictably dynamic environments. In the control level, the system controller is designed to follow the schedule by considering all the system constraints for safe and efficient operation. Consequently, this dissertation is mainly divided into two parts: 1) planning level design, based on finite state machines, and 2) control level methods, based on model predictive control. We define a recomposable restricted finite state machine to handle limited resource situations and unpredictably dynamic environments for the planning level. To obtain a policy, dynamic programming is applied, and to obtain a solution, limited breadth-first search is applied to the recomposable restricted finite state machine. A multi-function phased array radar resource management problem and an unmanned aerial vehicle patrolling problem are treated using recomposable restricted finite state machines. Then, we use model predictive control for the control level, because it allows constraint handling and setpoint tracking for the schedule. An aircraft power system management problem is treated that aims to develop an integrated control system for an aircraft gas turbine engine and electrical power system using rate-based model predictive control. Our results indicate that at the planning level, limited breadth-first search for recomposable restricted finite state machines generates good scheduling solutions in limited resource situations and unpredictably dynamic environments. The importance of cooperation in the planning level is also verified. At the control level, a rate-based model predictive controller allows good schedule tracking and safe operations. The importance of considering the system constraints and interactions between the subsystems is indicated. For the best resource management in constrained dynamic situations, the planning level and the control level need to be considered together.
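To give a flavor of the planning level, the sketch below reads "limited breadth-first search" as a beam search over states of a small, assumed task-selection state machine: each state tracks resource used, reward earned, and tasks done, and only a bounded number of frontier states is kept per depth. The task list, capacity, and beam width are illustrative assumptions, not the dissertation's model.

```python
# Sketch: limited (beam-width-bounded) breadth-first search over a small
# finite state machine for resource-constrained task selection.
from heapq import nlargest

tasks = [("patrol_A", 3, 5.0), ("patrol_B", 4, 7.0),
         ("recharge", 2, 0.0), ("patrol_C", 5, 9.0)]  # (name, cost, reward)
CAPACITY, BEAM = 8, 3                                 # assumed limits

# A state is (resource_used, reward, done_tuple); expand by appending any
# not-yet-done task that still fits within the resource capacity.
frontier = [(0, 0.0, ())]
best = frontier[0]
while frontier:
    nxt = []
    for used, reward, done in frontier:
        for name, cost, gain in tasks:
            if name in done or used + cost > CAPACITY:
                continue
            nxt.append((used + cost, reward + gain, done + (name,)))
    if not nxt:
        break
    # Keep only the BEAM most promising states: the "limited" part of the
    # breadth-first search that bounds memory in dynamic environments.
    frontier = nlargest(BEAM, nxt, key=lambda s: s[1])
    best = max([best] + frontier, key=lambda s: s[1])

print("plan:", best[2], "reward:", best[1], "resource used:", best[0])
```

Bounding the frontier is what makes replanning cheap enough to repeat whenever the unpredictable environment changes the task set.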
NASA Astrophysics Data System (ADS)
Budi Harja, Herman; Prakosa, Tri; Raharno, Sri; Yuwana Martawirya, Yatna; Nurhadi, Indra; Setyo Nogroho, Alamsyah
2018-03-01
The production characteristic of the job-shop industry, in which products have wide variety but small volumes, means that every machine tool is shared among production processes with a dynamic load. This dynamic operating condition directly affects the reliability of machine tool components. Hence, the maintenance schedule for every component should be calculated based on the actual usage of the machine tool components. This paper describes a study on the development of a monitoring system for obtaining information about each CNC machine tool component's usage in real time, approached by grouping components based on their operation phase. A special device has been developed for monitoring machine tool component usage by utilizing usage-phase activity data taken from certain electronic components within the CNC machine. The components are the adaptor, servo driver and spindle driver, as well as some additional components such as a microcontroller and relays. The obtained data are utilized for detecting machine utilization phases such as the power-on state, machine-ready state or spindle-running state. Experimental results have shown that the developed CNC machine tool monitoring system is capable of obtaining phase information on machine tool usage as well as its duration, and of displaying the information in the user interface application.
Dynamically allocated virtual clustering management system
NASA Astrophysics Data System (ADS)
Marcus, Kelvin; Cannata, Jess
2013-05-01
The U.S. Army Research Laboratory (ARL) has built a "Wireless Emulation Lab" to support research in wireless mobile networks. In our current experimentation environment, our researchers need the capability to run clusters of heterogeneous nodes to model emulated wireless tactical networks where each node could contain a different operating system, application set, and physical hardware. To complicate matters, most experiments require the researcher to have root privileges. Our previous solution of using a single shared cluster of statically deployed virtual machines did not sufficiently separate each user's experiment due to undesirable network crosstalk, thus only one experiment could be run at a time. In addition, the cluster did not make efficient use of our servers and physical networks. To address these concerns, we created the Dynamically Allocated Virtual Clustering management system (DAVC). This system leverages existing open-source software to create private clusters of nodes that are either virtual or physical machines. These clusters can be utilized for software development, experimentation, and integration with existing hardware and software. The system uses the Grid Engine job scheduler to efficiently allocate virtual machines to idle systems and networks. The system deploys stateless nodes via network booting. The system uses 802.1Q Virtual LANs (VLANs) to prevent experimentation crosstalk and to allow for complex, private networks, eliminating the need to map each virtual machine to a specific switch port. The system monitors the health of the clusters and the underlying physical servers, and it maintains cluster usage statistics for historical trends. Users can start private clusters of heterogeneous nodes with root privileges for the duration of the experiment. Users also control when to shut down their clusters.
Multiresource allocation and scheduling for periodic soft real-time applications
NASA Astrophysics Data System (ADS)
Gopalan, Kartik; Chiueh, Tzi-cker
2001-12-01
Real-time applications that utilize multiple system resources, such as CPU, disks, and network links, require coordinated scheduling of these resources in order to meet their end-to-end performance requirements. Most state-of-the-art operating systems support independent resource allocation and deadline-driven scheduling but lack coordination among multiple heterogeneous resources. This paper describes the design and implementation of an Integrated Real-time Resource Scheduler (IRS) that performs coordinated allocation and scheduling of multiple heterogeneous resources on the same machine for periodic soft real-time applications. The principal feature of IRS is a heuristic multi-resource allocation algorithm that reserves multiple resources for real-time applications in a manner that can maximize the number of applications admitted into the system in the long run. At run-time, a global scheduler dispatches the tasks of a soft real-time application to individual resource schedulers according to the precedence constraints between tasks. The individual resource schedulers, which could be any deadline-based schedulers, can make scheduling decisions locally and yet collectively satisfy a real-time application's performance requirements. The tightness of overall timing guarantees is ultimately determined by the properties of individual resource schedulers. However, IRS maximizes overall system resource utilization efficiency by coordinating deadline assignment across multiple tasks in a soft real-time application.
Open shop scheduling problem to minimize total weighted completion time
NASA Astrophysics Data System (ADS)
Bai, Danyu; Zhang, Zhihai; Zhang, Qiang; Tang, Mengqian
2017-01-01
A given number of jobs in an open shop scheduling environment must each be processed for given amounts of time on each of a given set of machines in an arbitrary sequence. This study aims to achieve a schedule that minimizes total weighted completion time. Owing to the strong NP-hardness of the problem, the weighted shortest processing time block (WSPTB) heuristic is presented to obtain approximate solutions for large-scale problems. Performance analysis proves the asymptotic optimality of the WSPTB heuristic in the sense of probability limits. The largest weight block rule is provided to seek optimal schedules in polynomial time for a special case. A hybrid discrete differential evolution algorithm is designed to obtain high-quality solutions for moderate-scale problems. Simulation experiments demonstrate the effectiveness of the proposed algorithms.
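For intuition about the ratio rule underlying block heuristics like WSPTB, the sketch below applies the classical weighted shortest processing time (WSPT, Smith's) rule on a single machine: sequence jobs in nonincreasing order of weight over processing time. The job data are illustrative assumptions; the paper's block variant for the multi-machine open shop is more involved.

```python
# Sketch: the WSPT (Smith's) rule on a single machine, the building block
# behind weighted-completion-time heuristics. Job data are assumed.
jobs = [("J1", 4, 2), ("J2", 2, 5), ("J3", 6, 1), ("J4", 3, 3)]  # (id, p, w)

# Sort by weight-to-processing-time ratio, largest first.
order = sorted(jobs, key=lambda j: j[2] / j[1], reverse=True)

t, twct = 0, 0
for name, p, w in order:
    t += p                 # completion time of this job
    twct += w * t          # accumulate weighted completion time
    print(f"{name}: completes at {t}, weighted term {w * t}")
print("total weighted completion time:", twct)
```

On a single machine this ordering is provably optimal; in the open shop it only serves as a priority rule, which is why the paper's asymptotic-optimality analysis is needed.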
ERIC Educational Resources Information Center
Instructor, 1983
1983-01-01
Instructor's Computer-Using Teachers Board members give practical tips on how to get a classroom ready for a new computer, introduce students to the machine, and help them learn about programming and computer literacy. Safety, scheduling, and supervision requirements are noted. (PP)
Minimization of Delay Costs in the Realization of Production Orders in Two-Machine System
NASA Astrophysics Data System (ADS)
Dylewski, Robert; Jardzioch, Andrzej; Dworak, Oliver
2018-03-01
The article presents a new algorithm that determines the optimal scheduling of production orders in a two-machine system based on the minimum cost of order delays. The formulated algorithm uses the branch and bound method and is a generalisation of an algorithm for determining the sequence of production orders with the minimal sum of delays. To illustrate the proposed algorithm, the article contains examples accompanied by graphical solution trees. Research analysing the utility of the algorithm was conducted for different sets of production orders, and the achieved results proved its usefulness when applied to order scheduling. The formulated algorithm was implemented in Matlab.
Parallel-Batch Scheduling and Transportation Coordination with Waiting Time Constraint
Gong, Hua; Chen, Daheng; Xu, Ke
2014-01-01
This paper addresses a parallel-batch scheduling problem that incorporates the transportation of raw materials or semifinished products before processing, with a waiting time constraint. The orders, located at different suppliers, are transported by vehicles to a manufacturing facility for further processing. One vehicle can load only one order in one shipment. Each order arriving at the facility must be processed within the limited waiting time. The orders are processed in batches on a parallel-batch machine, where a batch contains several orders and the processing time of the batch is the largest processing time of the orders in it. The goal is to find a schedule that minimizes the sum of the total flow time and the production cost. We prove that the general problem is NP-hard in the strong sense. We also demonstrate that the problem with equal processing times on the machine is NP-hard. Furthermore, a dynamic programming algorithm in pseudopolynomial time is provided to prove its ordinary NP-hardness. An optimal algorithm in polynomial time is presented to solve a special case with equal processing times and equal transportation times for each order. PMID:24883385
Taxi Time Prediction at Charlotte Airport Using Fast-Time Simulation and Machine Learning Techniques
NASA Technical Reports Server (NTRS)
Lee, Hanbong
2016-01-01
Accurate taxi time prediction is required to enable efficient runway scheduling that can increase runway throughput and reduce taxi times and fuel consumption on the airport surface. Currently, NASA and American Airlines are jointly developing a decision-support tool called Spot and Runway Departure Advisor (SARDA) that assists airport ramp controllers in making gate pushback decisions and improves the overall efficiency of airport surface traffic. In this presentation, we propose to use Linear Optimized Sequencing (LINOS), a discrete-event fast-time simulation tool, to predict taxi times and provide the estimates to the runway scheduler in real-time airport operations. To assess its prediction accuracy, we also introduce a data-driven analytical method using machine learning techniques. These two taxi time prediction methods are evaluated with actual taxi time data obtained from the SARDA human-in-the-loop (HITL) simulation for Charlotte Douglas International Airport (CLT), using various performance measurement metrics. Based on the taxi time prediction results, we also discuss how the prediction accuracy can be affected by the operational complexity at this airport and how we can improve the fast-time simulation model before implementing it with an airport scheduling algorithm in a real-time environment.
Kim, Jongin; Park, Hyeong-jun
2016-01-01
The purpose of this study is to classify EEG data on imagined speech in a single trial. We recorded EEG data while five subjects imagined different vowels: /a/, /e/, /i/, /o/, and /u/. We divided each single-trial dataset into thirty segments and extracted features (mean, variance, standard deviation, and skewness) from all segments. To reduce the dimension of the feature vector, we applied a feature selection algorithm based on the sparse regression model. These features were classified using a support vector machine with a radial basis function kernel, an extreme learning machine, and two variants of an extreme learning machine with different kernels. Because each single trial consisted of thirty segments, our algorithm decided the label of the single trial by selecting the most frequent output among the outputs of the thirty segments. As a result, we observed that the extreme learning machine and its variants achieved better classification rates than the support vector machine with a radial basis function kernel and linear discriminant analysis. Thus, our results suggest that EEG responses to imagined speech can be successfully classified in a single trial using an extreme learning machine with radial basis function and linear kernels. This study on the classification of imagined speech might contribute to the development of silent speech BCI systems. PMID:28097128
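For readers unfamiliar with the classifier family, the sketch below shows the core of an extreme learning machine: a fixed random hidden layer followed by a closed-form least-squares readout. The synthetic data stand in for the per-segment statistics (mean, variance, standard deviation, skewness); all sizes and the ridge term are assumptions, not the study's settings.

```python
# Sketch of a basic extreme learning machine (ELM) classifier.
# Synthetic features and all hyperparameters are assumed for illustration.
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for feature vectors from EEG segments, 5 vowel classes.
X = rng.normal(size=(300, 20))
y = rng.integers(0, 5, size=300)
Y = np.eye(5)[y]                      # one-hot targets

# Random, untrained hidden layer (the "extreme" part of ELM).
n_hidden = 100
W = rng.normal(size=(20, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)

# Closed-form output weights via regularized least squares.
lam = 1e-2
beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ Y)

pred = np.argmax(np.tanh(X @ W + b) @ beta, axis=1)
print("training accuracy:", (pred == y).mean())
```

Because only the output weights are solved for, training is a single linear solve, which is why ELMs are attractive for repeated per-segment classification as in this study.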
Process Development and Micro-Machining of MARBLE Foam-Cored Rexolite Hemi-Shell Ablator Capsules
Randolph, Randall Blaine; Oertel, John A.; Schmidt, Derek William; ...
2016-06-30
For this study, machined CH hemi-shell ablator capsules have been successfully produced by the MST-7 Target Fabrication Team at Los Alamos National Laboratory. Process development and micro-machining techniques have been developed to produce capsules for both the Omega and National Ignition Facility (NIF) campaigns. These capsules are gas filled up to 10 atm and consist of a machined plastic hemi-shell outer layer that accommodates various specially engineered low-density polystyrene foam cores. Machining and assembly of the two-part, step-jointed plastic hemi-shell outer layer required development of new techniques, processes, and tooling while still meeting very aggressive shot schedules for both campaigns. Finally, problems encountered and process improvements will be discussed that describe this very unique, complex capsule design approach through the first Omega proof-of-concept version to the larger NIF version.
Belke, T W
2000-05-01
Six male Wistar rats were exposed to different orders of reinforcement schedules to investigate whether estimates from Herrnstein's (1970) single-operant matching law equation would vary systematically with schedule order. Reinforcement schedules were arranged in orders of increasing and decreasing reinforcement rate. Subsequently, all rats were exposed to a single reinforcement schedule within a session to determine within-session changes in responding. For each condition, the operant was lever pressing and the reinforcing consequence was the opportunity to run for 15 s. Estimates of k and R_O were higher when reinforcement schedules were arranged in order of increasing reinforcement rate. Within a session on a single reinforcement schedule, response rates increased between the beginning and the end of the session. A positive correlation between the difference in parameters between schedule orders and the difference in response rates within a session suggests that the within-session change in response rates may be related to the difference in the asymptotes. These results call into question the validity of parameter estimates from Herrnstein's (1970) equation when reinforcer efficacy changes within a session.
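For reference, the equation whose parameters k and R_O the study estimates is Herrnstein's (1970) single-operant hyperbola, given here in its standard textbook form, with B the response rate and R the obtained reinforcement rate:

```latex
% Herrnstein's (1970) single-operant matching law (standard form).
% B: response rate; R: obtained reinforcement rate;
% k: asymptotic response rate; R_O: rate of extraneous (background) reinforcement.
B = \frac{k\,R}{R + R_{O}}
```

Here k is the asymptotic response rate and R_O the rate of background reinforcement; the study's finding is that both estimates shift with the order in which schedules are presented.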
NASA Astrophysics Data System (ADS)
Buddala, Raviteja; Mahapatra, Siba Sankar
2017-11-01
The flexible flow shop (or hybrid flow shop) scheduling problem is an extension of the classical flow shop scheduling problem. In a simple flow shop configuration, a job having `g' operations is performed on `g' operation centres (stages), with each stage having only one machine. If any stage contains more than one machine to provide alternate processing facilities, the problem becomes a flexible flow shop problem (FFSP). The FFSP, which contains all the complexities involved in simple flow shop and parallel machine scheduling problems, is a well-known NP-hard (non-deterministic polynomial time) problem. Owing to the high computational complexity involved in solving these problems, it is not always possible to obtain an optimal solution in a reasonable computation time. To obtain near-optimal solutions in a reasonable computation time, a large variety of meta-heuristics have been proposed in the past. However, tuning algorithm-specific parameters for solving the FFSP is rather tricky and time consuming. To address this limitation, teaching-learning-based optimization (TLBO) and the JAYA algorithm are chosen for the study, because these are not only recent meta-heuristics but also require no tuning of algorithm-specific parameters. Although these algorithms seem elegant, they lose solution diversity after a few iterations and get trapped at local optima. To alleviate this drawback, a new local search procedure is proposed in this paper to improve the solution quality. Further, a mutation strategy (inspired by genetic algorithms) is incorporated in the basic algorithm to maintain solution diversity in the population. Computational experiments have been conducted on standard benchmark problems to calculate makespan and computational time. It is found that the rate of convergence of TLBO is superior to that of JAYA. From the results, it is found that TLBO and JAYA outperform many algorithms reported in the literature and can be treated as efficient methods for solving the FFSP.
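For reference, the sketch below shows the published parameter-free JAYA update (move toward the best solution and away from the worst) on a toy continuous objective. The paper couples JAYA and TLBO with a solution encoding for the FFSP, the new local search, and mutation, none of which is reproduced here; the objective and population sizes are assumptions.

```python
# Sketch of the JAYA update rule on a toy minimization problem.
# Population size, iterations, and objective are assumed for illustration.
import numpy as np

rng = np.random.default_rng(2)

def sphere(x):                      # illustrative objective (minimize)
    return float(np.sum(x ** 2))

pop = rng.uniform(-5, 5, size=(30, 10))
fit = np.array([sphere(x) for x in pop])

for _ in range(300):
    best, worst = pop[np.argmin(fit)], pop[np.argmax(fit)]
    for i in range(len(pop)):
        r1, r2 = rng.random(pop.shape[1]), rng.random(pop.shape[1])
        # JAYA: move toward the best and away from the worst; note the
        # characteristic |x| terms and the absence of tunable parameters.
        cand = pop[i] + r1 * (best - np.abs(pop[i])) \
                      - r2 * (worst - np.abs(pop[i]))
        f = sphere(cand)
        if f < fit[i]:              # greedy acceptance
            pop[i], fit[i] = cand, f

print("best value found:", fit.min())
```

The absence of algorithm-specific parameters (only population size and iteration count remain) is exactly the property the abstract highlights.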
A multi-group and preemptable scheduling of cloud resource based on HTCondor
NASA Astrophysics Data System (ADS)
Jiang, Xiaowei; Zou, Jiaheng; Cheng, Yaodong; Shi, Jingyan
2017-10-01
Due to the features of virtual machines (flexibility, easy control and varied system environments), more and more fields, including high energy physics, utilize virtualization technology to construct distributed systems from virtual resources. This paper introduces a method used in high energy physics that supports multiple resource groups and preemptable cloud resource scheduling, combining virtual machines with HTCondor (a batch system). It makes resource control more flexible and more efficient, and makes resource scheduling independent of job scheduling. Firstly, resources belong to different experiment groups, and user groups map to resource groups (the same as experiment groups) one-to-one or many-to-one. To keep these group relationships simple to manage, we designed a permission controlling component to ensure that the different resource groups get suitable jobs. Secondly, to elastically allocate resources to the appropriate resource group, it is necessary to schedule resources just as jobs are scheduled. This paper therefore designs a cloud resource scheduler that maintains a resource queue and allocates an appropriate amount of virtual resources to the requesting resource group. Thirdly, in some situations, because resources are occupied for a long time, they need to be preempted. This paper adds a preemption function to the resource scheduling that implements resource preemption based on group priority. Additionally, preemption is soft: when virtual resources are preempted, jobs are not killed but are held and rematched later. This is implemented with the help of HTCondor by storing the held job information in the scheduler, releasing the job to idle status and performing a second match. In IHEP (Institute of High Energy Physics), we have built a batch system based on HTCondor with a virtual resource pool based on OpenStack, and this paper shows some cases from the JUNO and LHAASO experiments. The results indicate that multi-group and preemptable resource scheduling efficiently supports multiple groups and soft preemption. Additionally, the permission controlling component has been used in the local computing cluster, supporting the JUNO, CMS and LHAASO experiments, and the scale will be expanded to more experiments in the first half of the year, including DYW, BES and so on. This is evidence that the permission controlling is efficient.
Choice between Single and Multiple Reinforcers in Concurrent-Chains Schedules
ERIC Educational Resources Information Center
Mazur, James E.
2006-01-01
Pigeons responded on concurrent-chains schedules with equal variable-interval schedules as initial links. One terminal link delivered a single reinforcer after a fixed delay, and the other terminal link delivered either three or five reinforcers, each preceded by a fixed delay. Some conditions included a postreinforcer delay after the single…
The evolution of machining-induced surface of single-crystal FCC copper via nanoindentation
NASA Astrophysics Data System (ADS)
Zhang, Lin; Huang, Hu; Zhao, Hongwei; Ma, Zhichao; Yang, Yihan; Hu, Xiaoli
2013-05-01
The physical properties of a machining-induced new surface depend on the performance of the initial defect surface and the deformed layer in the subsurface of the bulk material. In this paper, three-dimensional molecular dynamics simulations of nanoindentation are performed on the single-point diamond-turned surface of single-crystal copper and compared with pristine single-crystal face-centered cubic copper. The simulation results indicate that the nucleation of dislocations in the nanoindentation test differs between the machining-induced surface and pristine single-crystal copper. The dislocation embryos are gradually developed from the sites of homogeneous random nucleation around the indenter in the pristine single-crystal specimen, while the dislocation embryos derived from vacancy-related defects are distributed in the damage layer of the subsurface beneath the machining-induced surface. The results show that the machining-induced surface is softer than pristine single-crystal copper. Then, nanocutting simulations are performed along different crystal orientations on the same crystal surface. It is shown that the crystal orientation directly influences the dislocation formation and distribution of the machining-induced surface. The crystal orientation of nanocutting is further verified to affect both residual defect generation and its propagation direction, which are important in assessing the change of mechanical properties, such as hardness and Young's modulus, after the nanocutting process.
Better approximation guarantees for job-shop scheduling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldberg, L.A.; Paterson, M.; Srinivasan, A.
1997-06-01
Job-shop scheduling is a classical NP-hard problem. Shmoys, Stein & Wein presented the first polynomial-time approximation algorithm for this problem that has a good (polylogarithmic) approximation guarantee. We improve the approximation guarantee of their work, and present further improvements for some important NP-hard special cases of this problem (e.g., in the preemptive case where machines can suspend work on operations and later resume). We also present NC algorithms with improved approximation guarantees for some NP-hard special cases.
The Use of the MASCOT Philosophy for the Construction of Ada Programs,
1983-10-01
dependent units must be recompiled. Because of Ada's commitment to abstract data types, tasks are treated as data types with certain restrictions. A task… 3.3.3.1.4 End of Slice Action: The scheduling algorithm determines, for each type of Slice termination, how the Scheduler treats Activities whose Slice has… Pools. The MASCOT Machine treats them as constructionally equivalent (refer 3.3.1.1.1). Because of the constraints brought in by the formulation of…
Balancing Contention and Synchronization on the Intel Paragon
NASA Technical Reports Server (NTRS)
Bokhari, Shahid H.; Nicol, David M.
1996-01-01
The Intel Paragon is a mesh-connected distributed memory parallel computer. It uses an oblivious and deterministic message routing algorithm: this permits us to develop highly optimized schedules for frequently needed communication patterns. The complete exchange is one such pattern. Several approaches are available for carrying it out on the mesh. We study an algorithm developed by Scott. This algorithm assumes that a communication link can carry one message at a time and that a node can only transmit one message at a time. It requires global synchronization to enforce a schedule of transmissions. Unfortunately, global synchronization has substantial overhead on the Paragon. At the same time, the powerful interconnection mechanism of this machine permits 2 or 3 messages to share a communication link with minor overhead. It can also overlap multiple message transmissions from the same node to some extent. We develop a generalization of Scott's algorithm that executes the complete exchange with a prescribed contention. Schedules that incur greater contention require fewer synchronization steps. This permits us to trade off contention against synchronization overhead. We describe the performance of this algorithm and compare it with Scott's original algorithm as well as with a naive algorithm that does not take the interconnection structure into account. The bounded-contention algorithm is always better than Scott's algorithm and outperforms the naive algorithm for all but the smallest message sizes. The naive algorithm fails to work on meshes larger than 12 x 12. These results show that due consideration of processor interconnect and machine performance parameters is necessary to obtain peak performance from the Paragon and its successor mesh machines.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, J.R. (Netrologic, Inc., San Diego, CA)
1988-01-01
Topics presented include integrating neural networks and expert systems, neural networks and signal processing, machine learning, cognition and avionics applications, artificial intelligence and man-machine interface issues, real time expert systems, artificial intelligence, and engineering applications. Also considered are advanced problem solving techniques, combinational optimization for scheduling and resource control, data fusion/sensor fusion, back propagation with momentum, shared weights and recurrency, automatic target recognition, cybernetics, optical neural networks.
Third Conference on Artificial Intelligence for Space Applications, part 1
NASA Technical Reports Server (NTRS)
Denton, Judith S. (Compiler); Freeman, Michael S. (Compiler); Vereen, Mary (Compiler)
1987-01-01
The application of artificial intelligence to spacecraft and aerospace systems is discussed. Expert systems, robotics, space station automation, fault diagnostics, parallel processing, knowledge representation, scheduling, man-machine interfaces and neural nets are among the topics discussed.
30 CFR 75.209 - Automated Temporary Roof Support (ATRS) systems.
Code of Federal Regulations, 2011 CFR
2011-07-01
... paragraph shall be met according to the following schedule: (1) All new machines ordered after March 28... the left, right or beyond the ATRS system, shall not exceed 5 feet. (e) Each ATRS system shall meet...
30 CFR 75.209 - Automated Temporary Roof Support (ATRS) systems.
Code of Federal Regulations, 2010 CFR
2010-07-01
... paragraph shall be met according to the following schedule: (1) All new machines ordered after March 28... the left, right or beyond the ATRS system, shall not exceed 5 feet. (e) Each ATRS system shall meet...
Structural Benchmark Testing of Superalloy Lattice Block Subelements Completed
NASA Technical Reports Server (NTRS)
2004-01-01
Superalloy lattice block panels, which are produced directly by investment casting, are composed of thin ligaments arranged in three-dimensional triangulated trusslike structures (see the preceding figure). Optionally, solid panel face sheets can be formed integrally during casting. In either form, lattice block panels can easily be produced with weights less than 25 percent of the mass of a solid panel. Inconel 718 (IN 718) and MarM-247 superalloy lattice block panels have been developed under NASA's Ultra-Efficient Engine Technology Project and Higher Operating Temperature Propulsion Components Project to take advantage of the superalloys' high strength and elevated temperature capability with the inherent light weight and high stiffness of the lattice architecture (ref. 1). These characteristics are important in the future development of turbine engine components. Casting quality and structural efficiency were evaluated experimentally using small beam specimens machined from the cast and heat treated 140- by 300- by 11-mm panels. The matrix of specimens included samples of each superalloy in both open-celled and single-face-sheet configurations, machined from longitudinal, transverse, and diagonal panel orientations. Thirty-five beam subelements were tested in Glenn's Life Prediction Branch's material test machine at room temperature and 650 C under both static (see the following photograph) and cyclic load conditions. Surprisingly, test results exceeded initial linear elastic analytical predictions. This was likely a result of the formation of plastic hinges and redundancies inherent in lattice block geometry, which was not considered in the finite element models. The value of a single face sheet was demonstrated by increased bending moment capacity, where the face sheet simultaneously increased the gross section modulus and braced the compression ligaments against early buckling as seen in open-cell specimens. Preexisting flaws in specimens were not a discriminator in flexural, shear, or stiffness measurements, again because of redundant load paths available in the lattice block structure. Early test results are available in references 2 and 3; more complete analyses are scheduled for publication in 2004.
More reliable protein NMR peak assignment via improved 2-interval scheduling.
Chen, Zhi-Zhong; Lin, Guohui; Rizzi, Romeo; Wen, Jianjun; Xu, Dong; Xu, Ying; Jiang, Tao
2005-03-01
Protein NMR peak assignment refers to the process of assigning a group of "spin systems" obtained experimentally to a protein sequence of amino acids. The automation of this process is still an unsolved and challenging problem in NMR protein structure determination. Recently, protein NMR peak assignment has been formulated as an interval scheduling problem (ISP), where a protein sequence P of amino acids is viewed as a discrete time interval I (the amino acids on P correspond one-to-one to the time units of I), each subset S of spin systems that are known to originate from consecutive amino acids of P is viewed as a "job" j(S), the preference of assigning S to a subsequence P′ of consecutive amino acids on P is viewed as the profit of executing job j(S) in the subinterval of I corresponding to P′, and the goal is to maximize the total profit of executing the jobs (on a single machine) during I. The interval scheduling problem is MAX SNP-hard in general; but in the real practice of protein NMR peak assignment, each job j(S) usually requires at most 10 consecutive time units, and typically the jobs that require one or two consecutive time units are the most difficult to assign/schedule. In order to solve these most difficult assignments, we present an efficient 13/7-approximation algorithm for the special case of the interval scheduling problem where each job takes one or two consecutive time units. Combining this algorithm with a greedy filtering strategy for handling long jobs (i.e., jobs that need more than two consecutive time units), we obtain a new efficient heuristic for protein NMR peak assignment. Our experimental study shows that the new heuristic produces the best peak assignment in most of the cases, compared with the NMR peak assignment algorithms in the recent literature. The above algorithm is also the first approximation algorithm for a nontrivial case of the well-known interval scheduling problem that breaks the ratio-2 barrier.
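To make the scheduling formulation concrete, here is a minimal sketch of the classic weighted interval scheduling dynamic program, a much simplified relative of the ISP above in which each job has exactly one candidate placement (the NMR problem allows many placements per spin-system job, which is what makes it MAX SNP-hard). The intervals and profits are illustrative assumptions.

```python
# Sketch: weighted interval scheduling by dynamic programming.
# Each tuple is one candidate placement (start, finish, profit); data assumed.
import bisect

jobs = sorted([(1, 3, 5), (2, 5, 6), (4, 6, 5), (6, 7, 4), (5, 8, 11)],
              key=lambda j: j[1])            # sort by finish time
finishes = [f for _, f, _ in jobs]

def prev_compatible(i):
    # Rightmost job finishing at or before job i starts.
    return bisect.bisect_right(finishes, jobs[i][0]) - 1

best = [0] * (len(jobs) + 1)
for i in range(len(jobs)):
    take = jobs[i][2] + best[prev_compatible(i) + 1]
    best[i + 1] = max(best[i], take)         # skip job i, or take it

print("max total profit:", best[-1])
```

The one-placement-per-job case is polynomial, as shown; the hardness and the 13/7 ratio in the paper come from jobs having many alternative placements along the sequence.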
THRESHOLD LOGIC SYNTHESIS OF SEQUENTIAL MACHINES.
The application of threshold logic to the design of sequential machines is the subject of this research. A single layer of threshold logic units in...advantages of fewer components because of the use of threshold logic, along with very high-speed operation resulting from the use of only a single layer of...logic. In some instances, namely for asynchronous machines, the only delay need be the natural delay of the single layer of threshold elements. It is
JIGSAW: Preference-directed, co-operative scheduling
NASA Technical Reports Server (NTRS)
Linden, Theodore A.; Gaw, David
1992-01-01
Techniques that enable humans and machines to cooperate in the solution of complex scheduling problems have evolved out of work on the daily allocation and scheduling of Tactical Air Force resources. A generalized, formal model of these applied techniques is being developed. It is called JIGSAW by analogy with the multi-agent, constructive process used when solving jigsaw puzzles. JIGSAW begins from this analogy and extends it by propagating local preferences into global statistics that dynamically influence the value and variable ordering decisions. The statistical projections also apply to abstract resources and time periods--allowing more opportunities to find a successful variable ordering by reserving abstract resources and deferring the choice of a specific resource or time period.
Workflow as a Service in the Cloud: Architecture and Scheduling Algorithms.
Wang, Jianwu; Korambath, Prakashan; Altintas, Ilkay; Davis, Jim; Crawl, Daniel
2014-01-01
With more and more workflow systems adopting the cloud as their execution environment, it becomes increasingly challenging to efficiently manage various workflows, virtual machines (VMs) and workflow execution on VM instances. To make the system scalable and easy to extend, we design a Workflow as a Service (WFaaS) architecture with independent services. A core part of the architecture is how to efficiently respond to continuous workflow requests from users and schedule their executions in the cloud. Based on different targets, we propose four heuristic workflow scheduling algorithms for the WFaaS architecture, and analyze the differences and best usages of the algorithms in terms of performance, cost, and the price/performance ratio via experimental studies.
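As a flavor of the trade-offs such heuristics navigate, the sketch below greedily assigns each workflow task to the cheapest VM type that still meets its deadline. The VM types, prices, and tasks are illustrative assumptions, and this is not one of the paper's four algorithms, which are not specified here.

```python
# Sketch: deadline-aware, cost-minimizing assignment of workflow tasks to
# VM types. All data are assumed for illustration.
vm_types = [                       # (name, $ per hour, relative speed)
    ("small",  0.05, 1.0),
    ("medium", 0.10, 2.0),
    ("large",  0.20, 4.0),
]

tasks = [                          # (name, work units, deadline hours)
    ("ingest",  4.0, 3.0),
    ("analyze", 9.0, 3.0),
    ("report",  2.0, 1.0),
]

for name, work, deadline in tasks:
    # Among VM types fast enough to finish before the deadline, pick the
    # cheapest by total cost (price per hour x runtime).
    feasible = [(price * (work / speed), vm, work / speed)
                for vm, price, speed in vm_types
                if work / speed <= deadline]
    if not feasible:
        print(f"{name}: no VM type meets the deadline")
        continue
    cost, vm, hours = min(feasible)
    print(f"{name}: run on {vm} for {hours:.2f} h, cost ${cost:.3f}")
```

A real WFaaS scheduler would also reuse already-running instances and respect inter-task dependencies, which is where the four proposed heuristics differ.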
DORCA computer program. Volume 1: User's guide
NASA Technical Reports Server (NTRS)
Wray, S. T., Jr.
1971-01-01
The Dynamic Operational Requirements and Cost Analysis Program (DORCA) was written to provide a top level analysis tool for NASA. DORCA relies on a man-machine interaction to optimize results based on external criteria. DORCA relies heavily on outside sources to provide cost information and vehicle performance parameters as the program does not determine these quantities but rather uses them. Given data describing missions, vehicles, payloads, containers, space facilities, schedules, cost values and costing procedures, the program computes flight schedules, cargo manifests, vehicle fleet requirements, acquisition schedules and cost summaries. The program is designed to consider the Earth Orbit, Lunar, Interplanetary and Automated Satellite Programs. A general outline of the capabilities of the program are provided.
A Solution Method of Job-shop Scheduling Problems by the Idle Time Shortening Type Genetic Algorithm
NASA Astrophysics Data System (ADS)
Ida, Kenichi; Osawa, Akira
In this paper, we propose a new idle time shortening method for job-shop scheduling problems (JSPs) and insert it into a genetic algorithm (GA). The purpose of the JSP is to find a schedule with the minimum makespan, and we suppose that reducing machine idle time is an effective way to improve the makespan. The left shift is a well-known algorithm for shortening idle time, but it cannot always move work into an idle interval; for that reason, some idle time is not shortened by the left shift. We propose two algorithms that shorten such idle time, and then combine these algorithms with the reversal of a schedule. We apply the GA with these algorithms to benchmark problems and show their effectiveness.
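For context, a minimal sketch of the classic left shift that the proposed method extends: slide an operation into the earliest idle gap on its machine that is long enough and respects the job-predecessor ready time. The schedule data are illustrative assumptions; the paper's two new shortening algorithms and the schedule reversal are not reproduced here.

```python
# Sketch: the classic "left shift" used when decoding JSP schedules.
# Data are assumed for illustration.
def left_shift(machine_ops, ready, dur):
    """machine_ops: sorted list of (start, end) busy intervals on the machine.
    ready: earliest start allowed by the job's previous operation.
    dur: processing time. Returns the chosen start time."""
    prev_end = 0
    for start, end in machine_ops:
        gap_start = max(prev_end, ready)
        if gap_start + dur <= start:      # operation fits in this idle gap
            return gap_start
        prev_end = end
    return max(prev_end, ready)           # otherwise append at the end

ops = [(0, 3), (7, 9)]                    # busy intervals on one machine
print(left_shift(ops, ready=2, dur=2))    # -> 3 (fits in the 3..7 gap)
print(left_shift(ops, ready=8, dur=4))    # -> 9 (appended after the last op)
```

The failure mode the paper targets is visible here: an operation whose ready time or duration rules out every existing gap cannot be left-shifted, so that idle time survives.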
ERIC Educational Resources Information Center
Tiger, Jeffrey H.; Toussaint, Karen A.; Roath, Christopher T.
2010-01-01
The current study compared the effects of choice and no-choice reinforcement conditions on the task responding of 3 children with autism across 2 single-operant paradigm reinforcer assessments. The first assessment employed simple fixed-ratio (FR) schedules; the second used progressive-ratio (PR) schedules. The latter assessment identified the…
Multi-objective decision-making model based on CBM for an aircraft fleet
NASA Astrophysics Data System (ADS)
Luo, Bin; Lin, Lin
2018-04-01
Modern production management patterns, in which multiple units (e.g., a fleet of aircraft) are managed in a holistic manner, have brought new challenges for multi-unit maintenance decision making. To build a good maintenance plan, not only must the maintenance of each individual machine be considered, but also the maintenance of the other units in the fleet. Most condition-based maintenance research for aircraft has focused solely on reducing maintenance cost or on maximizing the availability of a single aircraft, and little work has addressed both objectives of minimizing cost and maximizing fleet availability (the total number of available aircraft in the fleet). A multi-objective decision-making model based on condition-based maintenance that addresses both objectives is therefore established. Furthermore, since the decision maker may prefer the final optimal result in the form of discrete intervals rather than a set of points (non-dominated solutions) in real decision-making problems, a novel multi-objective optimization method based on support vector regression is proposed to solve the model. Finally, a case study of a fleet is conducted, with the results demonstrating that the approach efficiently generates outcomes that meet the schedule requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... operating duplicating machinery. Not included in direct costs are overhead expenses such as costs of space... form of paper copy, microform, audio-visual materials, or machine-readable documentation (e.g... programs of scholarly research. (5) Non-commercial scientific institution means an institution that is not...
Code of Federal Regulations, 2012 CFR
2012-07-01
... operating duplicating machinery. Not included in direct costs are overhead expenses such as costs of space... form of paper copy, microform, audio-visual materials, or machine-readable documentation (e.g... programs of scholarly research. (5) Non-commercial scientific institution means an institution that is not...
Rural Renaissance. Revitalizing Small High Schools.
ERIC Educational Resources Information Center
Ford, Edmund A.
Written in 1961, this document presents the rationales and applications of what were and still are, in most instances, considered innovative practices. Subjects discussed are building designs, teaching machines, educational television, flexible scheduling, multiple classes and small-group techniques, teacher assistants, shared services, and…
NASA Astrophysics Data System (ADS)
Utegulov, B. B.; Utegulov, A. B.; Meiramova, S.
2018-02-01
The paper proposes the development of a self-learning machine for creating models of microprocessor-based single-phase ground fault protection devices in networks with an isolated neutral voltage higher than 1000 V. Such a machine makes it possible to implement effectively mathematical models of automatic adjustment of the protection settings of single-phase earth fault protection devices.
A Study on Real-Time Scheduling Methods in Holonic Manufacturing Systems
NASA Astrophysics Data System (ADS)
Iwamura, Koji; Tanimizu, Yoshitaka; Sugimura, Nobuhiro
Recently, new architectures of manufacturing systems have been proposed to realize flexible control structures of the manufacturing systems, which can cope with dynamic changes in the volume and variety of the products and also with unforeseen disruptions, such as failures of manufacturing resources and interruptions by high-priority jobs. These include the autonomous distributed manufacturing system, the biological manufacturing system and the holonic manufacturing system. Rule-based scheduling methods were proposed and applied to the real-time production scheduling problems of the HMS (Holonic Manufacturing System) in the previous report. However, problems remain from the viewpoint of optimizing the whole production schedule. New procedures are proposed in the present paper to select production schedules, aimed at generating effective production schedules in real time. The proposed methods enable the individual holons to select suitable machining operations to be carried out in the next time period. A coordination process among the holons is also proposed, which carries out the coordination based on the effectiveness values of the individual holons.
Srivastava, Shubhika; Allada, Vivekanand; Younoszai, Adel; Lopez, Leo; Soriano, Brian D; Fleishman, Craig E; Van Hoever, Andrea M; Lai, Wyman W
2016-10-01
The American Society of Echocardiography Committee on Pediatric Echocardiography Laboratory Productivity aimed to study factors that could influence the clinical productivity of physicians and sonographers and assess longitudinal trends for the same. The first survey results indicated that productivity correlated with the total volume of echocardiograms. Survey questions were designed to assess productivity for (1) physician full-time equivalent (FTE) allocated to echocardiography reading (echocardiograms per physician FTE per day), (2) sonographer FTE (echocardiograms per sonographer FTE per year), and (3) machine utilization (echocardiograms per machine per year). Questions were also posed to assess work flow and workforce. For fiscal year 2013 or academic year 2012-2013, the mean number of total echocardiograms-including outreach, transthoracic, fetal, and transesophageal echocardiograms-per physician FTE per day was 14.3 ± 5.9, the mean number of echocardiograms per sonographer FTE per year was 1,056 ± 441, and the mean number of echocardiograms per machine per year was 778 ± 303. Both physician and sonographer productivity was higher at high-volume surgical centers and with echocardiography slots scheduled concordantly with clinic visits. Having an advanced imaging fellow and outpatient sedation correlated negatively with clinical laboratory productivity. Machine utilization was greater in laboratories with higher sonographer and physician productivity and lower for machines obtained before 2009. Measures of pediatric echocardiography laboratory staff productivity and machine utilization were shown to correlate positively with surgical volume, total echocardiography volumes, and concordant echocardiography scheduling; the same measures correlated negatively with having an advanced imaging fellow and outpatient sedation. There has been no significant change in staff productivity noted over two Committee on Pediatric Echocardiography Laboratory Productivity survey cycles, suggesting that hiring practices have matched laboratory volume increases. Copyright © 2016 American Society of Echocardiography. Published by Elsevier Inc. All rights reserved.
Diamond turning of Si and Ge single crystals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blake, P.; Scattergood, R.O.
Single-point diamond turning studies have been completed on Si and Ge crystals. A new process model was developed for diamond turning which is based on a critical depth of cut for the plastic flow-to-brittle fracture transition. This concept, when combined with the actual machining geometry for single-point turning, predicts that "ductile" machining is a combined action of plasticity and fracture. Interrupted cutting experiments also provide a means to directly measure the critical depth parameter for given machining conditions.
Strength and endurance training of an individual with left upper and lower limb amputations.
Donachy, J E; Brannon, K D; Hughes, L S; Seahorn, J; Crutcher, T T; Christian, E L
2004-04-22
The purpose of this article is to describe the development of a strength and endurance training programme designed to prepare an individual with a left glenohumeral disarticulation and transtibial amputation for a bike trip across the USA. The subject was scheduled for training three times per week over a two-month period followed by two times per week for an additional two months. Training consisted of a resistance training circuit using variable resistance machines, cycling using a recumbent stationary bike, and core stability training using stability ball exercises. Changes in strength were assessed using 10 RM tests on the resistance machines and changes in peak VO2 were monitored utilizing the Cosmed K4b pulmonary function tester. The subject demonstrated a 30.3% gain in peak VO2. The subject's 10 RM for left single limb leg press increased 36.8% and gains of at least 7.7% were seen for all other muscle groups tested. The strength and endurance training programme adapted to compensate for this subject's limb losses was effective in increasing both strength and peak VO2. Adapting exercise programmes to compensate for limb loss may allow individuals with amputations to participate in physically challenging activities that otherwise may not be available to them.
Origin of acoustic emission produced during single point machining
NASA Astrophysics Data System (ADS)
Heiple, C. R.; Carpenter, S. H.; Armentrout, D. L.
1991-05-01
Acoustic emission was monitored during single point, continuous machining of 4340 steel and Ti-6Al-4V as a function of heat treatment. Acoustic emission produced during tensile and compressive deformation of these alloys has been previously characterized as a function of heat treatment. Heat treatments which increase the strength of 4340 steel increase the amount of acoustic emission produced during deformation, while heat treatments which increase the strength of Ti-6Al-4V decrease the amount of acoustic emission produced during deformation. If chip deformation were the primary source of acoustic emission during single point machining, then opposite trends in the level of acoustic emission produced during machining as a function of material strength would be expected for these two alloys. Trends in rms acoustic emission level with increasing strength were similar for both alloys, demonstrating that chip deformation is not a major source of acoustic emission in single point machining. Acoustic emission has also been monitored as a function of machining parameters on 6061-T6 aluminum, 304 stainless steel, 17-4PH stainless steel, lead, and teflon. The data suggest that sliding friction between the nose and/or flank of the tool and the newly machined surface is the primary source of acoustic emission. Changes in acoustic emission with tool wear were strongly material dependent.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wayne F. Boyer; Gurdeep S. Hura
2005-09-01
The problem of obtaining an optimal matching and scheduling of interdependent tasks in distributed heterogeneous computing (DHC) environments is well known to be an NP-hard problem. In a DHC system, task execution time is dependent on the machine to which it is assigned, and task precedence constraints are represented by a directed acyclic graph. Recent research in evolutionary techniques has shown that genetic algorithms usually obtain more efficient schedules than other known algorithms. We propose a non-evolutionary random scheduling (RS) algorithm for efficient matching and scheduling of interdependent tasks in a DHC system. RS is a succession of randomized task orderings and a heuristic mapping from task order to schedule. Randomized task ordering is effectively a topological sort where the outcome may be any possible task order for which the task precedence constraints are maintained. A detailed comparison to existing evolutionary techniques (GA and PSGA) shows the proposed algorithm is less complex than evolutionary techniques, computes schedules in less time, requires less memory and has fewer tuning parameters. Simulation results show that the average schedules produced by RS are approximately as efficient as PSGA schedules for all cases studied and clearly more efficient than PSGA for certain cases. The standard formulation for the scheduling problem addressed in this paper is Rm|prec|Cmax.
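As an illustration of the randomized task ordering described above, here is a minimal Python sketch (an assumption-level reading, not the authors' implementation): a topological sort in which the next task is drawn uniformly at random from the currently ready tasks, so that any precedence-feasible order can occur. RS would draw many such orders, map each to a schedule heuristically, and keep the best.

    import random

    def random_topological_order(preds):
        """preds: {task: set of predecessor tasks}. Returns one random
        precedence-feasible order; RS samples many and keeps the best."""
        preds = {t: set(p) for t, p in preds.items()}   # work on a copy
        ready = [t for t, p in preds.items() if not p]
        order = []
        while ready:
            t = random.choice(ready)                    # the randomizing step
            ready.remove(t)
            order.append(t)
            for u, p in preds.items():
                if t in p:
                    p.discard(t)
                    if not p:
                        ready.append(u)                 # u just became ready
        return order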
Non-symmetric approach to single-screw expander and compressor modeling
NASA Astrophysics Data System (ADS)
Ziviani, Davide; Groll, Eckhard A.; Braun, James E.; Horton, W. Travis; De Paepe, M.; van den Broek, M.
2017-08-01
Single-screw type volumetric machines are employed both as compressors in refrigeration systems and, more recently, as expanders in organic Rankine cycle (ORC) applications. The single-screw machine is characterized by having a central grooved rotor and two mating toothed starwheels that isolate the working chambers. One of the main features of such machines is the simultaneous occurrence of the compression or expansion processes on both sides of the main rotor, which results in a more balanced loading on the main shaft bearings with respect to twin-screw machines. However, the meshing between starwheels and main rotor is a critical aspect, as it heavily affects the volumetric performance of the machine. To account for flow interactions between the two sides of the rotor, a non-symmetric modelling approach has been established to obtain a more comprehensive model of the single-screw machine. The resulting mechanistic model includes in-chamber governing equations, leakage flow models, heat transfer mechanisms, viscous and mechanical losses. Forces and moments balances are used to estimate the loads on the main shaft bearings as well as on the starwheel bearings. An 11 kWe single-screw expander (SSE) adapted from an air compressor operating with R245fa as working fluid is used to validate the model. A total of 60 steady-state points at four different rotational speeds have been collected to characterize the performance of the machine. The maximum electrical power output and overall isentropic efficiency measured were 7.31 kW and 51.91%, respectively.
NASA Astrophysics Data System (ADS)
Iwamura, Koji; Kuwahara, Shinya; Tanimizu, Yoshitaka; Sugimura, Nobuhiro
Recently, new distributed architectures of manufacturing systems have been proposed, aiming at realizing more flexible control structures of the manufacturing systems. Much research has been carried out on distributed architectures for planning and control of manufacturing systems. However, human operators have not yet been considered as autonomous components of distributed manufacturing systems. A real-time scheduling method is proposed in this research to select suitable combinations of human operators, resources and jobs for the manufacturing processes. The proposed scheduling method consists of the following three steps. In the first step, the human operators select their favorite manufacturing processes, which they will carry out in the next time period, based on their preferences. In the second step, the machine tools and the jobs select suitable combinations for the next machining processes. In the third step, the automated guided vehicles and the jobs select suitable combinations for the next transportation processes. The second and third steps are carried out by using the utility-value-based method and the dispatching-rule-based method proposed in previous research. Case studies have been carried out to verify the effectiveness of the proposed method.
Scheduling revisited workstations in integrated-circuit fabrication
NASA Technical Reports Server (NTRS)
Kline, Paul J.
1992-01-01
The cost of building new semiconductor wafer fabrication factories has grown rapidly, and a state-of-the-art fab may cost 250 million dollars or more. Obtaining an acceptable return on this investment requires high productivity from the fabrication facilities. This paper describes the Photo Dispatcher system which was developed to make machine-loading recommendations on a set of key fab machines. Dispatching policies that generally perform well in job shops (e.g., Shortest Remaining Processing Time) perform poorly for workstations such as photolithography which are visited several times by the same lot of silicon wafers. The Photo Dispatcher evaluates the history of workloads throughout the fab and identifies bottleneck areas. The scheduler then assigns priorities to lots depending on where they are headed after photolithography. These priorities are designed to avoid starving bottleneck workstations and to give preference to lots that are headed to areas where they can be processed with minimal waiting. Other factors considered by the scheduler to establish priorities are the nearness of a lot to the end of its process flow and the time that the lot has already been waiting in queue. Simulations that model the equipment and products in one of Texas Instruments' wafer fabs show the Photo Dispatcher can produce a 10 percent improvement in the time required to fabricate integrated circuits.
Deng, Qianwang; Gong, Guiliang; Gong, Xuran; Zhang, Like; Liu, Wei; Ren, Qinghua
2017-01-01
Flexible job-shop scheduling problem (FJSP) is an NP-hard problem that inherits the characteristics of the job-shop scheduling problem (JSP). This paper presents a bee evolutionary guiding nondominated sorting genetic algorithm II (BEG-NSGA-II) for multiobjective FJSP (MO-FJSP) with the objectives to minimize the maximal completion time, the workload of the most loaded machine, and the total workload of all machines. It adopts a two-stage optimization mechanism during the optimizing process. In the first stage, the NSGA-II algorithm with T iteration times is first used to obtain the initial population N, in which a bee evolutionary guiding scheme is presented to exploit the solution space extensively. In the second stage, the NSGA-II algorithm with GEN iteration times is used again to obtain the Pareto-optimal solutions. In order to enhance the searching ability and avoid premature convergence, an updating mechanism is employed in this stage. More specifically, its population consists of three parts, and each of them changes with the iteration times. Furthermore, numerical simulations are carried out based on published benchmark instances. Finally, the effectiveness of the proposed BEG-NSGA-II algorithm is shown by comparing the experimental results with those of some well-known existing algorithms.
Deng, Qianwang; Gong, Xuran; Zhang, Like; Liu, Wei; Ren, Qinghua
2017-01-01
Flexible job-shop scheduling problem (FJSP) is an NP-hard problem that inherits the characteristics of the job-shop scheduling problem (JSP). This paper presents a bee evolutionary guiding nondominated sorting genetic algorithm II (BEG-NSGA-II) for multiobjective FJSP (MO-FJSP) with the objectives to minimize the maximal completion time, the workload of the most loaded machine, and the total workload of all machines. It adopts a two-stage optimization mechanism during the optimizing process. In the first stage, the NSGA-II algorithm with T iteration times is first used to obtain the initial population N, in which a bee evolutionary guiding scheme is presented to exploit the solution space extensively. In the second stage, the NSGA-II algorithm with GEN iteration times is used again to obtain the Pareto-optimal solutions. In order to enhance the searching ability and avoid premature convergence, an updating mechanism is employed in this stage. More specifically, its population consists of three parts, and each of them changes with the iteration times. Furthermore, numerical simulations are carried out based on published benchmark instances. Finally, the effectiveness of the proposed BEG-NSGA-II algorithm is shown by comparing the experimental results with those of some well-known existing algorithms. PMID:28458687
Skipping Strategy (SS) for Initial Population of Job-Shop Scheduling Problem
NASA Astrophysics Data System (ADS)
Abdolrazzagh-Nezhad, M.; Nababan, E. B.; Sarim, H. M.
2018-03-01
Constructing the initial population for the job-shop scheduling problem (JSSP) is an essential step toward obtaining a near-optimal solution, and the techniques used to solve the JSSP are computationally demanding. A skipping strategy (SS) is employed to acquire the initial population after the sequence of jobs on machines and the sequence of operations (expressed as Plates-jobs and mPlates-jobs) are determined. The proposed technique is applied to benchmark datasets and the results are compared to those of other initialization techniques. It is shown that the initial population obtained from the SS approach can generate optimal solutions.
Some single-piston closed-cycle machines and Peter Tailer's thermal lag engine
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, C.D.
1993-01-01
Peter Tailer has devised, built, and operated a beautifully simple engine with a closed working gas cycle, external heating, and only a single piston. The aim of this paper is to cast some light on the possible modes of operation for his machine. The methods developed to analyze certain aspects of Stirling cycle engines, and especially the thermodynamic losses incurred in systems that are neither perfectly isothermal nor perfectly adiabatic, can be applied to Tailer's system. The results identify two idealized cycles for such machines; relate those cycles to a single-piston, ported-cylinder machine proposed earlier; and offer a possible explanation for the success of the thermal lag engine.
U. S. fusion programs: Struggling to stay in the game
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crawford, M.
Funding for the US fusion energy program has suffered and will probably continue to suffer major cuts. A committee hand-picked by Energy Secretary James Watkins urged the Department of Energy to mount an aggressive program to develop fusion power, but Congress cut funding from $323 million in 1990 to $275 million in 1991. This portends dire conditions for fusion research and development. Top priority goes to the tokamak projects and to keeping the next big machine, the Burning Plasma Experiment, scheduled to begin construction in 1993, on schedule. Secretary Watkins is said to want to keep the International Thermonuclear Experimental Reactor (ITER) on schedule as well. ITER would follow the Burning Plasma Experiment.
Workflow as a Service in the Cloud: Architecture and Scheduling Algorithms
Wang, Jianwu; Korambath, Prakashan; Altintas, Ilkay; Davis, Jim; Crawl, Daniel
2017-01-01
With more and more workflow systems adopting the cloud as their execution environment, it becomes increasingly challenging to efficiently manage various workflows, virtual machines (VMs) and workflow execution on VM instances. To make the system scalable and easy to extend, we design a Workflow as a Service (WFaaS) architecture with independent services. A core part of the architecture is how to efficiently respond to continuous workflow requests from users and schedule their executions in the cloud. Based on different targets, we propose four heuristic workflow scheduling algorithms for the WFaaS architecture, and analyze the differences and best usages of the algorithms in terms of performance, cost and the price/performance ratio via experimental studies. PMID:29399237
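The four heuristics themselves are not detailed in the abstract, so the following Python sketch is purely illustrative of the kind of greedy placement rule such a scheduler might use (all names and fields are assumptions): assign an incoming workflow to the cheapest running VM predicted to meet its deadline, otherwise provision the cheapest adequate VM type.

    def place_workflow(wf, vms, catalog, now):
        """wf: {'work': CPU-hours, 'deadline': hours from now}.
        vms: running VMs with 'speed', 'busy_until', 'price' ($/h).
        catalog: provisionable VM types with 'speed' and 'price'."""
        feasible = []
        for vm in vms:
            start = max(now, vm['busy_until'])
            finish = start + wf['work'] / vm['speed']
            if finish - now <= wf['deadline']:
                feasible.append(((finish - start) * vm['price'], vm))
        if feasible:
            return min(feasible, key=lambda c: c[0])[1]   # cheapest feasible VM
        # otherwise provision the cheapest type fast enough for the deadline
        ok = [t for t in catalog if wf['work'] / t['speed'] <= wf['deadline']]
        return min(ok, key=lambda t: t['price'] * wf['work'] / t['speed']) if ok else None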
NASA Astrophysics Data System (ADS)
Bürger, Adrian; Sawant, Parantapa; Bohlayer, Markus; Altmann-Dieses, Angelika; Braun, Marco; Diehl, Moritz
2017-10-01
Within this work, the benefits of using predictive control methods for the operation of Adsorption Cooling Machines (ACMs) are shown in a simulation study. Since the internal control decisions of series-manufactured ACMs often cannot be influenced, the work focuses on optimized scheduling of an ACM considering its internal functioning as well as forecasts for load and driving-energy occurrence. For illustration, an assumed solar thermal climate system is introduced and a system model suitable for use within gradient-based optimization methods is developed. The results of a system simulation using a conventional scheme for ACM scheduling are compared to the results of a predictive, optimization-based scheduling approach for the same exemplary scenario of load and driving-energy occurrence. The benefits of the latter approach are shown and future steps for applying these methods to system control are addressed.
36 CFR § 902.82 - Fee schedule.
Code of Federal Regulations, 2013 CFR
2013-07-01
... operating duplicating machinery. Not included in direct costs are overhead expenses such as costs of space... form of paper copy, microform, audio-visual materials, or machine-readable documentation (e.g... programs of scholarly research. (5) Non-commercial scientific institution means an institution that is not...
Integrated flexible manufacturing program for manufacturing automation and rapid prototyping
NASA Technical Reports Server (NTRS)
Brooks, S. L.; Brown, C. W.; King, M. S.; Simons, W. R.; Zimmerman, J. J.
1993-01-01
The Kansas City Division of Allied Signal Inc., as part of the Integrated Flexible Manufacturing Program (IFMP), is developing an integrated manufacturing environment. Several systems are being developed to produce standards and automation tools for specific activities within the manufacturing environment. The Advanced Manufacturing Development System (AMDS) is concentrating on information standards (STEP) and product data transfer; the Expert Cut Planner system (XCUT) is concentrating on machining operation process planning standards and automation capabilities; the Advanced Numerical Control system (ANC) is concentrating on NC data preparation standards and NC data generation tools; the Inspection Planning and Programming Expert system (IPPEX) is concentrating on inspection process planning, coordinate measuring machine (CMM) inspection standards and CMM part program generation tools; and the Intelligent Scheduling and Planning System (ISAPS) is concentrating on planning and scheduling tools for a flexible manufacturing system environment. All of these projects are working together to address information exchange, standardization, and information sharing to support rapid prototyping in a Flexible Manufacturing System (FMS) environment.
CP Violation and the Future of Flavor Physics
NASA Astrophysics Data System (ADS)
Kiesling, Christian
2009-12-01
With the nearing completion of the first-generation experiments at asymmetric e+e- colliders running at the Υ(4S) resonance ("B-Factories") a new era of high luminosity machines is at the horizon. We report here on the plans at KEK in Japan to upgrade the KEKB machine ("SuperKEKB") with the goal of achieving an instantaneous luminosity exceeding 8×10^35 cm^-2 s^-1, which is almost two orders of magnitude higher than KEKB. Together with the machine, the Belle detector will be upgraded as well ("Belle-II"), with significant improvements to increase its background tolerance as well as improving its physics performance. The new generation of experiments is scheduled to take first data in the year 2013.
Machine learning of network metrics in ATLAS Distributed Data Management
NASA Astrophysics Data System (ADS)
Lassnig, Mario; Toler, Wesley; Vamosi, Ralf; Bogado, Joaquin; ATLAS Collaboration
2017-10-01
The increasing volume of physics data poses a critical challenge to the ATLAS experiment. In anticipation of high luminosity physics, automation of everyday data management tasks has become necessary. Previously many of these tasks required human decision-making and operation. Recent advances in hardware and software have made it possible to entrust more complicated duties to automated systems using models trained by machine learning algorithms. In this contribution we show results from one of our ongoing automation efforts that focuses on network metrics. First, we describe our machine learning framework built atop the ATLAS Analytics Platform. This framework can automatically extract and aggregate data, train models with various machine learning algorithms, and eventually score the resulting models and parameters. Second, we use these models to forecast metrics relevant for network-aware job scheduling and data brokering. We show the characteristics of the data and evaluate the forecasting accuracy of our models.
30 CFR 18.94 - Application for field approval; contents of application.
Code of Federal Regulations, 2011 CFR
2011-07-01
... approval or certification has been issued under the provisions of Bureau of Mines Schedules 2D, 2E, 2F, or... under Bureau of Mines Schedule 2D, 2E, 2F, or 2G, photographs or a single layout drawing which clearly... certified under Bureau of Mines Schedule 2D, 2E, 2F, or 2G, a single layout drawing which clearly identifies...
30 CFR 18.94 - Application for field approval; contents of application.
Code of Federal Regulations, 2012 CFR
2012-07-01
... approval or certification has been issued under the provisions of Bureau of Mines Schedules 2D, 2E, 2F, or... under Bureau of Mines Schedule 2D, 2E, 2F, or 2G, photographs or a single layout drawing which clearly... certified under Bureau of Mines Schedule 2D, 2E, 2F, or 2G, a single layout drawing which clearly identifies...
30 CFR 18.94 - Application for field approval; contents of application.
Code of Federal Regulations, 2013 CFR
2013-07-01
... approval or certification has been issued under the provisions of Bureau of Mines Schedules 2D, 2E, 2F, or... under Bureau of Mines Schedule 2D, 2E, 2F, or 2G, photographs or a single layout drawing which clearly... certified under Bureau of Mines Schedule 2D, 2E, 2F, or 2G, a single layout drawing which clearly identifies...
30 CFR 18.94 - Application for field approval; contents of application.
Code of Federal Regulations, 2014 CFR
2014-07-01
... approval or certification has been issued under the provisions of Bureau of Mines Schedules 2D, 2E, 2F, or... under Bureau of Mines Schedule 2D, 2E, 2F, or 2G, photographs or a single layout drawing which clearly... certified under Bureau of Mines Schedule 2D, 2E, 2F, or 2G, a single layout drawing which clearly identifies...
30 CFR 18.94 - Application for field approval; contents of application.
Code of Federal Regulations, 2010 CFR
2010-07-01
... approval or certification has been issued under the provisions of Bureau of Mines Schedules 2D, 2E, 2F, or... under Bureau of Mines Schedule 2D, 2E, 2F, or 2G, photographs or a single layout drawing which clearly... certified under Bureau of Mines Schedule 2D, 2E, 2F, or 2G, a single layout drawing which clearly identifies...
OptiCentric lathe centering machine
NASA Astrophysics Data System (ADS)
Buß, C.; Heinisch, J.
2013-09-01
High precision optics depend on precisely aligned lenses. The shift and tilt of individual lenses as well as the air gap between elements require accuracies in the single micron regime. These accuracies are hard to meet with traditional assembly methods. Instead, lathe centering can be used to machine the mount with respect to the optical axis. Using a diamond turning process, all relevant errors of single mounted lenses can be corrected in one post-machining step. Building on the OptiCentric® and OptiSurf® measurement systems, Trioptics has developed their first lathe centering machines. The machine and specific design elements of the setup will be shown. For example, the machine can be used to turn optics for i-line steppers with highest precision.
ERIC Educational Resources Information Center
South Carolina State Dept. of Education, Columbia. Office of Vocational Education.
This module on the knife machine, one in a series dealing with industrial sewing machines, their attachments, and operation, covers one topic: performing special operations on the knife machine (a single needle or multi-needle machine which sews and cuts at the same time). These components are provided: an introduction, directions, an objective,…
Radiation tolerant combinational logic cell
NASA Technical Reports Server (NTRS)
Maki, Gary R. (Inventor); Whitaker, Sterling (Inventor); Gambles, Jody W. (Inventor)
2009-01-01
A system has a reduced sensitivity to Single Event Upset and/or Single Event Transient(s) compared to traditional logic devices. In a particular embodiment, the system includes an input, a logic block, a bias stage, a state machine, and an output. The logic block is coupled to the input. The logic block is for implementing a logic function, receiving a data set via the input, and generating a result f by applying the data set to the logic function. The bias stage is coupled to the logic block. The bias stage is for receiving the result from the logic block and presenting it to the state machine. The state machine is coupled to the bias stage. The state machine is for receiving, via the bias stage, the result generated by the logic block. The state machine is configured to retain a state value for the system. The state value is typically based on the result generated by the logic block. The output is coupled to the state machine. The output is for providing the value stored by the state machine. Some embodiments of the invention produce dual rail outputs Q and Q'. The logic block typically contains combinational logic and is similar, in size and transistor configuration, to a conventional CMOS combinational logic design. However, only a very small portion of the circuits of these embodiments, is sensitive to Single Event Upset and/or Single Event Transients.
Origin of acoustic emission produced during single point machining
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heiple, C. R.; Carpenter, S. H.; Armentrout, D. L.
1991-01-01
Acoustic emission was monitored during single point, continuous machining of 4340 steel and Ti-6Al-4V as a function of heat treatment. Acoustic emission produced during tensile and compressive deformation of these alloys has been previously characterized as a function of heat treatment. Heat treatments which increase the strength of 4340 steel increase the amount of acoustic emission produced during deformation, while heat treatments which increase the strength of Ti-6Al-4V decrease the amount of acoustic emission produced during deformation. If chip deformation were the primary source of acoustic emission during single point machining, then opposite trends in the level of acoustic emission produced during machining as a function of material strength would be expected for these two alloys. Trends in rms acoustic emission level with increasing strength were similar for both alloys, demonstrating that chip deformation is not a major source of acoustic emission in single point machining. Acoustic emission has also been monitored as a function of machining parameters on 6061-T6 aluminum, 304 stainless steel, 17-4PH stainless steel, lead, and teflon. The data suggest that sliding friction between the nose and/or flank of the tool and the newly machined surface is the primary source of acoustic emission. Changes in acoustic emission with tool wear were strongly material dependent. 21 refs., 19 figs., 4 tabs.
Onboard planning for geological investigations using a rover team
NASA Technical Reports Server (NTRS)
Estlin, Tara; Gaines, Daniel; Fisher, Forest; Castano, Rebecca
2004-01-01
This paper describes an integrated system for coordinating the behavior of multiple rovers with the overall goal of collecting planetary surface data. The Multi-Rover Integrated Science Understanding System (MISUS) combines techniques from planning and scheduling with machine learning to perform autonomous scientific exploration with cooperating rovers.
Some single-piston closed-cycle machines and Peter Tailer's thermal lag engine
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, C.D.
1993-06-01
Peter Tailer has devised, built, and operated a beautifully simple engine with a closed working gas cycle, external heating, and only a single piston. The aim of this paper is to cast some light on the possible modes of operation for his machine. The methods developed to analyze certain aspects of Stirling cycle engines, and especially the thermodynamic losses incurred in systems that are neither perfectly isothermal nor perfectly adiabatic, can be applied to Tailer's system. The results identify two idealized cycles for such machines; relate those cycles to a single-piston, ported-cylinder machine proposed earlier; and offer a possible explanation for the success of the thermal lag engine.
Lee, JuneHyuck; Noh, Sang Do; Kim, Hyun-Jung; Kang, Yong-Shin
2018-05-04
The prediction of internal defects of metal casting immediately after the casting process saves unnecessary time and money by reducing the amount of inputs into the next stage, such as the machining process, and enables flexible scheduling. Cyber-physical production systems (CPPS) perfectly fulfill the aforementioned requirements. This study deals with the implementation of CPPS in a real factory for metal-casting quality prediction and operation control. First, a CPPS architecture framework for quality prediction and operation control in metal-casting production was designed. The framework describes collaboration among internet of things (IoT), artificial intelligence, simulations, manufacturing execution systems, and advanced planning and scheduling systems. Subsequently, the implementation of the CPPS in actual plants is described. Temperature is a major factor that affects casting quality, and thus, temperature sensors and IoT communication devices were attached to casting machines. The well-known NoSQL database HBase and the high-speed processing/analysis tool Spark are used for the IoT repository and data pre-processing, respectively. Many machine learning algorithms, such as decision tree, random forest, artificial neural network, and support vector machine, were used for quality prediction and compared using the R software. Finally, the operation of the entire system is demonstrated through a CPPS dashboard. In an era in which most CPPS-related studies are conducted on high-level abstract models, this study describes more specific architectural frameworks, use cases, usable software, and analytical methodologies. In addition, this study verifies the usefulness of CPPS by estimating quantitative effects. This is expected to contribute to the proliferation of CPPS in the industry.
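As a hedged illustration of the quality-prediction step, here is a minimal scikit-learn sketch using a random forest on per-cycle temperature statistics; the feature set and the synthetic data are assumptions, not the study's data.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    rng = np.random.default_rng(0)
    # placeholder features: e.g. mean/max/min/std of melt temperature per cycle
    X = rng.normal(size=(1000, 4))
    # synthetic defect label loosely tied to two of the features
    y = (X[:, 1] - 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print(classification_report(y_te, clf.predict(X_te)))   # defective vs. sound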
Single-Pass Serial Scheduling Heuristic for Eglin AFB Range Services Division Schedule
2009-06-01
scheduling tool for this RCPSP. Research on a schedule improvement metaheuristic and coding of the complete algorithm is required before it can be...a schedule better by applying metaheuristic improvement algorithms to a feasible schedule after it is created. 2.5.1. Greedy Algorithm The...next available position, the algorithm will not utilize all the available range time and manpower. An improvement metaheuristic is required to
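The excerpt above refers to a greedy, single-pass serial heuristic for a resource-constrained project scheduling problem (RCPSP). As a hedged illustration, here is a generic Python sketch of a serial schedule generation scheme under an assumed data layout with a single renewable resource; it is not the report's algorithm.

    def serial_sgs(acts, capacity, horizon):
        """acts: precedence-feasible, priority-ordered list of dicts with
        'id', 'dur', 'demand' and 'preds'; one renewable resource of the
        given capacity. 'horizon' must upper-bound the makespan."""
        usage = [0] * horizon                 # resource use per time slot
        start, finish = {}, {}
        for a in acts:
            t = max((finish[p] for p in a['preds']), default=0)
            while any(usage[u] + a['demand'] > capacity
                      for u in range(t, t + a['dur'])):
                t += 1                        # slide right until resource-feasible
            start[a['id']], finish[a['id']] = t, t + a['dur']
            for u in range(t, t + a['dur']):
                usage[u] += a['demand']
        return start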
Relative Performance of Hardwood Sawing Machines
Philip H. Steele; Michael W. Wade; Steven H. Bullard; Philip A. Araman
1991-01-01
Only limited information has been available to hardwood sawmillers on the performance of their sawing machines. This study analyzes a large database of individual machine studies to provide detailed information on 6 machine types. These machine types were band headrig, circular headrig, band linebar resaw, vertical band splitter resaw, single arbor gang resaw and...
32 CFR 1662.6 - Fee schedule; waiver of fees.
Code of Federal Regulations, 2012 CFR
2012-07-01
... as costs of space, and heating or lighting the facility in which the records are stored. (2) The term... copies may take the form of paper copy, microform, audio-visual materials, or machine readable... institution of vocational education, which operates a program or programs of scholarly research. (7) The term...
32 CFR 1662.6 - Fee schedule; waiver of fees.
Code of Federal Regulations, 2014 CFR
2014-07-01
... as costs of space, and heating or lighting the facility in which the records are stored. (2) The term... copies may take the form of paper copy, microform, audio-visual materials, or machine readable... institution of vocational education, which operates a program or programs of scholarly research. (7) The term...
Estimates of the absolute error and a scheme for an approximate solution to scheduling problems
NASA Astrophysics Data System (ADS)
Lazarev, A. A.
2009-02-01
An approach is proposed for estimating absolute errors and finding approximate solutions to classical NP-hard scheduling problems, such as minimizing the maximum lateness on one or many machines and minimizing the makespan. The concept of a metric (distance) between instances of the problem is introduced. The idea behind the approach is, given a problem instance, to construct another instance, at the minimum distance from the initial one in the introduced metric, for which an optimal or approximate solution can be found. Instead of solving the original problem (instance), a set of approximating polynomially/pseudopolynomially solvable problems (instances) is considered, the instance at the minimum distance from the given one is chosen, and the resulting schedule is then applied to the original instance.
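As a hedged illustration of the kind of guarantee this approach yields (stated here only for single-machine maximum lateness with due-date perturbations; the paper's general construction is not reproduced):

    % Distance between two instances A and B that differ only in due dates:
    \[
      \rho(A,B) \;=\; \max_{j} \bigl| d_j^{A} - d_j^{B} \bigr| .
    \]
    % For any fixed schedule, each job's lateness changes by at most \rho(A,B),
    % so applying an optimal schedule \pi^{B} of a solvable instance B to the
    % original instance A carries the absolute-error bound
    \[
      L_{\max}(\pi^{B},A) - L_{\max}(\pi^{A},A) \;\le\; 2\,\rho(A,B),
    \]
    % where \pi^{A} denotes an optimal schedule for A.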
2008-06-01
capacity planning; • Electrical generation capacity planning; • Machine scheduling; • Freight scheduling; • Dairy farm expansion planning...Support Systems and Multi Criteria Decision Analysis Products A.2.11.2.2.1 ELECTRE IS ELECTRE IS is a generalization of ELECTRE I. It is a...criteria, ELECTRE IS supports the user in the process of selecting one alternative or a subset of alternatives. The method consists of two parts
NASA Astrophysics Data System (ADS)
Lary, D. J.
2013-12-01
A BigData case study is described in which multiple datasets from several satellites, high-resolution global meteorological data, social media and in-situ observations are combined using machine learning on a distributed cluster with an automated workflow. The global particulate dataset is relevant to global public health studies and would not have been possible to produce without the use of the multiple big datasets, in-situ data and machine learning. To greatly reduce the development time and enhance the functionality, a high-level language capable of parallel processing (Matlab) has been used. Key considerations for the system are high-speed access due to the large data volume, persistence of the large data volumes and a precise process-time scheduling capability.
FEM analysis of a single stator dual PM rotors axial synchronous machine
NASA Astrophysics Data System (ADS)
Tutelea, L. N.; Deaconu, S. I.; Popa, G. N.
2017-01-01
The current e-continuously variable transmission (e-CVT) solution for the parallel Hybrid Electric Vehicle (HEV) requires two electric machines, two inverters, and a planetary gear. A distinct electric generator and a propulsion electric motor, both with full power converters, are typical for a series HEV. In an effort to simplify the planetary-geared e-CVT for the parallel HEV or the series HEV, we propose to replace the two electric machines and their two power converters by a single, axial-air-gap electric machine with a central stator, fed from a single PWM converter with dual-frequency voltage output, and two independent PM rotors. The proposed topologies, the magnetomotive force analysis and a quasi-3D FEM analysis are the core of the paper.
NASA Astrophysics Data System (ADS)
Zheng, Ping; Sui, Yi; Tong, Chengde; Bai, Jingang; Yu, Bin; Lin, Fei
2014-05-01
This paper investigates a novel single-phase flux-switching permanent-magnet (PM) linear machine used for free-piston Stirling engines. The machine topology and operating principle are studied. A flux-switching PM linear machine is designed based on the quasi-sinusoidal speed characteristic of the resonant piston. Considering the performance of back electromotive force and thrust capability, some leading structural parameters, including the air gap length, the PM thickness, the ratio of the outer radius of mover to that of stator, the mover tooth width, the stator tooth width, etc., are optimized by finite element analysis. Compared with a conventional three-phase moving-magnet linear machine, the proposed single-phase flux-switching topology shows advantages in less PM use, a lighter mover, and higher volume power density.
NASA Astrophysics Data System (ADS)
Liu, Weibo; Jin, Yan; Price, Mark
2016-10-01
A new heuristic based on the Nawaz-Enscore-Ham algorithm is proposed in this article for solving a permutation flow-shop scheduling problem. A new priority rule is proposed by accounting for the average, mean absolute deviation, skewness and kurtosis, in order to fully describe the distribution style of processing times. A new tie-breaking rule is also introduced for achieving effective job insertion with the objective of minimizing both makespan and machine idle time. Statistical tests illustrate better solution quality of the proposed algorithm compared to existing benchmark heuristics.
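A minimal Python sketch of a moment-based priority rule in this spirit (the paper's exact weighting of the four statistics is not given here, so equal weights are an illustrative assumption):

    import math

    def moments(times):
        n = len(times)
        mean = sum(times) / n
        mad = sum(abs(t - mean) for t in times) / n
        sd = math.sqrt(sum((t - mean) ** 2 for t in times) / n) or 1.0
        skew = sum((t - mean) ** 3 for t in times) / (n * sd ** 3)
        kurt = sum((t - mean) ** 4 for t in times) / (n * sd ** 4)
        return mean, mad, skew, kurt

    def priority(times, w=(1.0, 1.0, 1.0, 1.0)):
        m, d, s, k = moments(times)           # equal weights: an assumption
        return w[0] * m + w[1] * d + w[2] * s + w[3] * k

    jobs = {0: [3, 7, 4], 1: [6, 2, 5], 2: [4, 4, 4]}  # job -> time per machine
    order = sorted(jobs, key=lambda j: priority(jobs[j]), reverse=True)
    print(order)   # NEH-style insertion then places jobs one by one in this order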
Development of a low energy micro sheet forming machine
NASA Astrophysics Data System (ADS)
Razali, A. R.; Ann, C. T.; Shariff, H. M.; Kasim, N. I.; Musa, M. A.; Ahmad, A. F.
2017-10-01
It is expected that with the miniaturization of the materials being processed, energy consumption is also 'miniaturized' proportionally. The focus of this study was to design a low-energy micro-sheet-forming machine for thin sheet metal applications and to fabricate a micro-sheet-forming machine powered by low direct current. A prototype of a low-energy system for a micro-sheet-forming machine, including mechanical and electronic elements, was developed. The machine was tested for its performance in terms of natural frequency, punching forces, punching speed and capability, and energy consumption (single-punch and frequency-time based). Based on the experiments, the machine can perform 600 strokes per minute and the process is unaffected by the machine's natural frequency. It was also found that less than a joule of energy was required for a single stroke of the punching/blanking process. Carbon steel shim up to 100 microns thick was successfully tested and punched. It is concluded that a low-power forming machine is feasible to develop and can replace high-powered machinery for forming micro-products/parts.
Hamm, P C; Bakker, E J; van den Berg, A P; van den Aardweg, G J; Visser, A G; Levendag, P C
2000-07-01
An experimental brachytherapy model has been developed to study acute and late normal tissue reactions as a tool to examine the effects of clinically relevant multifractionation schedules. Pig skin was used as a model since its morphology, structure, cell kinetics and radiation-induced responses are similar to human skin. Brachytherapy was performed using a microSelectron high dose rate (HDR) afterloading machine with a single stepping source and a custom-made template. In this study the acute epidermal reactions of erythema and moist desquamation and the late dermal reactions of dusky mauve erythema and necrosis were evaluated after single doses of irradiation over a follow-up period of 16 weeks. The major aims of this work were: (a) to compare the effects of iridium-192 (192Ir) irradiation with effects after X-irradiation; (b) to compare the skin reactions in Yorkshire and Large White pigs; and (c) to standardize the methodology. For 192Ir irradiation with 100% isodose at the skin surface, the 95% isodose was estimated at the basal membrane, while the 80% isodose covered the dermal fat layers. After HDR 192Ir irradiation of Yorkshire pig skin the ED50 values (95% isodose) for moderate/severe erythema and moist desquamation were 24.8 Gy and 31.9 Gy, respectively. The associated mean latent period (+/- SD) was 39 +/- 7 days for both skin reactions. Late skin responses of dusky mauve erythema and dermal necrosis were characterized by ED50 values (80% isodose) of 16.3 Gy and 19.5 Gy, with latent periods of 58 +/- 7 days and 76 +/- 12 days, respectively. After X-irradiation, the incidence of the various skin reactions and their latent periods were similar. Acute and late reactions were well separated in time. The occurrence of skin reactions and the incidence of effects were comparable in Yorkshire and Large White pigs for both X-irradiation and HDR 192Ir brachytherapy. This pig skin model is feasible for future studies on clinically relevant multifractionation schedules in a brachytherapy setting.
Diamond machine tool face lapping machine
Yetter, H.H.
1985-05-06
An apparatus for shaping, sharpening and polishing diamond-tipped single-point machine tools. Isolating the rotating grinding wheel from its driving apparatus using an air bearing, and moving the tool being shaped, polished or sharpened across the surface of the grinding wheel so that it does not remain at one radius for more than a single rotation of the wheel, has been found to readily yield machine tools of a quality previously obtainable only by the most tedious and costly processing procedures, and unattainable by simple lapping techniques.
Ali, Habiba I; Jarrar, Amjad H; Abo-El-Enen, Mostafa; Al Shamsi, Mariam; Al Ashqar, Huda
2015-05-28
Increasing the healthfulness of campus food environments is an important step in promoting healthful food choices among college students. This study explored university students' suggestions on promoting healthful food choices from campus vending machines. It also examined factors influencing students' food choices from vending machines. Peer-led semi-structured individual interviews were conducted with 43 undergraduate students (33 females and 10 males) recruited from students enrolled in an introductory nutrition course in a large national university in the United Arab Emirates. Interviews were audiotaped, transcribed, and coded to generate themes using N-Vivo software. Accessibility, peer influence, and busy schedules were the main factors influencing students' food choices from campus vending machines. Participants expressed the need to improve the nutritional quality of the food items sold in the campus vending machines. Recommendations for students' nutrition educational activities included placing nutrition tips on or beside the vending machines and using active learning methods, such as competitions on nutrition knowledge. The results of this study have useful applications in improving the campus food environment and nutrition education opportunities at the university to assist students in making healthful food choices.
Simulation-driven machine learning: Bearing fault classification
NASA Astrophysics Data System (ADS)
Sobie, Cameron; Freitas, Carina; Nicolai, Mike
2018-01-01
Increasing the accuracy of mechanical fault detection has the potential to improve system safety and economic performance by minimizing scheduled maintenance and the probability of unexpected system failure. Advances in computational performance have enabled the application of machine learning algorithms across numerous applications including condition monitoring and failure detection. Past applications of machine learning to physical failure have relied explicitly on historical data, which limits the feasibility of this approach to in-service components with extended service histories. Furthermore, recorded failure data is often only valid for the specific circumstances and components for which it was collected. This work directly addresses these challenges for roller bearings with race faults by generating training data using information gained from high resolution simulations of roller bearing dynamics, which is used to train machine learning algorithms that are then validated against four experimental datasets. Several different machine learning methodologies are compared starting from well-established statistical feature-based methods to convolutional neural networks, and a novel application of dynamic time warping (DTW) to bearing fault classification is proposed as a robust, parameter-free method for race fault detection.
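As an illustration of the parameter-free DTW idea, here is a minimal Python sketch of nearest-template fault classification; the signals and templates are placeholders, and real inputs would be vibration segments (simulated or measured).

    import numpy as np

    def dtw(a, b):
        """Dynamic time warping distance between two 1-D signals."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    def classify(signal, templates):
        """templates: (label, reference signal) pairs, e.g. simulated
        healthy / inner-race / outer-race fault signatures."""
        return min(templates, key=lambda t: dtw(signal, t[1]))[0]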
31 CFR 223.14 - Schedules of single risks.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false Schedules of single risks. 223.14 Section 223.14 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FISCAL SERVICE, DEPARTMENT OF THE TREASURY FINANCIAL MANAGEMENT SERVICE SURETY COMPANIES DOING BUSINESS WITH THE...
31 CFR 223.14 - Schedules of single risks.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 31 Money and Finance: Treasury 2 2013-07-01 2013-07-01 false Schedules of single risks. 223.14 Section 223.14 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FISCAL SERVICE, DEPARTMENT OF THE TREASURY FINANCIAL MANAGEMENT SERVICE SURETY COMPANIES DOING BUSINESS WITH THE...
31 CFR 223.14 - Schedules of single risks.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 31 Money and Finance: Treasury 2 2012-07-01 2012-07-01 false Schedules of single risks. 223.14 Section 223.14 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FISCAL SERVICE, DEPARTMENT OF THE TREASURY FINANCIAL MANAGEMENT SERVICE SURETY COMPANIES DOING BUSINESS WITH THE...
31 CFR 223.14 - Schedules of single risks.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 31 Money and Finance: Treasury 2 2011-07-01 2011-07-01 false Schedules of single risks. 223.14 Section 223.14 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FISCAL SERVICE, DEPARTMENT OF THE TREASURY FINANCIAL MANAGEMENT SERVICE SURETY COMPANIES DOING BUSINESS WITH THE...
31 CFR 223.14 - Schedules of single risks.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 31 Money and Finance: Treasury 2 2014-07-01 2014-07-01 false Schedules of single risks. 223.14 Section 223.14 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FISCAL SERVICE, DEPARTMENT OF THE TREASURY BUREAU OF THE FISCAL SERVICE SURETY COMPANIES DOING BUSINESS WITH THE...
Brown, Raymond J.
1977-01-01
The present invention relates to a tool setting device for use with numerically controlled machine tools, such as lathes and milling machines. A reference position of the machine tool relative to the workpiece along both the X and Y axes is utilized by the control circuit for driving the tool through its program. This reference position is determined for both axes by displacing a single linear variable displacement transducer (LVDT) with the machine tool through a T-shaped pivotal bar. The use of the T-shaped bar allows the cutting tool to be moved sequentially in the X or Y direction for indicating the actual position of the machine tool relative to the predetermined desired position in the numerical control circuit by using a single LVDT.
Nakai, Yasushi; Takiguchi, Tetsuya; Matsui, Gakuyo; Yamaoka, Noriko; Takada, Satoshi
2017-10-01
Abnormal prosody is often evident in the voice intonations of individuals with autism spectrum disorders. We compared a machine-learning-based voice analysis with human hearing judgments made by 10 speech therapists for classifying children with autism spectrum disorders (n = 30) and typical development (n = 51). Using stimuli limited to single-word utterances, machine-learning-based voice analysis was superior to speech therapist judgments. There was a significantly higher true-positive than false-negative rate for machine-learning-based voice analysis but not for speech therapists. Results are discussed in terms of some artificiality of clinician judgments based on single-word utterances, and the objectivity machine-learning-based voice analysis adds to judging abnormal prosody.
Safety and Immunogenicity of Sequential Rotavirus Vaccine Schedules
Libster, Romina; McNeal, Monica; Walter, Emmanuel B.; Shane, Andi L.; Winokur, Patricia; Cress, Gretchen; Berry, Andrea A.; Kotloff, Karen L.; Sarpong, Kwabena; Turley, Christine B.; Harrison, Christopher J.; Pahud, Barbara A.; Marbin, Jyothi; Dunn, John; El-Khorazaty, Jill; Barrett, Jill
2016-01-01
BACKGROUND AND OBJECTIVES: Although both licensed rotavirus vaccines are safe and effective, it is often not possible to complete the schedule by using the same vaccine formulation. The goal of this study was to investigate the noninferiority of the immune responses to the 2 licensed rotavirus vaccines when administered as a mixed schedule compared with administering a single vaccine formulation alone. METHODS: Randomized, multicenter, open-label study. Healthy infants (6–14 weeks of age) were randomized to receive rotavirus vaccines in 1 of 5 different schedules (2 using a single vaccine for all doses, and 3 using mixed schedules). The group receiving only the monovalent rotavirus vaccine received 2 doses of vaccine and the other 4 groups received 3 doses of vaccine. Serum for immunogenicity testing was obtained 1 month after the last vaccine dose and the proportion of seropositive children (rotavirus immunoglobulin A ≥20 U/mL) were compared in all the vaccine groups. RESULTS: Between March 2011 and September 2013, 1393 children were enrolled and randomized. Immune responses to all the sequential mixed vaccine schedules were shown to be noninferior when compared with the 2 single vaccine reference groups. The proportion of children seropositive to at least 1 vaccine antigen at 1 month after vaccination ranged from 77% to 96%, and was not significantly different among all the study groups. All schedules were well tolerated. CONCLUSIONS: Mixed schedules are safe and induced comparable immune responses when compared with the licensed rotavirus vaccines given alone. PMID:26823540
Sort-Mid tasks scheduling algorithm in grid computing.
Reda, Naglaa M; Tawfik, A; Marzok, Mohamed A; Khamis, Soheir M
2015-11-01
Scheduling tasks on heterogeneous resources distributed over a grid computing system is an NP-complete problem. Many researchers have developed scheduling algorithms that aim at optimality and show good performance in selecting resources for tasks; however, using the full power of the resources is still a challenge. In this paper, a new heuristic algorithm called Sort-Mid is proposed. It aims to maximize utilization and minimize the makespan. The strategy of the Sort-Mid algorithm is to find appropriate resources. The base step is to sort the list of completion times of each task and take the average. Then, the task with the maximum average is identified and allocated to the machine that has the minimum completion time. The allocated task is removed, and these steps are repeated until all tasks are allocated. Experimental tests show that the proposed algorithm outperforms most other algorithms in terms of resource utilization and makespan.
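The allocation loop described in this abstract is compact enough to sketch directly. The following Python fragment is a minimal, hypothetical reading of Sort-Mid under the usual ETC (expected time to compute) model; the matrix values and tie-breaking are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of the Sort-Mid heuristic as described in the abstract.
# Assumes an ETC (expected time to compute) matrix: etc[t][m] is the run
# time of task t on machine m. Values below are illustrative only.

def sort_mid(etc):
    n_tasks, n_machines = len(etc), len(etc[0])
    ready = [0.0] * n_machines          # machine ready times
    unassigned = set(range(n_tasks))
    schedule = {}
    while unassigned:
        # For each task, sort its completion times and take the average.
        averages = {}
        for t in unassigned:
            completion = sorted(ready[m] + etc[t][m] for m in range(n_machines))
            averages[t] = sum(completion) / n_machines
        # Pick the task with the maximum average ...
        t_star = max(averages, key=averages.get)
        # ... and allocate it to the machine with minimum completion time.
        m_star = min(range(n_machines), key=lambda m: ready[m] + etc[t_star][m])
        ready[m_star] += etc[t_star][m_star]
        schedule[t_star] = m_star
        unassigned.remove(t_star)
    return schedule, max(ready)         # assignment and makespan

etc = [[4.0, 9.0, 3.0], [7.0, 2.0, 8.0], [5.0, 5.0, 6.0], [1.0, 6.0, 4.0]]
assignment, makespan = sort_mid(etc)
print(assignment, makespan)
```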
Sort-Mid tasks scheduling algorithm in grid computing
Reda, Naglaa M.; Tawfik, A.; Marzok, Mohamed A.; Khamis, Soheir M.
2014-01-01
Scheduling tasks on heterogeneous resources distributed over a grid computing system is an NP-complete problem. Many researchers have developed scheduling algorithms that aim at optimality and show good performance in selecting resources for tasks; however, using the full power of the resources is still a challenge. In this paper, a new heuristic algorithm called Sort-Mid is proposed. It aims to maximize utilization and minimize the makespan. The strategy of the Sort-Mid algorithm is to find appropriate resources. The base step is to sort the list of completion times of each task and take the average. Then, the task with the maximum average is identified and allocated to the machine that has the minimum completion time. The allocated task is removed, and these steps are repeated until all tasks are allocated. Experimental tests show that the proposed algorithm outperforms most other algorithms in terms of resource utilization and makespan. PMID:26644937
Design and implementation of a UNIX based distributed computing system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Love, J.S.; Michael, M.W.
1994-12-31
We have designed, implemented, and are running a corporate-wide distributed processing batch queue on a large number of networked workstations using the UNIX® operating system. Atlas Wireline researchers and scientists have used the system for over a year. The large increase in available computer power has greatly reduced the time required for nuclear and electromagnetic tool modeling. Use of remote distributed computing has simultaneously reduced computation costs and increased usable computer time. The system integrates equipment from different manufacturers, using various CPU architectures, distinct operating system revisions, and even multiple processors per machine. Various differences between the machines have to be accounted for in the master scheduler. These differences include shells, command sets, swap spaces, memory sizes, CPU sizes, and OS revision levels. Remote processing across a network must be performed in a manner that is seamless from the users' perspective. The system currently uses IBM RISC System/6000®, SPARCstation™, HP9000s700, HP9000s800, and DEC Alpha AXP™ machines. Each CPU in the network has its own speed rating, allowed working hours, and workload parameters. The system is designed so that all of the computers in the network can be optimally scheduled without adversely impacting the primary users of the machines. The increase in the total usable computational capacity by means of distributed batch computing can change corporate computing strategy. The integration of disparate computer platforms eliminates the need to buy one type of computer for computations, another for graphics, and yet another for day-to-day operations. It might be possible, for example, to meet all research and engineering computing needs with existing networked computers.
Lee, JuneHyuck; Noh, Sang Do; Kim, Hyun-Jung; Kang, Yong-Shin
2018-01-01
The prediction of internal defects of metal casting immediately after the casting process saves unnecessary time and money by reducing the amount of inputs into the next stage, such as the machining process, and enables flexible scheduling. Cyber-physical production systems (CPPS) perfectly fulfill the aforementioned requirements. This study deals with the implementation of CPPS in a real factory to predict the quality of metal casting and operation control. First, a CPPS architecture framework for quality prediction and operation control in metal-casting production was designed. The framework describes collaboration among internet of things (IoT), artificial intelligence, simulations, manufacturing execution systems, and advanced planning and scheduling systems. Subsequently, the implementation of the CPPS in actual plants is described. Temperature is a major factor that affects casting quality, and thus, temperature sensors and IoT communication devices were attached to casting machines. The well-known NoSQL database HBase and the high-speed processing/analysis tool Spark are used for the IoT repository and for data pre-processing, respectively. Many machine learning algorithms such as decision tree, random forest, artificial neural network, and support vector machine were used for quality prediction and compared with R software. Finally, the operation of the entire system is demonstrated through a CPPS dashboard. In an era in which most CPPS-related studies are conducted on high-level abstract models, this study describes more specific architectural frameworks, use cases, usable software, and analytical methodologies. In addition, this study verifies the usefulness of CPPS by estimating quantitative effects. This is expected to contribute to the proliferation of CPPS in the industry. PMID:29734699
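As a rough illustration of the quality-prediction step above, the sketch below trains one of the algorithms the abstract names (a random forest) on mold-temperature features. The feature names and synthetic data are our assumptions for demonstration, not the plant's data pipeline.

```python
# Sketch: defect prediction from casting temperature features with one of
# the algorithms named in the abstract (random forest). Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
# Hypothetical features: peak temperature, mean temperature, cooling rate.
X = rng.normal(loc=[700.0, 650.0, 5.0], scale=[20.0, 15.0, 1.0], size=(1000, 3))
# Hypothetical rule: overly hot, slowly cooling castings tend to be defective.
y = ((X[:, 0] > 715) & (X[:, 2] < 4.5)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te)))
```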
29 CFR 1208.6 - Schedule of fees and methods of payment for services rendered.
Code of Federal Regulations, 2014 CFR
2014-07-01
... included in direct costs are overhead expenses such as costs of space and heating or lighting the facility... form of paper copy, microfilm, audiovisual materials, or machine readable documentation (e.g., magnetic... scholarly research. (7) Non-commercial scientific institution refers to an institution that is not operated...
29 CFR 1208.6 - Schedule of fees and methods of payment for services rendered.
Code of Federal Regulations, 2012 CFR
2012-07-01
... included in direct costs are overhead expenses such as costs of space and heating or lighting the facility... form of paper copy, microfilm, audiovisual materials, or machine readable documentation (e.g., magnetic... scholarly research. (7) Non-commercial scientific institution refers to an institution that is not operated...
29 CFR 1208.6 - Schedule of fees and methods of payment for services rendered.
Code of Federal Regulations, 2013 CFR
2013-07-01
... included in direct costs are overhead expenses such as costs of space and heating or lighting the facility... form of paper copy, microfilm, audiovisual materials, or machine readable documentation (e.g., magnetic... scholarly research. (7) Non-commercial scientific institution refers to an institution that is not operated...
Space station data system analysis/architecture study. Task 5: Program plan
NASA Technical Reports Server (NTRS)
1985-01-01
Cost estimates for both the on-board and ground segments of the Space Station Data System (SSDS) are presented along with summary program schedules. Advanced technology development recommendations are provided in the areas of distributed data base management, end-to-end protocols, command/resource management, and flight qualified artificial intelligence machines.
ERIC Educational Resources Information Center
Seth, Anupam
2009-01-01
Production planning and scheduling for printed circuit board assembly has so far defied standard operations research approaches due to the size and complexity of the underlying problems, resulting in unexploited automation flexibility. In this thesis, the increasingly popular collect-and-place machine configuration is studied and the assembly…
Huang, Song; Tian, Na; Wang, Yan; Ji, Zhicheng
2016-01-01
Taking resource allocation into account, the flexible job shop scheduling problem (FJSP) is a class of complex scheduling problems in manufacturing systems. In order to utilize machine resources rationally, multi-objective particle swarm optimization (MOPSO) integrated with variable neighborhood search is introduced to address the FJSP efficiently. Firstly, assignment rules (AL) and dispatching rules (DR) are provided to initialize the population, and special discrete operators are designed to produce new individuals, while the earliest completion machine (ECM) rule is adopted in the disturbance operator to escape local optima. Secondly, personal-best archives (cognitive memories) and a global-best archive (social memory), which are updated by a predefined non-dominated archive update strategy, are designed to preserve non-dominated individuals and to select the personal-best positions and the global-best position. Finally, three neighborhoods are provided to search the neighborhood of the global-best archive and enhance local search ability. The proposed algorithm is evaluated on Kacem instances and Brdata instances, and a comparison with other approaches shows its effectiveness for the FJSP.
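The non-dominated archive update strategy mentioned in this abstract hinges on a Pareto-dominance test. Here is a minimal sketch under our own assumptions (two minimization objectives, e.g., makespan and total machine workload; the tuples are invented):

```python
# Sketch of a non-dominated archive update for two minimization objectives
# (e.g., makespan and total workload). Objective tuples are illustrative.

def dominates(a, b):
    """True if solution a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Insert candidate unless dominated; drop members it dominates."""
    if any(dominates(member, candidate) for member in archive):
        return archive
    return [m for m in archive if not dominates(candidate, m)] + [candidate]

archive = []
for sol in [(120, 300), (110, 320), (110, 310), (130, 290), (115, 295)]:
    archive = update_archive(archive, sol)
print(sorted(archive))   # the surviving non-dominated front
```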
Scheduling double round-robin tournaments with divisional play using constraint programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlsson, Mats; Johansson, Mikael; Larson, Jeffrey
We study a tournament format that extends a traditional double round-robin format with divisional single round-robin tournaments. Elitserien, the top Swedish handball league, uses such a format for its league schedule. We present a constraint programming model that characterizes the general double round-robin plus divisional single round-robin format. This integrated model allows scheduling to be performed in a single step, as opposed to common multistep approaches that decompose scheduling into smaller problems and possibly miss optimal solutions. In addition to general constraints, we introduce Elitserien-specific requirements for its tournament. These general and league-specific constraints allow us to identify implicit and symmetry-breaking properties that reduce the time to solution from hours to seconds. A scalability study of the number of teams shows that our approach is reasonably fast for even larger league sizes. The experimental evaluation of the integrated approach takes considerably less computational effort to schedule Elitserien than does the previous decomposed approach.
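To make the constraint-programming formulation concrete, here is a small single round-robin core written for Google OR-Tools CP-SAT. The solver choice is ours for illustration; the paper's model, league constraints, and symmetry-breaking are far richer than this sketch.

```python
# Sketch: single round-robin core in CP-SAT (illustrative solver choice).
# n teams, n-1 rounds; meet[i,j,r] == 1 iff team i plays team j in round r.
from ortools.sat.python import cp_model

n = 6
rounds = n - 1
model = cp_model.CpModel()
meet = {(i, j, r): model.NewBoolVar(f"m_{i}_{j}_{r}")
        for i in range(n) for j in range(i + 1, n) for r in range(rounds)}

# Every pair of teams meets exactly once over the season.
for i in range(n):
    for j in range(i + 1, n):
        model.AddExactlyOne(meet[i, j, r] for r in range(rounds))

# Every team plays exactly one match per round.
for t in range(n):
    for r in range(rounds):
        model.AddExactlyOne(
            meet[min(t, o), max(t, o), r] for o in range(n) if o != t)

solver = cp_model.CpSolver()
assert solver.Solve(model) == cp_model.OPTIMAL   # feasible -> OPTIMAL
for r in range(rounds):
    games = [(i, j) for (i, j, rr) in meet if rr == r
             and solver.Value(meet[i, j, rr])]
    print(f"round {r}: {games}")
```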
Online stochastic optimization of radiotherapy patient scheduling.
Legrain, Antoine; Fortin, Marie-Andrée; Lahrichi, Nadia; Rousseau, Louis-Martin
2015-06-01
The effective management of a cancer treatment facility for radiation therapy depends mainly on optimizing the use of the linear accelerators. In this project, we schedule patients on these machines taking into account their priority for treatment, the maximum waiting time before the first treatment, and the treatment duration. We collaborate with the Centre Intégré de Cancérologie de Laval to determine the best scheduling policy. Furthermore, we integrate the uncertainty related to the arrival of patients at the center. We develop a hybrid method combining stochastic optimization and online optimization to better meet the needs of central planning. We use information on the future arrivals of patients to provide an accurate picture of the expected utilization of resources. Results based on real data show that our method outperforms the policies typically used in treatment centers.
48 CFR 252.211-7003 - Item unique identification and valuation.
Code of Federal Regulations, 2014 CFR
2014-10-01
... reader or interrogator, used to retrieve data encoded on machine-readable media. Concatenated unique item... identifier. Item means a single hardware article or a single unit formed by a grouping of subassemblies... manufactured under identical conditions. Machine-readable means an automatic identification technology media...
NASA Technical Reports Server (NTRS)
Shearrow, Charles A.
1999-01-01
One of the identified goals of EM3 is to implement virtual manufacturing by the end of the year 2000. To realize this goal of a true virtual manufacturing enterprise, the initial development of a machinability database and the supporting infrastructure must be completed. This will consist of containing the existing EM-NET problems and developing machine, tooling, and common materials databases. To integrate the virtual manufacturing enterprise with normal day-to-day operations, a parallel virtual manufacturing machinability database, virtual manufacturing database, virtual manufacturing paradigm, implementation/integration procedure, and testable verification models must be developed. Common and virtual machinability databases will include the four distinct areas of machine tools, available tooling, common machine tool loads, and a materials database. The machine tools database will include the machine envelope, special machine attachments, tooling capacity, location within NASA-JSC or with a contractor, and availability/scheduling. The tooling database will include available standard tooling, custom in-house tooling, tool properties, and availability. The common materials database will include materials thickness ranges, strengths, types, and their availability. The virtual manufacturing databases will consist of virtual machines and virtual tooling directly related to the common and machinability databases. The items to be completed are the design and construction of the machinability databases, the virtual manufacturing paradigm for NASA-JSC, an implementation timeline, a VNC model of one bridge mill, and troubleshooting of existing software and hardware problems with EN4NET. The final step of this virtual manufacturing project will be to integrate other production sites into the databases, bringing JSC's EM3 into a position of becoming a clearinghouse for NASA's digital manufacturing needs and creating a true virtual manufacturing enterprise.
NASA Astrophysics Data System (ADS)
Nondahl, T. A.; Richter, E.
1980-09-01
A design study of two types of single sided (with a passive rail) linear electric machine designs, namely homopolar linear synchronous machines (LSM's) and linear induction machines (LIM's), is described. It is assumed the machines provide tractive effort for several types of light rail vehicles and locomotives. These vehicles are wheel supported and require tractive powers ranging from 200 kW to 3735 kW and top speeds ranging from 112 km/hr to 400 km/hr. All designs are made according to specified magnetic and thermal criteria. The LSM advantages are a higher power factor, much greater restoring forces for track misalignments, and less track heating. The LIM advantages are no need to synchronize the excitation frequency precisely to vehicle speed, simpler machine construction, and a more easily anchored track structure. The relative weights of the two machine types vary with excitation frequency and speed; low frequencies and low speeds favor the LSM.
One for All: Maintaining a Single Schedule Database for Large Development Projects
NASA Technical Reports Server (NTRS)
Hilscher, R.; Howerton, G.
1999-01-01
Efficiently maintaining and controlling a single schedule database in an Integrated Product Team environment is a significant challenge. It's accomplished effectively with the right combination of tools, skills, strategy, creativity, and teamwork. We'll share our lessons learned maintaining a 20,000 plus task network on a 36 month project.
Simulation model of a single-stage lithium bromide-water absorption cooling unit
NASA Technical Reports Server (NTRS)
Miao, D.
1978-01-01
A computer model of a LiBr-H2O single-stage absorption machine was developed. The model, which utilizes a given set of design data such as water flow rates and the inlet or outlet temperatures of these flows, but does not require the interior characteristics of the machine (heat transfer rates and surface areas), can be used to predict or simulate off-design performance. Results from 130 off-design cases for a given commercial machine agree with the published data within 2 percent.
Dropout Prediction in E-Learning Courses through the Combination of Machine Learning Techniques
ERIC Educational Resources Information Center
Lykourentzou, Ioanna; Giannoukos, Ioannis; Nikolopoulos, Vassilis; Mpardis, George; Loumos, Vassili
2009-01-01
In this paper, a dropout prediction method for e-learning courses, based on three popular machine learning techniques and detailed student data, is proposed. The machine learning techniques used are feed-forward neural networks, support vector machines and probabilistic ensemble simplified fuzzy ARTMAP. Since a single technique may fail to…
Machined electrostatic sector for mass spectrometer
NASA Technical Reports Server (NTRS)
Sinha, Mahadeva P. (Inventor)
2001-01-01
An electrostatic sector device for a mass spectrometer is formed from a single piece of machinable ceramic. The machined ceramic is coated with a nickel coating, and a notch is etched in the nickel coating to form two separated portions. The sector can be covered by a cover formed from a separate piece of machined ceramic.
The Single Needle Lockstitch Machine. [Constructing Darts.] Module 3.
ERIC Educational Resources Information Center
South Carolina State Dept. of Education, Columbia. Office of Vocational Education.
This module on constructing darts, one in a series on the single needle lockstitch sewing machine for student self-study, contains two sections. Each section includes the following parts: an introduction, directions, an objective, learning activities, student information, student self-check, check-out activities, and an instructor's final…
The Single Needle Lockstitch Machine. [Setting Zippers.] Module 8.
ERIC Educational Resources Information Center
South Carolina State Dept. of Education, Columbia. Office of Vocational Education.
This module on setting zippers, one in a series on the single needle lockstitch sewing machine for student self-study, contains five sections. Each section includes the following parts: an introduction, directions, an objective, learning activities, student information, student self-check, check-out activities, and an instructor's final checklist.…
Introduction of Virtualization Technology to Multi-Process Model Checking
NASA Technical Reports Server (NTRS)
Leungwattanakit, Watcharin; Artho, Cyrille; Hagiya, Masami; Tanabe, Yoshinori; Yamamoto, Mitsuharu
2009-01-01
Model checkers find failures in software by exploring every possible execution schedule. Java PathFinder (JPF), a Java model checker, has been extended recently to cover networked applications by caching data transferred in a communication channel. A target process is executed by JPF, whereas its peer process runs on a regular virtual machine outside. However, non-deterministic target programs may produce different output data in each schedule, causing the cache to restart the peer process to handle the different set of data. Virtualization tools could help us restore previous states of peers, eliminating peer restart. This paper proposes the application of virtualization technology to networked model checking, concentrating on JPF.
U.S. announces one-year delay for visa waiver program change
NASA Astrophysics Data System (ADS)
The U.S. State Department has announced that it is delaying by one year a new rule affecting citizens from visa waiver program countries. The new rule, which was scheduled to go into effect on 1 October 2003, requires visitors from these countries to obtain non-immigrant visas to enter the United States if they do not have machine-readable passports. The announced delay means that this rule will now go into effect 26 October 2004 instead. The delay does not apply to five visa waiver countries—Andorra, Brunei, Liechtenstein, Luxembourg, and Slovenia—because most of the citizens of these nations already carry passports that are machine-readable.
Machine intelligence and autonomy for aerospace systems
NASA Technical Reports Server (NTRS)
Heer, Ewald (Editor); Lum, Henry (Editor)
1988-01-01
The present volume discusses progress toward intelligent robot systems in aerospace applications, NASA Space Program automation and robotics efforts, the supervisory control of telerobotics in space, machine intelligence and crew/vehicle interfaces, expert-system terms and building tools, and knowledge-acquisition for autonomous systems. Also discussed are methods for validation of knowledge-based systems, a design methodology for knowledge-based management systems, knowledge-based simulation for aerospace systems, knowledge-based diagnosis, planning and scheduling methods in AI, the treatment of uncertainty in AI, vision-sensing techniques in aerospace applications, image-understanding techniques, tactile sensing for robots, distributed sensor integration, and the control of articulated and deformable space structures.
Platform-Independence and Scheduling In a Multi-Threaded Real-Time Simulation
NASA Technical Reports Server (NTRS)
Sugden, Paul P.; Rau, Melissa A.; Kenney, P. Sean
2001-01-01
Aviation research often relies on real-time, pilot-in-the-loop flight simulation as a means to develop new flight software, flight hardware, or pilot procedures. Often these simulations become so complex that a single processor is incapable of performing the necessary computations within a fixed time-step. Threads are an elegant means to distribute the computational work-load when running on a symmetric multi-processor machine. However, programming with threads often requires operating system specific calls that reduce code portability and maintainability. While a multi-threaded simulation allows a significant increase in the simulation complexity, it also increases the workload of a simulation operator by requiring that the operator determine which models run on which thread. To address these concerns an object-oriented design was implemented in the NASA Langley Standard Real-Time Simulation in C++ (LaSRS++) application framework. The design provides a portable and maintainable means to use threads and also provides a mechanism to automatically load balance the simulation models.
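The load-balancing idea in this abstract reads naturally as a worker-pool pattern. Below is a toy Python sketch (LaSRS++ itself is C++; the model names, fake costs, and greedy balancer are our assumptions, not the framework's implementation):

```python
# Toy sketch of distributing simulation models across threads each frame.
# LaSRS++ is a C++ framework; this Python version only shows the pattern.
from concurrent.futures import ThreadPoolExecutor
import time

class Model:
    def __init__(self, name, cost_s):
        self.name, self.cost_s = name, cost_s   # cost_s: fake compute time
    def step(self, dt):
        time.sleep(self.cost_s)                 # stand-in for real dynamics

def balance(models, n_threads):
    """Greedy longest-processing-time assignment of models to threads."""
    bins = [[] for _ in range(n_threads)]
    loads = [0.0] * n_threads
    for m in sorted(models, key=lambda m: m.cost_s, reverse=True):
        i = loads.index(min(loads))
        bins[i].append(m)
        loads[i] += m.cost_s
    return bins

models = [Model("aero", 0.004), Model("engines", 0.003),
          Model("gear", 0.001), Model("avionics", 0.002)]
bins = balance(models, n_threads=2)
with ThreadPoolExecutor(max_workers=2) as pool:
    for frame in range(3):                      # three 20 ms frames
        t0 = time.perf_counter()
        list(pool.map(lambda ms: [m.step(0.02) for m in ms], bins))
        print(f"frame {frame}: {time.perf_counter() - t0:.4f} s")
```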
Large Scale Analysis of Geospatial Data with Dask and XArray
NASA Astrophysics Data System (ADS)
Zender, C. S.; Hamman, J.; Abernathey, R.; Evans, K. J.; Rocklin, M.; Zender, C. S.; Rocklin, M.
2017-12-01
The analysis of geospatial data with high-level languages has accelerated innovation and the impact of existing data resources. However, as datasets grow beyond single-machine memory, data structures within these high-level languages can become a bottleneck. New libraries like Dask and XArray resolve some of these scalability issues, providing interactive workflows that are familiar to high-level-language researchers while also scaling out to much larger datasets. This broadens researchers' access to larger datasets on high performance computers and, through interactive development, reduces time-to-insight when compared to traditional parallel programming techniques (MPI). This talk describes Dask, a distributed dynamic task scheduler; Dask.array, a multi-dimensional array that copies the popular NumPy interface; and XArray, a library that wraps NumPy/Dask.array with labeled and indexed axes, implementing the CF conventions. We discuss both the basic design of these libraries and how they change interactive analysis of geospatial data, and also recent benefits and challenges of distributed computing on clusters of machines.
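A brief taste of the interface parity the talk highlights. The NetCDF file name, variable name, and chunk sizes below are hypothetical placeholders:

```python
# Out-of-core reduction with Dask.array's NumPy-style interface.
import dask.array as da

x = da.random.random((100_000, 10_000), chunks=(5_000, 5_000))
print(float(x.mean().compute()))      # the scheduler runs chunked tasks

# The same idea through XArray's labeled axes; file name is hypothetical.
import xarray as xr

ds = xr.open_dataset("sst_monthly.nc", chunks={"time": 120})
climatology = ds["sst"].groupby("time.month").mean("time")
print(climatology.compute())
```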
DOPAMINE POSTSYNAPTIC RECEPTOR EFFECTS OF RESTRICTED SCHEDULES OF ELECTROCONVULSIVE SHOCK
Andrade, Chittaranjan; Gangadhar, B.N.; Meena, M.; Pradhan, N.
1990-01-01
SUMMARY: Little work is available on the acute and time-dependent dopaminergic effects of single electroconvulsive shock (ECS) and multiple ECS, despite the posited clinical utility of such schedules of electroconvulsive therapy (ECT) administration and the posited role of dopaminergic mechanisms in neuropsychiatric disorders. In this study, using the apomorphine-induced motility-alteration behavioural paradigm, single-session multiple ECS was found to produce no significant effect, while single ECS behaviourally downregulated dopamine postsynaptic receptor functioning one week after the ECS, an effect that was also seen (albeit to a lesser extent) a further week later. These findings indicate a possible application of restricted schedules of ECT to dopamine postsynaptic receptor supersensitivity syndromes. Lines for future research are suggested. PMID:21927479
Automation and robotics technology for intelligent mining systems
NASA Technical Reports Server (NTRS)
Welsh, Jeffrey H.
1989-01-01
The U.S. Bureau of Mines is approaching the problems of accidents and efficiency in the mining industry through the application of automation and robotics to mining systems. This technology can increase safety by removing workers from hazardous areas of the mines or from performing hazardous tasks. The short-term goal of the Automation and Robotics program is to develop technology that can be implemented in the form of an autonomous mining machine using current continuous mining machine equipment. In the longer term, the goal is to conduct research that will lead to new intelligent mining systems that capitalize on the capabilities of robotics. The Bureau of Mines Automation and Robotics program has been structured to produce the technology required for the short- and long-term goals. The short-term goal of application of automation and robotics to an existing mining machine, resulting in autonomous operation, is expected to be accomplished within five years. Key technology elements required for an autonomous continuous mining machine are well underway and include machine navigation systems, coal-rock interface detectors, machine condition monitoring, and intelligent computer systems. The Bureau of Mines program is described, including status of key technology elements for an autonomous continuous mining machine, the program schedule, and future work. Although the program is directed toward underground mining, much of the technology being developed may have applications for space systems or mining on the Moon or other planets.
77 FR 38777 - 36(b)(1) Arms Sales Notification
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-29
... Systems (SINCGARS), 200 M2 Chrysler Mount Machine Guns, 400 7.62MM M240 Machine Guns, 12,049,842... Exportable Single Channel Ground and Airborne Radio Systems (SINCGARS), 200 M2 Chrysler Mount Machine Guns, and 400 7.62MM M240 Machine Guns. The possible sale also includes 12,049,842 Ammunition Rounds...
ERIC Educational Resources Information Center
International Business Machines Corp., Milford, CT. Academic Information Systems.
This agenda lists activities scheduled for the second IBM (International Business Machines) Academic Information Systems University AEP (Advanced Education Projects) Conference, which was designed to afford the universities participating in the IBM-sponsored AEPs an opportunity to demonstrate their AEP experiments in educational computing. In…
Code of Federal Regulations, 2010 CFR
2010-07-01
... MANAGEMENT REGULATIONS SUPPLY AND PROCUREMENT 26-PROCUREMENT SOURCES AND PROGRAM 26.5-GSA Procurement... contracts will not adequately serve the end-use purpose. GSA will notify the requesting agency in writing of... in which the requisitioner is located. GSA will either arrange for procurement of the items or...
Design Tools for Evaluating Multiprocessor Programs
1976-07-01
...than large uniprocessing machines, and 2. economies of scale in manufacturing. Perhaps the most compelling reason (possibly a consequence of the...) [Browne 73, Lehman 66] 6. How can the system be scheduled...? What measures are interesting about the computation? Some may be: speed, redundancy, (in)efficiency, resource utilization, and economies of the components
Extended working hours: Impacts on workers
D. Mitchell; T. Gallagher
2010-01-01
Some logging business owners are trying to manage their equipment assets by increasing the scheduled machine hours. The intent is to maximize the total tons produced by a set of equipment. This practice is referred to as multi-shifting, double-shifting, or extended working hours. One area often overlooked is the impact that working non-traditional hours can have on...
Code of Federal Regulations, 2010 CFR
2010-10-01
... condition that cannot be repaired immediately shall be tagged and dated in a manner prescribed by the... missing horn for a period not exceeding seven calendar days; (3) A fire extinguisher readily available for use may temporarily replace a missing, defective or discharged fire extinguisher on a new on-track...
Code of Federal Regulations, 2014 CFR
2014-10-01
... condition that cannot be repaired immediately shall be tagged and dated in a manner prescribed by the... missing horn for a period not exceeding seven calendar days; (3) A fire extinguisher readily available for use may temporarily replace a missing, defective or discharged fire extinguisher on a new on-track...
Code of Federal Regulations, 2012 CFR
2012-10-01
... condition that cannot be repaired immediately shall be tagged and dated in a manner prescribed by the... missing horn for a period not exceeding seven calendar days; (3) A fire extinguisher readily available for use may temporarily replace a missing, defective or discharged fire extinguisher on a new on-track...
Code of Federal Regulations, 2011 CFR
2011-10-01
... condition that cannot be repaired immediately shall be tagged and dated in a manner prescribed by the... missing horn for a period not exceeding seven calendar days; (3) A fire extinguisher readily available for use may temporarily replace a missing, defective or discharged fire extinguisher on a new on-track...
Code of Federal Regulations, 2013 CFR
2013-10-01
... condition that cannot be repaired immediately shall be tagged and dated in a manner prescribed by the... missing horn for a period not exceeding seven calendar days; (3) A fire extinguisher readily available for use may temporarily replace a missing, defective or discharged fire extinguisher on a new on-track...
A dynamic scheduling algorithm for single-arm two-cluster tools with flexible processing times
NASA Astrophysics Data System (ADS)
Li, Xin; Fung, Richard Y. K.
2018-02-01
This article presents a dynamic algorithm for job scheduling in two-cluster tools producing multi-type wafers with flexible processing times. Flexible processing times mean that the actual times for processing wafers must lie within given time intervals. The objective of the work is to minimize the completion time of the newly inserted wafer. To deal with this issue, a two-cluster tool is decomposed into three reduced single-cluster tools (RCTs) in series, based on a decomposition approach proposed in this article. For each single-cluster tool, a dynamic scheduling algorithm based on temporal constraints is developed to schedule the newly inserted wafer. Three experiments have been carried out to test the proposed dynamic scheduling algorithm, comparing its results with those of the 'earliest starting time' (EST) heuristic adopted in the previous literature. The results show that the dynamic algorithm proposed in this article is effective and practical.
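As a toy illustration of scheduling against temporal constraints, the fragment below finds the earliest feasible start for a newly inserted wafer on one chamber, given existing busy intervals and a flexible processing window [p_min, p_max]. It is a drastic simplification of the article's RCT decomposition, with invented data:

```python
# Toy sketch: earliest feasible insertion of a new wafer on one chamber.
# busy: existing (start, end) occupations; the processing time is flexible
# within [p_min, p_max]. Data and the single-chamber view are invented;
# the article decomposes a two-cluster tool into three such RCTs.

def earliest_start(busy, release, p_min, p_max):
    slots = sorted(busy)
    t = release
    for s, e in slots:
        if t + p_min <= s:            # gap before this interval fits p_min
            return t, min(p_max, s - t)
        t = max(t, e)                 # otherwise jump past this interval
    return t, p_max                   # after the last busy interval

busy = [(2.0, 5.0), (6.0, 9.0)]
start, proc = earliest_start(busy, release=0.0, p_min=1.5, p_max=3.0)
print(start, proc)                    # -> 0.0, 2.0 (stretch capped by gap)
```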
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
This appendix summarizes building characteristics used to determine heating and cooling loads for each of the five building types in each of the four regions. For the selected five buildings, the following data are attached: new and existing construction characteristics; new and existing construction thermal resistance; floor plan and elevation; people load schedule; lighting load schedule; appliance load schedule; ventilation schedule; and hot water use schedule. For the five building types (single family, apartment buildings, commercial buildings, office buildings, and schools), data are compiled in 10 appendices. These are Building Characteristics; Alternate Energy Sources and Energy Conservation Techniques Description, Costs, and Fuel Price Scenarios; Life Cycle Cost Model; Simulation Models; Solar Heating/Cooling System; Condensed Weather; Single and Multi-Family Dwelling Characteristics and Energy Conservation Techniques; and Mixed Strategies for Energy Conservation and Alternative Energy Utilization in Buildings. An extensive bibliography is given in the final appendix. (MCW)
Cannizzaro, Gioacchino; Felice, Pietro; Loi, Ignazio; Viola, Paolo; Ferri, Vittorio; Leone, Michele; Lazzarini, Matteo; Trullenque-Eriksson, Anna; Esposito, Marco
To compare the outcome of immediately loaded single implants with a machined or a roughened surface. Fifty patients had two implant sites randomly allocated to receive flapless-placed single Syra implants (Sweden & Martina), one with a machined and one with a roughened surface (sand-blasted with zirconia powder and acid etched), according to a split-mouth design. To be loaded immediately, implants had to be inserted with a torque greater than 50 Ncm. Implants were restored with definitive crowns in direct occlusal contact within 48 h. Patients were followed for 6 months after loading. Outcome measures were prosthetic and implant failures and complications. Two machined implants and four roughened implants were not loaded immediately. Six months after loading no dropout occurred. One implant loaded late, which had a rough implant surface, failed 20 days after loading (P (McNemar test) = 0.625; difference in proportions = -0.04; 95% CI: -0.15 to 0.07). Three crowns had to be remade on machined implants and four on roughened implants (P (McNemar test) = 1.000; difference in proportions = -0.02; 95% CI: -0.12 to 0.08). Three machined and five roughened implants experienced complications (P (McNemar test) = 0.625; difference in proportions = -0.04; 95% CI: -0.15 to 0.07). There were no statistically significant differences between groups for crown and implant losses or complications. Up to 6 months after loading, both machined and roughened flapless-placed and immediately loaded single implants provided good and similar results; however, longer follow-ups are needed to evaluate the long-term prognosis of implants with different surfaces.
ERIC Educational Resources Information Center
Ulke-Kurkcuoglu, Burcu; Bozkurt, Funda; Cuhadar, Selmin
2015-01-01
This study aims to investigate the effectiveness of the instruction process provided through computer-assisted activity schedules in the instruction of on-schedule and role-play skills to children with autism spectrum disorder. Herein, a multiple probe design with probe conditions across participants among single subject designs was used. Four…
NASA Astrophysics Data System (ADS)
Chang, Yung-Chia; Li, Vincent C.; Chiang, Chia-Ju
2014-04-01
Make-to-order or direct-order business models that require close interaction between production and distribution activities have been adopted by many enterprises in order to be competitive in demanding markets. This article considers an integrated production and distribution scheduling problem in which jobs are first processed by one of the unrelated parallel machines and then distributed to corresponding customers by capacitated vehicles without intermediate inventory. The objective is to find a joint production and distribution schedule so that the weighted sum of total weighted job delivery time and the total distribution cost is minimized. This article presents a mathematical model for describing the problem and designs an algorithm using ant colony optimization. Computational experiments illustrate that the algorithm developed is capable of generating near-optimal solutions. The computational results also demonstrate the value of integrating production and distribution in the model for the studied problem.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deyoung, Anemarie; Smith, John R.
2012-05-03
A moratorium was placed on U.S. underground nuclear testing in 1992. In response, the Stockpile Stewardship Program was created to maintain readiness of the existing nuclear inventory through several efforts such as computer modeling, material analysis, and subcritical nuclear experiments (SCEs). As in the underground test era, the Nevada National Security Site (NNSS), formerly the Nevada Test Site, provides a safe and secure environment for SCEs by the nature of its isolated and secure facilities. A major tool for SCE diagnosis installed in the 05 drift laboratory is a high energy x-ray source used for time resolved imaging. This tool consists of two identical sources (Cygnus 1 and Cygnus 2) and is called the Cygnus Dual Beam Radiographic Facility (Figs. 2-6). Each Cygnus machine has 5 major elements: Marx Generator, Pulse Forming Line (PFL), Coaxial Transmission Line (CTL), 3-cell Inductive Voltage Adder (IVA), and Rod Pinch Diode. Each machine is independently triggered and may be fired in separate tests (staggered mode), or in a single test where there is submicrosecond separation between the pulses (dual mode). Cygnus must operate as a single-shot machine since on each pulse the diode electrodes are destroyed. The diode is vented to atmosphere, cleaned, and new electrodes are inserted for each shot. There are normally two shots per day on each machine. Since its installation in 2003, Cygnus has participated in: 4 Subcritical Experiments (Armando, Bacchus, Barolo A, and Barolo B), a 12-shot plutonium physics series (Thermos), and 2 plutonium step wedge calibration series (2005, 2011), resulting in well over 1000 shots. Currently the Facility is in preparation for 2 SCEs scheduled for this calendar year: Castor and Pollux. Cygnus has performed well during 8 years of operations at NNSS. Many improvements in operations and performance have been implemented during this time. Throughout its service at U1a, major maintenance and replacement of many hardware items were delayed due to programmatic requirements. It is anticipated that Cygnus will be in service at U1a for another 5 years. With this assumption, it was realized that significant resources and effort should be allotted to bring the hardware back to its original condition, or even to improve elements when appropriate. The Cygnus Refurbishment and Enhancement Project started in April 2011 with the intent to encompass a major overhaul of Cygnus.
Monitoring Hitting Load in Tennis Using Inertial Sensors and Machine Learning.
Whiteside, David; Cant, Olivia; Connolly, Molly; Reid, Machar
2017-10-01
Quantifying external workload is fundamental to training prescription in sport. In tennis, global positioning data are imprecise and fail to capture hitting loads. The current gold standard (manual notation) is time intensive and often not possible given players' heavy travel schedules. To develop an automated stroke-classification system to help quantify hitting load in tennis. Nineteen athletes wore an inertial measurement unit (IMU) on their wrist during 66 video-recorded training sessions. Video footage was manually notated such that known shot type (serve, rally forehand, slice forehand, forehand volley, rally backhand, slice backhand, backhand volley, smash, or false positive) was associated with the corresponding IMU data for 28,582 shots. Six types of machine-learning models were then constructed to classify true shot type from the IMU signals. Across 10-fold cross-validation, a cubic-kernel support vector machine classified binned shots (overhead, forehand, or backhand) with an accuracy of 97.4%. A second cubic-kernel support vector machine achieved 93.2% accuracy when classifying all 9 shot types. With a view to monitoring external load, the combination of miniature inertial sensors and machine learning offers a practical and automated method of quantifying shot counts and discriminating shot types in elite tennis players.
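A minimal sketch of the classifier at the heart of this system: a cubic-kernel SVM evaluated with 10-fold cross-validation. The features below are a synthetic stand-in, since the study's IMU windows and labels are not public:

```python
# Sketch: cubic-kernel SVM with 10-fold cross-validation, as in the study.
# Features are a synthetic stand-in for windowed IMU statistics.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 12))                 # 12 fake per-shot features
y = rng.integers(0, 3, size=600)               # overhead/forehand/backhand

clf = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=3))
scores = cross_val_score(clf, X, y, cv=10)
print(scores.mean())                           # random labels -> near chance
```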
Testing of Anesthesia Machines and Defibrillators in Healthcare Institutions.
Gurbeta, Lejla; Dzemic, Zijad; Bego, Tamer; Sejdic, Ervin; Badnjevic, Almir
2017-09-01
To improve the quality of patient treatment by improving the functionality of medical devices in healthcare institutions. To present the results of the safety and performance inspection of patient-relevant output parameters of anesthesia machines and defibrillators defined by legal metrology. This study covered 130 anesthesia machines and 161 defibrillators used in public and private healthcare institutions over a period of two years. Testing procedures were carried out according to international standards and legal metrology legislative procedures in Bosnia and Herzegovina. The results show that the performance of 13.84% of tested anesthesia machines and 14.91% of defibrillators is not in accordance with requirements; such a device should either have its results verified, be removed from use, or be scheduled for corrective maintenance. The research emphasizes the importance of independent safety and performance inspections and gives recommendations for the frequency of inspection based on measurements. The results offer implications for the adequacy of preventive and corrective maintenance performed in healthcare institutions. Based on the collected data, the first digital electronic database of anesthesia machines and defibrillators used in healthcare institutions in Bosnia and Herzegovina was created. This database is a useful tool for tracking each device's performance over time.
A Wavelet Support Vector Machine Combination Model for Singapore Tourist Arrival to Malaysia
NASA Astrophysics Data System (ADS)
Rafidah, A.; Shabri, Ani; Nurulhuda, A.; Suhaila, Y.
2017-08-01
In this study, a wavelet support vector machine (WSVM) model is proposed and applied to predict the monthly time series of Singapore tourist arrivals. The WSVM model is a combination of wavelet analysis and the support vector machine (SVM). The study has two parts: the first compares kernel functions, and the second compares the developed model with the single SVM model. The results showed that the linear kernel performed better than the RBF kernel, and that the WSVM outperformed the single SVM model in forecasting monthly Singapore tourist arrivals to Malaysia.
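One common way to build such a wavelet-SVM hybrid is sketched below under our own assumptions (Daubechies-4 wavelet, two-level decomposition, 12 lags, synthetic seasonal series); the paper's exact decomposition depth and kernel settings may differ:

```python
# Sketch of a wavelet + SVM forecasting hybrid: decompose the series,
# then regress next month's value on lagged wavelet-smoothed values.
# Wavelet choice (db4), level, and lag count are our assumptions.
import numpy as np
import pywt
from sklearn.svm import SVR

rng = np.random.default_rng(2)
t = np.arange(240)
series = 50 + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, t.size)

# Multilevel decomposition, then reconstruction of the smooth part.
coeffs = pywt.wavedec(series, "db4", level=2)
coeffs[-1][:] = 0                      # drop the finest detail as noise
smooth = pywt.waverec(coeffs, "db4")[: series.size]

lags = 12
X = np.column_stack([smooth[i : i + series.size - lags] for i in range(lags)])
y = series[lags:]
model = SVR(kernel="linear").fit(X[:-12], y[:-12])   # linear, as the paper favors
print(model.predict(X[-12:]))          # one-year-ahead style check
```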
Investigation of automated task learning, decomposition and scheduling
NASA Technical Reports Server (NTRS)
Livingston, David L.; Serpen, Gursel; Masti, Chandrashekar L.
1990-01-01
The details and results of research conducted in the application of neural networks to task planning and decomposition are presented. Task planning and decomposition are operations that humans perform in a reasonably efficient manner. Without the use of good heuristics and usually much human interaction, automatic planners and decomposers generally do not perform well due to the intractable nature of the problems under consideration. The human-like performance of neural networks has shown promise for generating acceptable solutions to intractable problems such as planning and decomposition. This was the primary reasoning behind attempting the study. The basis for the work is the use of state machines to model tasks. State machine models provide a useful means for examining the structure of tasks since many formal techniques have been developed for their analysis and synthesis. It is the approach to integrate the strong algebraic foundations of state machines with the heretofore trial-and-error approach to neural network synthesis.
The Impact of Single-Gender Scheduling on Students in a Title I School
ERIC Educational Resources Information Center
Moss, Janet L.
2011-01-01
This dissertation was designed to examine the impact that single-gender scheduling would have on students who attend a struggling Title I middle school. The importance of the middle level cannot be denied. Strong research points to this time in a student's life as the pivotal crux on which success and failure are balanced. Middle level educators…
Short-term hydro generation and interchange contract scheduling for Swiss Rail
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christoforidis, M.; Awobamise, B.; Tong, S.
This paper describes the Short-Term Resource Scheduling (STRS) function that has been developed by Siemens-Empros as part of the new SBB/Direktion Kraftwerk (Swiss Rail) Energy Management System. Optimal scheduling of the single-phase hydro plants, the single-phase and three-phase energy accounts, and the purchase and sale of three-phase energy, subject to a multitude of physical and contractual constraints (including spinning and regulating reserve requirements), is the main objective of the STRS function. The operations planning horizon of STRS is one day to one week using an hourly time increment.
Centralized Routing and Scheduling Using Multi-Channel System Single Transceiver in 802.16d
NASA Astrophysics Data System (ADS)
Al-Hemyari, A.; Noordin, N. K.; Ng, Chee Kyun; Ismail, A.; Khatun, S.
This paper proposes a cross-layer optimized strategy that reduces the effect of interference from neighboring nodes within a mesh network. The cross-layer design relies on the routing information in the network layer and the scheduling table in the medium access control (MAC) layer. A proposed routing algorithm in the network layer is exploited to find the best route for all subscriber stations (SSs). Also, a proposed centralized scheduling algorithm in the MAC layer is exploited to assign a time slot to each possible node transmission. The cross-layer optimized strategy uses multi-channel single-transceiver and single-channel single-transceiver systems for WiMAX mesh networks (WMNs). Each node in a WMN has a transceiver that can be tuned to any available channel to eliminate secondary interference. Among the parameters considered in the performance analysis are interference from neighboring nodes, hop count to the base station (BS), number of children per node, slot reuse, load balancing, quality of service (QoS), and node identifier (ID). Results show that the proposed algorithms significantly improve system performance in terms of length of scheduling, channel utilization ratio (CUR), system throughput, and average end-to-end transmission delay.
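A compact way to picture the centralized scheduling step: greedily give each link the earliest time slot not used by an interfering neighbor, a graph-coloring view. The topology and interference sets below are invented, and the paper's algorithm additionally weighs hop count, QoS, load, and node ID:

```python
# Toy sketch of centralized TDMA slot assignment as greedy graph coloring.
# Links share a slot only if they do not interfere. Topology is invented;
# the paper's scheduler also weighs hop count, QoS, load, and node ID.

def assign_slots(links, interferes):
    slot_of = {}
    for link in links:                       # e.g., ordered by hop count
        used = {slot_of[o] for o in interferes[link] if o in slot_of}
        slot = 0
        while slot in used:                  # earliest non-conflicting slot
            slot += 1
        slot_of[link] = slot
    return slot_of

links = ["SS1->BS", "SS2->SS1", "SS3->SS1", "SS4->SS2"]
interferes = {
    "SS1->BS": ["SS2->SS1", "SS3->SS1"],
    "SS2->SS1": ["SS1->BS", "SS3->SS1", "SS4->SS2"],
    "SS3->SS1": ["SS1->BS", "SS2->SS1"],
    "SS4->SS2": ["SS2->SS1"],
}
slots = assign_slots(links, interferes)
print(slots)        # SS1->BS:0, SS2->SS1:1, SS3->SS1:2, SS4->SS2:0
```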
Currency crisis indication by using ensembles of support vector machine classifiers
NASA Astrophysics Data System (ADS)
Ramli, Nor Azuana; Ismail, Mohd Tahir; Wooi, Hooy Chee
2014-07-01
Many methods have been tried in the analysis of currency crises; however, not all of them provide accurate indications. This paper introduces an ensemble of classifiers based on the Support Vector Machine, which has not previously been applied in currency crisis analysis, with the aim of increasing indication accuracy. The proposed ensemble classifiers' performance is measured using percentage accuracy, root mean squared error (RMSE), area under the Receiver Operating Characteristic (ROC) curve, and Type II error. The performance of the ensemble of Support Vector Machine classifiers is compared with that of a single Support Vector Machine classifier; both are tested on a data set covering 27 countries with 12 macroeconomic indicators for each country. From our analyses, the results show that the ensemble of Support Vector Machine classifiers outperforms the single Support Vector Machine classifier on the problem of indicating a currency crisis across a range of standard measures of classifier performance.
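A bare-bones bagged-SVM ensemble in the spirit of this abstract, with majority voting and ROC AUC on synthetic data. Only the shape (12 indicators) mimics the study; the sampling scheme, kernel, and crisis rule are our assumptions:

```python
# Sketch: bootstrap ensemble of SVM classifiers with majority voting,
# scored by accuracy and ROC AUC. Data mimic shape only (12 indicators).
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 12))
y = (X[:, :3].sum(axis=1) + rng.normal(0, 0.5, 400) > 0).astype(int)  # crisis flag

X_tr, y_tr, X_te, y_te = X[:300], y[:300], X[300:], y[300:]

members = []
for _ in range(15):                        # 15 bootstrap replicates
    idx = rng.integers(0, len(X_tr), len(X_tr))
    members.append(SVC(kernel="rbf", gamma="scale").fit(X_tr[idx], y_tr[idx]))

votes = np.mean([m.predict(X_te) for m in members], axis=0)
y_hat = (votes >= 0.5).astype(int)         # majority vote
print("accuracy:", accuracy_score(y_te, y_hat))
print("AUC:", roc_auc_score(y_te, votes))  # vote share as a crude score
```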
Plan for conducting an international machine tool task force
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sutton, G.P.; McClure, E.R.; Schuman, J.F.
1978-08-28
The basic objectives of the Machine Tool Task Force (MTTF) are to characterize and summarize the state of the art of cutting machine tool technology and to identify promising areas of future R and D. These goals will be accomplished with a series of multidisciplinary teams of prominent experts and individuals experienced in the specialized technologies of machine tools or in the management of machine tool operations. Experts will be drawn from all areas of the machine tool community: machine tool users or buyer organizations, builders, and R and D establishments including universities and government laboratories, both domestic and foreign. A plan for accomplishing this task is presented. The area of machine tool technology has been divided into about two dozen technology subjects on which teams of one or more experts will work. These teams are, in turn, organized into four principal working groups dealing, respectively, with machine tool accuracy, mechanics, control, and management systems/utilization. Details are presented on specific subjects to be covered, the organization of the Task Force and its four working groups, and the basic approach to determining the state of the art of technology and the future directions of this technology. The planned review procedure, the potential benefits, our management approach, and the schedule, as well as the key participating personnel and their background, are discussed. The initial meeting of MTTF members will be held at a plenary session on October 16 and 17, 1978, in Scottsdale, AZ. The MTTF study will culminate in a conference on September 1, 1980, in Chicago, IL, immediately preceding the 1980 International Machine Tool Show. At this time, our results will be released to the public; a series of reports will be published in late 1980.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fukami, Tadashi; Imamura, Michinori; Kaburaki, Yuichi
1995-12-31
A new single-phase capacitor self-excited induction generator with a self-regulating feature is presented. The new generator consists of a squirrel-cage three-phase induction machine and three capacitors connected in series and parallel with a single-phase load. The voltage regulation of this generator is very small due to the effect of the three capacitors. Moreover, since a Y-connected stator winding is employed, the waveform of the output voltage becomes sinusoidal. In this paper the system configuration and the operating principle of the new generator are explained, and the basic characteristics are also investigated by means of a simple analysis and experiments with a laboratory machine.
Single instruction computer architecture and its application in image processing
NASA Astrophysics Data System (ADS)
Laplante, Phillip A.
1992-03-01
A single processing computer system using only half-adder circuits is described. In addition, it is shown that only a single hard-wired instruction is needed in the control unit to obtain a complete instruction set for this general purpose computer. Such a system has several advantages. First, it is intrinsically a RISC machine—in fact the 'ultimate RISC' machine. Second, because only a single type of logic element is employed, the entire computer system can be easily realized on a single, highly integrated chip. Finally, due to the homogeneous nature of the computer's logic elements, the computer has possible implementations as an optical or chemical machine. This in turn suggests possible paradigms for neural computing and artificial intelligence. After showing how we can implement a full-adder, min, max and other operations using the half-adder, we use an array of such full-adders to implement the dilation operation for two black and white images. Next we implement the erosion operation of two black and white images using a relative complement function and the properties of erosion and dilation. This approach was inspired by a paper by van der Poel, in which a single instruction is used to furnish a complete set of general purpose instructions, and by Böhm and Jacopini, where it is shown that any problem can be solved using a Turing machine with one entry and one exit.
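The bootstrapping from a single gate type is easy to reproduce: a half-adder yields XOR and AND directly, two half-adders give OR and a full adder, and OR over shifted images gives binary dilation. A rough Python sketch assuming 0/1 pixels; the OR-from-half-adders construction is one way to realize the paper's claim, not its exact circuit:

```python
# Building up from half-adders alone, as the paper describes: XOR and AND
# come directly, OR and a full adder follow, and dilation is OR of shifts.

def half_adder(a, b):
    return a ^ b, a & b                  # (sum, carry)

def bit_or(a, b):
    s, c = half_adder(a, b)
    s2, _ = half_adder(s, c)             # XOR(a^b, a&b) == a | b
    return s2

def full_adder(a, b, cin):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, cin)
    return s2, bit_or(c1, c2)            # sum, carry-out

def dilate(img, se):
    """Binary dilation: OR of the image shifted by each structuring offset."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for dy, dx in se:
        for y in range(h):
            for x in range(w):
                yy, xx = y - dy, x - dx
                if 0 <= yy < h and 0 <= xx < w:
                    out[y][x] = bit_or(out[y][x], img[yy][xx])
    return out

img = [[0, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 0]]
se = [(0, 0), (0, 1), (1, 0)]            # small structuring element
for row in dilate(img, se):
    print(row)
```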
Devi, D Chitra; Uthariaraj, V Rhymend
2016-01-01
Cloud computing uses the concepts of scheduling and load balancing to migrate tasks to underutilized VMs for effectively sharing the resources. The scheduling of nonpreemptive tasks in the cloud computing environment is an irrecoverable restraint, and hence tasks have to be assigned to the most appropriate VMs at the initial placement itself. In practice, the arriving jobs consist of multiple interdependent tasks, which may execute the independent tasks on multiple VMs or on the multiple cores of the same VM. Also, the jobs arrive during the run time of the server at varying random intervals under various load conditions. The participating heterogeneous resources are managed by allocating the tasks to appropriate resources by static or dynamic scheduling to make cloud computing more efficient and thus improve user satisfaction. The objective of this work is to introduce and evaluate the proposed scheduling and load balancing algorithm by considering the capabilities of each virtual machine (VM), the task length of each requested job, and the interdependency of multiple tasks. The performance of the proposed algorithm is studied by comparing it with the existing methods.
Devi, D. Chitra; Uthariaraj, V. Rhymend
2016-01-01
Cloud computing uses the concepts of scheduling and load balancing to migrate tasks to underutilized VMs for effectively sharing the resources. The scheduling of nonpreemptive tasks in the cloud computing environment is an irrecoverable restraint, and hence tasks have to be assigned to the most appropriate VMs at the initial placement itself. In practice, the arriving jobs consist of multiple interdependent tasks, which may execute the independent tasks on multiple VMs or on the multiple cores of the same VM. Also, the jobs arrive during the run time of the server at varying random intervals under various load conditions. The participating heterogeneous resources are managed by allocating the tasks to appropriate resources by static or dynamic scheduling to make cloud computing more efficient and thus improve user satisfaction. The objective of this work is to introduce and evaluate the proposed scheduling and load balancing algorithm by considering the capabilities of each virtual machine (VM), the task length of each requested job, and the interdependency of multiple tasks. The performance of the proposed algorithm is studied by comparing it with the existing methods. PMID:26955656
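A greatly simplified version of initial placement under the considerations these abstracts list: estimate each task's finish time on each VM from VM capability (MIPS) and task length (MI), honor parent dependencies, and pick the earliest finisher. The MIPS ratings, task lengths, and DAG below are illustrative assumptions:

```python
# Toy sketch of non-preemptive initial placement: each task goes to the VM
# with the earliest estimated finish, after its parent tasks complete.
# MIPS ratings, task lengths (MI), and the DAG below are illustrative.

vms = {"vm0": 1000.0, "vm1": 2000.0, "vm2": 500.0}        # MIPS
tasks = {"t1": (8000, []), "t2": (4000, ["t1"]),          # (length MI, parents)
         "t3": (6000, ["t1"]), "t4": (2000, ["t2", "t3"])}

ready = {v: 0.0 for v in vms}        # VM ready times
finish = {}                          # task finish times
for name, (length, parents) in tasks.items():             # topological order
    earliest = max((finish[p] for p in parents), default=0.0)
    best = min(vms, key=lambda v: max(ready[v], earliest) + length / vms[v])
    start = max(ready[best], earliest)
    finish[name] = start + length / vms[best]
    ready[best] = finish[name]
    print(f"{name} -> {best}, finishes at {finish[name]:.1f}s")
```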
Peer-to-peer Cooperative Scheduling Architecture for National Grid Infrastructure
NASA Astrophysics Data System (ADS)
Matyska, Ludek; Ruda, Miroslav; Toth, Simon
For some ten years, the Czech National Grid Infrastructure MetaCentrum has used a single central PBSPro installation to schedule jobs across the country. This centralized approach keeps full track of all the clusters, providing support for jobs spanning several sites, an implementation of the fair-share policy, and better overall control of the grid environment. Despite steady progress in increased stability and resilience to intermittent very short network failures, the growing number of sites and processors makes this architecture, with its single point of failure and scalability limits, obsolete. As a result, a new scheduling architecture is proposed, which relies on higher autonomy of clusters. It is based on a peer-to-peer network of semi-independent schedulers for each site or even cluster. Each scheduler accepts jobs for the whole infrastructure, cooperating with other schedulers on the implementation of global policies like central job accounting, fair-share, or submission of jobs across several sites. The scheduling system is integrated with the Magrathea system to support scheduling of virtual clusters, including the setup of their internal network, again eventually spanning several sites. On the other hand, each scheduler is local to one of several clusters and is able to directly control and submit jobs to them even if the connection to other scheduling peers is lost. In parallel to the change of the overall architecture, the scheduling system itself is being replaced. Instead of PBSPro, chosen originally for its declared support of large-scale distributed environments, the new scheduling architecture is based on the open-source Torque system. The implementation and support for the most desired properties in PBSPro and Torque are discussed, and the necessary modifications to Torque to support the MetaCentrum scheduling architecture are presented, too.
Single molecule thermodynamics in biological motors.
Taniguchi, Yuichi; Karagiannis, Peter; Nishiyama, Masayoshi; Ishii, Yoshiharu; Yanagida, Toshio
2007-04-01
Biological molecular machines use thermal activation energy to carry out various functions. The process of thermal activation gives the output events a stochastic nature that can be described according to the laws of thermodynamics. Recently developed single molecule detection techniques have allowed each distinct enzymatic event of single biological machines to be characterized, providing clues to the underlying thermodynamics. In this study, the thermodynamic properties of the stepping movement of a biological molecular motor have been examined. A single molecule detection technique was used to measure the stepping movements at various loads and temperatures, and a range of thermodynamic parameters associated with the production of each forward and backward step, including free energy, enthalpy, entropy and characteristic distance, were obtained. The results show that an asymmetry in entropy is a primary factor that controls the direction in which the motor will step. The investigation of single molecule thermodynamics has the potential to reveal dynamic properties underlying the mechanisms of how biological molecular machines work.
SLURM: Simple Linux Utility for Resource Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jette, M; Dunlap, C; Garlick, J
2002-04-24
Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, and scheduling modules. The design also includes a scalable, general-purpose communication infrastructure. Development will take place in four phases: Phase I results in a solid infrastructure; Phase II produces a functional but limited interactive job initiation capability without use of the interconnect/switch; Phase III provides switch support and documentation; Phase IV provides job status, fault-tolerance, and job queuing and control through Livermore's Distributed Production Control System (DPCS), a meta-batch and resource management system.
Yamaguchi, Akemi; Matsuda, Kazuyuki; Sueki, Akane; Taira, Chiaki; Uehara, Masayuki; Saito, Yasunori; Honda, Takayuki
2015-08-25
Reverse transcription (RT)-nested polymerase chain reaction (PCR) is a time-consuming procedure because it has several handling steps and is associated with the risk of cross-contamination during each step. Therefore, a rapid and sensitive one-step RT-nested PCR was developed that could be performed in a single tube using a droplet-PCR machine. The K562 BCR-ABL mRNA-positive cell line as well as bone marrow aspirates from 5 patients with chronic myelogenous leukemia (CML) and 5 controls without CML were used. We evaluated the one-step RT-nested PCR using the droplet-PCR machine. One-step RT-nested PCR performed in a single tube using the droplet-PCR machine enabled the detection of BCR-ABL mRNA within 40 min, which was 10^3-fold superior to conventional RT-nested PCR using three steps in separate tubes. The sensitivity of the one-step RT-nested PCR was 0.001%, with sample reactivity comparable to that of the conventional assay. This one-step RT-nested PCR, which enables all reactions to be performed in a single tube accurately, rapidly, and with high sensitivity, may be applicable to a wide spectrum of genetic tests in clinical laboratories.
78 FR 55219 - Safety Zone; Flying Machine Competition, Chicago, IL
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-10
... event has been scheduled by a commercial entity to take place from 11 a.m. until 4 p.m. on September 21... adversely alter the budget of any grant or loan recipients, and will not raise any novel legal or policy...-scene representative. 2. Impact on Small Entities The Regulatory Flexibility Act of 1980 (RFA), 5 U.S.C...
Reactor operations informal monthly report, May 1, 1995--May 31, 1995
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hauptman, H.M.; Petro, J.N.; Jacobi, O.
1995-05-01
This document is an informal progress report on the operational performance of the Brookhaven Medical Research Reactor and the Brookhaven High Flux Beam Reactor for the month of May 1995. Both machines ran well during this period, with no reportable instrumentation problems, all scheduled maintenance performed, and only one reportable occurrence, involving a particle on a vest button (personnel radioactive contamination).
Evaluating Data Clustering Approach for Life-Cycle Facility Control
2013-04-01
produce 90% matching accuracy with noise/variations up to 55%. KEYWORDS: Building Information Modelling (BIM), machine learning, pattern detection...reconciled to building information model elements and ultimately to an expected resource utilization schedule. The motivation for this integration is to...by interoperable data sources and building information models. Building performance modelling and simulation efforts such as those by Maile et al
36 CFR 1225.24 - When can an agency apply previously approved schedules to electronic records?
Code of Federal Regulations, 2010 CFR
2010-07-01
... may apply a previously approved schedule for hard copy records to electronic versions of the permanent records when the electronic records system replaces a single series of hard copy permanent records or the... have been previously scheduled as permanent in hard copy form, including special media records as...
Family Roles as Moderators of the Relationship between Schedule Flexibility and Stress
ERIC Educational Resources Information Center
Jang, Soo Jung; Zippay, Allison; Park, Rhokeun
2012-01-01
Employer initiatives that address the spillover of work strain onto family life include flexible work schedules. This study explored the mediating role of negative work-family spillover in the relationship between schedule flexibility and employee stress and the moderating roles of gender, family workload, and single-parent status. Data were drawn…
Optimal load scheduling in commercial and residential microgrids
NASA Astrophysics Data System (ADS)
Ganji Tanha, Mohammad Mahdi
Residential and commercial electricity customers use more than two thirds of the total energy consumed in the United States, representing a significant resource for demand response. Price-based demand response, which responds to changes in electricity prices, represents the adjustment of load through optimal load scheduling (OLS). In this study, an efficient model for OLS is developed for residential and commercial microgrids which include aggregated loads in single units and communal loads. Single-unit loads, which include fixed, adjustable and shiftable loads, are controllable by the unit occupants. Communal loads, which include pool pumps, elevators and central heating/cooling systems, are shared among the units. In order to optimally schedule residential and commercial loads, a community-based optimal load scheduling (CBOLS) approach is proposed in this thesis. The CBOLS schedule considers hourly market prices, occupants' comfort level, and microgrid operation constraints. The CBOLS objective in residential and commercial microgrids is the constrained minimization of the total cost of supplying the aggregator load, defined as the microgrid load minus the microgrid generation. This problem is represented by a large-scale mixed-integer optimization for supplying single-unit and communal loads. The Lagrangian relaxation methodology is used to relax the linking communal load constraint and decompose the problem into independent single-unit subproblems which can be solved in parallel. The optimal solution is acceptable if the aggregator load limit and the duality gap are within bounds. If either criterion is not satisfied, the Lagrangian multipliers are updated and a new optimal load schedule is regenerated until both constraints are satisfied. The proposed method is applied to several case studies and the results are presented for the Galvin Center load on the 16th floor of the IIT Tower in Chicago.
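To make the decomposition concrete, the following is a minimal Python sketch of a subgradient-style Lagrangian loop of the kind described above. Everything here is an illustrative assumption rather than the thesis's actual code: the unit objects, their solve_subproblem method, and the hourly communal_capacity vector are hypothetical stand-ins.

    def lagrangian_schedule(units, communal_capacity, hours=24, iters=100, step0=1.0):
        lam = [0.0] * hours  # multipliers pricing the linking communal-load constraint
        schedules = []
        for k in range(1, iters + 1):
            # Decomposed stage: each unit solves its own schedule (in parallel in
            # principle), seeing the communal resource priced by the multipliers.
            schedules = [unit.solve_subproblem(lam) for unit in units]  # hypothetical method
            # Measure the violation of the linking communal-load constraint per hour.
            usage = [sum(s[t] for s in schedules) for t in range(hours)]
            subgrad = [usage[t] - communal_capacity[t] for t in range(hours)]
            if max(abs(g) for g in subgrad) < 1e-3:  # proxy for "duality gap within bounds"
                break
            # Subgradient update with a diminishing step size.
            step = step0 / k
            lam = [max(0.0, lam[t] + step * subgrad[t]) for t in range(hours)]
        return schedules, lam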
Environmental concept for engineering software on MIMD computers
NASA Technical Reports Server (NTRS)
Lopez, L. A.; Valimohamed, K.
1989-01-01
The issues related to developing an environment in which engineering systems can be implemented on MIMD machines are discussed. The problem is presented in terms of implementing the finite element method under such an environment. However, neither the concepts nor the prototype implementation environment are limited to this application. The topics discussed include: the ability to schedule and synchronize tasks efficiently; granularity of tasks; load balancing; and the use of a high-level language to specify parallel constructs, manage data, and achieve portability. The objective of developing a virtual machine concept that incorporates solutions to the above issues leads to a design that can be mapped onto loosely coupled, tightly coupled, and hybrid systems.
NASA Astrophysics Data System (ADS)
Ramulu, M.; Rogers, E.
1994-04-01
The predominant machining application for graphite/epoxy composite materials in the aerospace industry is peripheral trimming. The computer numerically controlled (CNC) high-speed routers required to do edge trimming work are generally scheduled for production work in industry and are not available for extensive cutter testing. Therefore, an experimental method of simulating the conditions of peripheral trimming using a lathe is developed in this paper. The validity of the test technique will be demonstrated by conducting carbide tool wear tests under dry cutting conditions. The experimental results will be analyzed to characterize the wear behavior of carbide cutting tools in machining the composite materials.
Vesikari, Timo; Hardt, Roland; Rümke, Hans C; Icardi, Giancarlo; Montero, Jordi; Thomas, Stéphane; Sadorge, Christine; Fiquet, Anne
2013-04-01
Disease protection provided by herpes zoster (HZ) vaccination tends to reduce as age increases. This study was designed to ascertain whether a second dose of the HZ vaccine, Zostavax(®), would increase varicella zoster virus (VZV)-specific immune response among individuals aged ≥ 70 y. Individuals aged ≥ 70 y were randomized to receive HZ vaccine in one of three schedules: a single dose (0.65 mL), two doses at 0 and 1 mo, or two doses at 0 and 3 mo. VZV antibody titers were measured at baseline, 4 weeks after each vaccine dose, and 12 mo after the last dose. In total, 759 participants (mean age 76.1 y) were randomized to receive vaccination. Antibody responses were similar after a single dose or two doses of HZ vaccine [post-dose 2/post-dose 1 geometric mean titer (GMT) ratios for the 1-mo and 3-mo schedules were 1.11, 95% confidence interval (CI) 1.02-1.22, and 0.78, 95% CI 0.73-0.85, respectively]. The 12-mo post-dose 2/12-mo post-dose 1 GMT ratio was similar for the 1-mo schedule and for the 3-mo schedule (1.06, 95% CI 0.96-1.17 and 1.08, 95% CI 0.98-1.19, respectively). Similar immune responses were observed in participants aged 70-79 y and those aged ≥ 80 y. HZ vaccine was generally well tolerated, with no evidence of increased adverse event incidence after the second dose with either schedule. Compared with a single-dose regimen, two-dose vaccination did not increase VZV antibody responses among individuals aged ≥ 70 y. Antibody persistence after 12 mo was similar with all three schedules.
Relative optical navigation around small bodies via Extreme Learning Machine
NASA Astrophysics Data System (ADS)
Law, Andrew M.
To perform close proximity operations in a low-gravity environment, relative and absolute positions are vital information for the maneuver. Hence navigation is inseparably integrated into space travel. Extreme Learning Machine (ELM) is presented as an optical navigation method around small celestial bodies. Optical navigation uses visual observation instruments such as a camera to acquire useful data and determine spacecraft position. The required input data for operation are merely a single image strip and a nadir image. ELM is a machine learning method for single-hidden-layer feedforward networks (SLFNs), a type of neural network (NN). The algorithm is developed on the premise that input weights and biases can be randomly assigned and that no back-propagation is required. The learned model is the set of output-layer weights, which are used to calculate a prediction. Together, Extreme Learning Machine Optical Navigation (ELM OpNav) utilizes optical images and the ELM algorithm to train the machine to navigate around a target body. In this thesis the asteroid Vesta is the designated celestial body. The trained ELMs estimate the position of the spacecraft during operation with a single data set. The results show the approach is promising and potentially suitable for on-board navigation.
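Since the abstract leans on ELM's one-shot training scheme, a minimal NumPy sketch may help: random, untrained input weights followed by a single least-squares solve for the output weights. The sigmoid activation and hidden-layer size are assumptions, not details taken from the thesis.

    import numpy as np

    def elm_train(X, Y, n_hidden=100, seed=0):
        rng = np.random.default_rng(seed)
        W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights, never trained
        b = rng.normal(size=n_hidden)                # random biases
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))       # hidden-layer activations
        beta = np.linalg.pinv(H) @ Y                 # output weights: one linear solve
        return W, b, beta

    def elm_predict(X, W, b, beta):
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
        return H @ beta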
Spofford, Christina M; Bayman, Emine O; Szeluga, Debra J; From, Robert P
2012-01-01
Novel methods for teaching are needed to enhance the efficiency of academic anesthesia departments as well as provide approaches to learning that are aligned with current trends and advances in technology. A video was produced that taught the key elements of anesthesia machine checkout and room setup. Novice learners were randomly assigned to receive either the new video format or the traditional lecture-based format for this topic during their regularly scheduled lecture series. The primary outcome was the difference in written examination score before and after teaching between the two groups. The secondary outcome was the satisfaction score of the trainees in the two groups. Forty-two students assigned to the video group and 36 students assigned to the lecture group completed the study. Students in the two groups had similar interest in anesthesia, pre-test scores, post-test scores, and final exam scores. The median posttest-to-pretest difference was greater in the video group (3.5 (3.0-5.0) vs 2.5 (2.0-3.0) for the video and lecture groups respectively, p = 0.002). Despite improved test scores, students reported higher satisfaction with the traditional, lecture-based format (22.0 (18.0-24.0) vs 24.0 (20.0-28.0) for the video and lecture groups respectively, p < 0.004). Higher pre-test to post-test improvements were observed among students in the video-based teaching group; however, students rated traditional, live lectures higher than newer video-based teaching.
Tool simplifies machining of pipe ends for precision welding
NASA Technical Reports Server (NTRS)
Matus, S. T.
1969-01-01
Single tool prepares a pipe end for precision welding by simultaneously performing internal machining, end facing, and bevel cutting to specification standards. The machining operation requires only one milling adjustment, can be performed quickly, and produces the high quality pipe-end configurations required to ensure precision-welded joints.
Abstract quantum computing machines and quantum computational logics
NASA Astrophysics Data System (ADS)
Chiara, Maria Luisa Dalla; Giuntini, Roberto; Sergioli, Giuseppe; Leporini, Roberto
2016-06-01
Classical and quantum parallelism are deeply different, although it is sometimes claimed that quantum Turing machines are nothing but special examples of classical probabilistic machines. We introduce the concepts of deterministic state machine, classical probabilistic state machine and quantum state machine. On this basis, we discuss the question: To what extent can quantum state machines be simulated by classical probabilistic state machines? Each state machine is devoted to a single task determined by its program. Real computers, however, behave differently, being able to solve different kinds of problems. This capacity can be modeled, in the quantum case, by the mathematical notion of abstract quantum computing machine, whose different programs determine different quantum state machines. The computations of abstract quantum computing machines can be linguistically described by the formulas of a particular form of quantum logic, termed quantum computational logic.
A Network Flow Approach to the Initial Skills Training Scheduling Problem
2007-12-01
include (but are not limited to) queuing theory, stochastic analysis and simulation. After the demand schedule has been estimated, it can be ...software package has already been purchased and is in use by AFPC, AFPC has requested that the new algorithm be programmed in this language as well ...the discussed outputs from those schedules. Required Inputs A single input file details the students to be scheduled as well as the courses
Design and Development of an Engineering Prototype Compact X-Ray Scanner (FMS 5000)
1989-03-31
machined by "wire-EDM" (electro-discharge machining). Three different slice thicknesses can be selected from the scan menu. The set of slice thicknesses...circuit. This type of circuit is used whenever more than ten kilowatts of power are needed by a machine. For example, lathes and milling machines in a...machine shop usually use this type of input power. A three-phase circuit delivers power more efficiently than a single-phase circuit because three
A Sensor-Based Method for Diagnostics of Machine Tool Linear Axes.
Vogl, Gregory W; Weiss, Brian A; Donmez, M Alkan
2015-01-01
A linear axis is a vital subsystem of machine tools, which are vital systems within many manufacturing operations. When installed and operating within a manufacturing facility, a machine tool needs to stay in good condition for parts production. All machine tools degrade during operations, yet knowledge of that degradation is elusive; specifically, accurately detecting degradation of linear axes is a manual and time-consuming process. Thus, manufacturers need automated and efficient methods to diagnose the condition of their machine tool linear axes without disruptions to production. The Prognostics and Health Management for Smart Manufacturing Systems (PHM4SMS) project at the National Institute of Standards and Technology (NIST) developed a sensor-based method to quickly estimate the performance degradation of linear axes. The multi-sensor-based method uses data collected from a 'sensor box' to identify changes in linear and angular errors due to axis degradation; the sensor box contains inclinometers, accelerometers, and rate gyroscopes to capture this data. The sensors are expected to be cost effective with respect to savings in production losses and scrapped parts for a machine tool. Numerical simulations, based on sensor bandwidth and noise specifications, show that changes in straightness and angular errors could be known with acceptable test uncertainty ratios. If a sensor box resides on a machine tool and data is collected periodically, then the degradation of the linear axes can be determined and used for diagnostics and prognostics to help optimize maintenance, production schedules, and ultimately part quality.
Scheduling time-critical graphics on multiple processors
NASA Technical Reports Server (NTRS)
Meyer, Tom W.; Hughes, John F.
1995-01-01
This paper describes an algorithm for the scheduling of time-critical rendering and computation tasks on single- and multiple-processor architectures, with minimal pipelining. It was developed to manage scientific visualization scenes consisting of hundreds of objects, each of which can be computed and displayed at thousands of possible resolution levels. The algorithm generates the time-critical schedule using progressive-refinement techniques; it always returns a feasible schedule and, when allowed to run to completion, produces a near-optimal schedule which takes advantage of almost the entire multiple-processor system.
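As an illustration of the progressive-refinement idea, here is a small Python sketch under assumed data structures (each object carries a coarse-to-fine list of (cost, benefit) levels; the paper's actual scheduler and cost model are not reproduced): start from the always-feasible coarsest schedule, then greedily upgrade whichever object gives the best benefit per unit of frame time until the budget runs out.

    import heapq

    def refine_schedule(objects, budget):
        # objects: list of dicts {"levels": [(cost, benefit), ...]}, coarse -> fine
        choice = [0] * len(objects)                  # coarsest levels: feasible baseline
        spent = sum(o["levels"][0][0] for o in objects)
        heap = []
        for i, o in enumerate(objects):
            if len(o["levels"]) > 1:
                dc = o["levels"][1][0] - o["levels"][0][0]
                db = o["levels"][1][1] - o["levels"][0][1]
                heapq.heappush(heap, (-db / dc, i))  # best benefit-per-cost first
        while heap:
            _, i = heapq.heappop(heap)
            lv, o = choice[i], objects[i]
            dc = o["levels"][lv + 1][0] - o["levels"][lv][0]
            if spent + dc > budget:
                continue                             # this upgrade would overrun the frame
            spent += dc
            choice[i] = lv + 1
            if choice[i] + 1 < len(o["levels"]):
                dc2 = o["levels"][choice[i] + 1][0] - o["levels"][choice[i]][0]
                db2 = o["levels"][choice[i] + 1][1] - o["levels"][choice[i]][1]
                heapq.heappush(heap, (-db2 / dc2, i))
        return choice, spent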
75 FR 52461 - Drawbridge Operation Regulation; Pocomoke River, Snow Hill, MD
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-26
... this single leaf bascule drawbridge, has requested a temporary deviation from the current operating schedule to facilitate cleaning and painting the structure. Under the regular operating schedule, the...
Software platform virtualization in chemistry research and university teaching.
Kind, Tobias; Leamy, Tim; Leary, Julie A; Fiehn, Oliver
2009-11-16
Modern chemistry laboratories operate with a wide range of software applications under different operating systems, such as Windows, LINUX or Mac OS X. Instead of installing software on different computers it is possible to install those applications on a single computer using Virtual Machine software. Software platform virtualization allows a single guest operating system to execute multiple other operating systems on the same computer. We apply and discuss the use of virtual machines in chemistry research and teaching laboratories. Virtual machines are commonly used for cheminformatics software development and testing. Benchmarking multiple chemistry software packages we have confirmed that the computational speed penalty for using virtual machines is low and around 5% to 10%. Software virtualization in a teaching environment allows faster deployment and easy use of commercial and open source software in hands-on computer teaching labs. Software virtualization in chemistry, mass spectrometry and cheminformatics is needed for software testing and development of software for different operating systems. In order to obtain maximum performance the virtualization software should be multi-core enabled and allow the use of multiprocessor configurations in the virtual machine environment. Server consolidation, by running multiple tasks and operating systems on a single physical machine, can lead to lower maintenance and hardware costs especially in small research labs. The use of virtual machines can prevent software virus infections and security breaches when used as a sandbox system for internet access and software testing. Complex software setups can be created with virtual machines and are easily deployed later to multiple computers for hands-on teaching classes. We discuss the popularity of bioinformatics compared to cheminformatics as well as the missing cheminformatics education at universities worldwide.
NASA Astrophysics Data System (ADS)
Wang, Liping; Ji, Yusheng; Liu, Fuqiang
The integration of multihop relays with orthogonal frequency-division multiple access (OFDMA) cellular infrastructures can meet the growing demands for better coverage and higher throughput. Resource allocation in the OFDMA two-hop relay system is more complex than that in the conventional single-hop OFDMA system. With time division between transmissions from the base station (BS) and those from relay stations (RSs), fixed partitioning of the BS subframe and RS subframes cannot adapt to various traffic demands. Moreover, single-hop scheduling algorithms cannot be used directly in the two-hop system. Therefore, we propose a semi-distributed algorithm called ASP to adjust the length of every subframe adaptively, and suggest two ways to extend single-hop scheduling algorithms into multihop scenarios: link-based and end-to-end approaches. Simulation results indicate that the ASP algorithm increases system utilization and fairness. The max carrier-to-interference ratio (Max C/I) and proportional fairness (PF) scheduling algorithms extended using the end-to-end approach obtain higher throughput than those using the link-based approach, but at the expense of more overhead for information exchange between the BS and RSs. The resource allocation scheme using ASP and end-to-end PF scheduling achieves a tradeoff between system throughput maximization and fairness.
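For readers unfamiliar with the PF rule the abstract extends, a minimal Python sketch follows. Treating a relayed user's end-to-end rate as the bottleneck of its BS-RS and RS-user links is an assumption of this sketch, not necessarily the paper's exact metric.

    def end_to_end_rate(bs_rs_rate, rs_user_rate):
        # Assumed two-hop model: the achievable rate is limited by the weaker link.
        return min(bs_rs_rate, rs_user_rate)

    def pf_pick(rates, avg_tput, eps=1e-9):
        # Serve the user with the best instantaneous rate relative to its history.
        return max(range(len(rates)), key=lambda u: rates[u] / (avg_tput[u] + eps))

    def pf_update(avg_tput, served, rates, alpha=0.05):
        # Exponentially smoothed throughput; unserved users decay toward zero.
        for u in range(len(avg_tput)):
            r = rates[u] if u == served else 0.0
            avg_tput[u] = (1 - alpha) * avg_tput[u] + alpha * r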
Taming Wild Horses: The Need for Virtual Time-based Scheduling of VMs in Network Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoginath, Srikanth B; Perumalla, Kalyan S; Henz, Brian J
2012-01-01
The next generation of scalable network simulators employ virtual machines (VMs) to act as high-fidelity models of traffic producer/consumer nodes in simulated networks. However, network simulations can be inaccurate if VMs are not scheduled according to virtual time, especially when many VMs are hosted per simulator core in a multi-core simulator environment. Since VMs are by default free-running, at the outset it is not clear if, and to what extent, their untamed execution affects the results in simulated scenarios. Here, we provide the first quantitative basis for establishing the need for generalized virtual-time scheduling of VMs in network simulators, based on actual prototyped implementations. To exercise breadth, our system is tested with multiple disparate applications: (a) a set of message passing parallel programs, (b) a computer worm propagation phenomenon, and (c) a mobile ad-hoc wireless network simulation. We define and use error metrics and benchmarks in scaled tests to empirically report the poor match of traditional, fairness-based VM scheduling to VM-based network simulation, and also clearly show the better performance of our simulation-specific scheduler, with up to 64 VMs hosted on a 12-core simulator node.
Non-traditional Sensor Tasking for SSA: A Case Study
NASA Astrophysics Data System (ADS)
Herz, A.; Herz, E.; Center, K.; Martinez, I.; Favero, N.; Clark, C.; Therien, W.; Jeffries, M.
Industry has recognized that maintaining SSA of the orbital environment going forward is too challenging for the government alone. Consequently, there are a significant number of commercial activities in various stages of development standing up novel sensors and sensor networks to assist in SSA gathering and dissemination. Use of these systems will allow government and military operators to focus on the most sensitive space control issues while allocating routine or lower-priority data gathering responsibility to the commercial side. The fact that there will be multiple (perhaps many) commercial sensor capabilities available in this new operational model begets a common access solution. Absent a central access point to assert data needs, optimized use of all commercial sensor resources is not possible and the opportunity for coordinated collections satisfying overarching SSA-elevating objectives is lost. Orbit Logic is maturing its Heimdall Web system, an architecture facilitating "data requestor" perspectives (allowing government operations centers to assert SSA data gathering objectives) and "sensor operator" perspectives (through which multiple sensors of varying phenomenology and capability are integrated via machine-to-machine interfaces). When requestors submit their needs, Heimdall's planning engine determines tasking schedules across all sensors, optimizing their use via an SSA-specific figure of merit. ExoAnalytic was a key partner in refining the sensor operator interfaces, working with Orbit Logic through specific details of sensor tasking schedule delivery and the return of observation data. Scant preparation on both sides preceded several integration exercises (walk-then-run style), which culminated in successful demonstration of the ability to supply optimized schedules for routine public catalog data collection, and then to adapt sensor tasking schedules in real time upon receipt of urgent data collection requests. This paper will provide a narrative of the joint integration process, detailing decision points, compromises, and results obtained on the road toward a set of interoperability standards for commercial sensor accommodation.
Towards a molecular logic machine
NASA Astrophysics Data System (ADS)
Remacle, F.; Levine, R. D.
2001-06-01
Finite state logic machines can be realized by pump-probe spectroscopic experiments on an isolated molecule. The most elaborate setup, a Turing machine, can be programmed to carry out a specific computation. We argue that a molecule can be similarly programmed, and provide examples using two photon spectroscopies. The states of the molecule serve as the possible states of the head of the Turing machine and the physics of the problem determines the possible instructions of the program. The tape is written in an alphabet that allows the listing of the different pump and probe signals that are applied in a given experiment. Different experiments using the same set of molecular levels correspond to different tapes that can be read and processed by the same head and program. The analogy to a Turing machine is not a mechanical one and is not completely molecular because the tape is not part of the molecular machine. We therefore also discuss molecular finite state machines, such as sequential devices, for which the tape is not part of the machine. Nonmolecular tapes allow for quite long input sequences with a rich alphabet (at the level of 7 bits) and laser pulse shaping experiments provide concrete examples. Single molecule spectroscopies show that a single molecule can be repeatedly cycled through a logical operation.
Resistance to Extinction Following Variable-Interval Reinforcement: Reinforcer Rate and Amount
ERIC Educational Resources Information Center
Shull, Richard L.; Grimes, Julie A.
2006-01-01
Rats obtained food-pellet reinforcers by nose poking a lighted key. Experiment 1 examined resistance to extinction following single-schedule training with different variable-interval schedules, ranging from a mean interval of 16 min to 0.25 min. That is, for each schedule, the rats received 20 consecutive daily baseline sessions and then a session…
Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong
2017-06-19
A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Unlike existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of the base kernels are regarded as external parameters of the single-hidden-layer feedforward neural networks (SLFNs). The combination coefficients of the base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results demonstrate that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification.
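A small NumPy sketch of the two pieces the abstract combines, a weighted composite kernel and the closed-form KELM solve, may clarify what QPSO is tuning. The Gaussian-only base kernels, their parameters gammas, and the weights mu are hypothetical inputs here; the QPSO search loop itself is omitted.

    import numpy as np

    def composite_kernel(X1, X2, mu, gammas):
        # Weighted sum of Gaussian base kernels; mu are the combination coefficients.
        d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
        return sum(m * np.exp(-g * d2) for m, g in zip(mu, gammas))

    def kelm_fit(X, Y, mu, gammas, C=10.0):
        K = composite_kernel(X, X, mu, gammas)
        return np.linalg.solve(K + np.eye(len(X)) / C, Y)   # closed-form KELM weights

    def kelm_predict(Xnew, X, alpha, mu, gammas):
        return composite_kernel(Xnew, X, mu, gammas) @ alpha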
DOE Office of Scientific and Technical Information (OSTI.GOV)
Castellana, Vito G.; Tumeo, Antonino; Ferrandi, Fabrizio
Emerging applications such as data mining, bioinformatics, knowledge discovery, and social network analysis are irregular. They use data structures based on pointers or linked lists, such as graphs, unbalanced trees or unstructured grids, which generate unpredictable memory accesses. These data structures usually are large, but difficult to partition. These applications mostly are memory-bandwidth bound and have high synchronization intensity. However, they also have large amounts of inherent dynamic parallelism, because they potentially perform a task for each one of the elements they are exploring. Several efforts are looking at accelerating these applications on hybrid architectures, which integrate general purpose processors with reconfigurable devices. Some solutions, which demonstrated significant speedups, include custom hand-tuned accelerators or even full processor architectures on the reconfigurable logic. In this paper we present an approach for the automatic synthesis of accelerators from C, targeted at irregular applications. In contrast to typical High Level Synthesis paradigms, which construct a centralized Finite State Machine, our approach generates dynamically scheduled hardware components. While parallelism exploitation in typical HLS-generated accelerators is usually bound within a single execution flow, our solution allows multiple execution flows to run concurrently, thus also exploiting the coarser-grain task parallelism of irregular applications. Our approach supports multiple, multi-ported and distributed memories, and atomic memory operations. Its main objective is parallelizing as many memory operations as possible, independently of their execution time, to maximize memory bandwidth utilization. This significantly differs from current HLS flows, which usually consider a single memory port and require precise scheduling of memory operations. A key innovation of our approach is the generation of a memory interface controller, which dynamically maps concurrent memory accesses to multiple ports. We present a case study on a typical irregular kernel, graph Breadth-First Search (BFS), exploring different tradeoffs in terms of parallelism and number of memories.
Machine-Aided Indexing of Technical Literature
ERIC Educational Resources Information Center
Klingbiel, Paul H.
1973-01-01
To index at the Defense Documentation Center (DDC), an automated system must choose single words or phrases rapidly and economically. Automation of DDC's indexing has been machine-aided from its inception. A machine-aided indexing system is described that indexes one million words of text per hour of CPU time. (22 references) (Author/SJ)
THE ROLE OF REVIEW MATERIAL IN CONTINUOUS PROGRAMMING WITH TEACHING MACHINES.
ERIC Educational Resources Information Center
FERSTER, C.B.
STUDENTS WERE PRESENTED 61 LESSONS BY MEANS OF SEMIAUTOMATIC TEACHING MACHINES. LESSONS WERE ARRANGED SO THAT EACH PARTICIPATING STUDENT STUDIED PART OF THE COURSE MATERIAL WITH A SINGLE REPETITION AND PART WITHOUT REPETITION. DATA WERE OBTAINED FROM TWO TESTS SHOWING TEACHING-MACHINE RESULTS AND ONE FINAL COURSE EXAMINATION. NO SIGNIFICANT…
Impact of the HEALTHY study on vending machine offerings in middle schools
USDA-ARS?s Scientific Manuscript database
The purpose of this study is to report the impact of the three-year middle school-based HEALTHY study on intervention school vending machine offerings. There were two goals for the vending machines: serve only dessert/snack foods with 200 kilocalories or less per single serving package, and eliminat...
The 1991 Goddard Conference on Space Applications of Artificial Intelligence
NASA Technical Reports Server (NTRS)
Rash, James L. (Editor)
1991-01-01
The purpose of this annual conference is to provide a forum in which current research and development directed at space applications of artificial intelligence can be presented and discussed. The papers in this proceeding fall into the following areas: Planning and scheduling, fault monitoring/diagnosis/recovery, machine vision, robotics, system development, information management, knowledge acquisition and representation, distributed systems, tools, neural networks, and miscellaneous applications.
2015-02-10
ISS042E236075 (02/10/2015) --- Astronauts in space must exercise regularly to keep muscles from deteriorating. The busy schedule aboard the International Space Station has these regular periods worked in as NASA astronaut Terry Virts shows in this Tweet he sent out on Feb. 10, 2015 with the comment: "Periodic Fitness Evaluation- riding the bike with a heart rate monitor, EKG, and blood pressure machine hooked up".
TDRSS operations control analysis study
NASA Technical Reports Server (NTRS)
1976-01-01
The use of an operational Tracking and Data Relay Satellite System (TDRSS) and the remaining ground stations for the STDN (GSTDN) was investigated. The operational aspects of TDRSS concepts, GSTDN as a 14-site network, and GSTDN as a 7 site-network were compared and operations control concepts for the configurations developed. Man/machine interface, scheduling system, and hardware/software tradeoff analyses were among the factors considered in the analysis.
NASA Astrophysics Data System (ADS)
Tang, Dunbing; Dai, Min
2015-09-01
Traditional production planning and scheduling problems consider performance indicators like time, cost and quality as optimization objectives in manufacturing processes. However, environmentally-friendly factors like the energy consumption of production have not been completely taken into consideration. Against this background, this paper addresses an approach to modify a given schedule generated by a production planning and scheduling system in a job shop floor, where machine tools can work at different cutting speeds. It can adjust the cutting speeds of the operations while keeping the original assignment and processing sequence of operations of each job fixed in order to obtain energy savings. First, the proposed approach, based on a mixed integer programming mathematical model, changes the total idle time of the given schedule to minimize energy consumption in the job shop floor while accepting the optimal solution of the scheduling objective, makespan. Then, a genetic-simulated annealing algorithm is used to explore the optimal solution, due to the fact that the problem is strongly NP-hard. Finally, the effectiveness of the approach is evaluated on small- and large-size instances, respectively. The experimental results show that the approach can save 5%-10% of the average energy consumption while accepting the optimal solution of the makespan in small-size instances. In addition, the average maximum energy saving ratio can reach 13%. The approach can also save approximately 1%-4% of the average energy consumption and approximately 2.4% of the average maximum energy while accepting a near-optimal solution of the makespan in large-size instances. The proposed research provides an interesting starting point for exploring energy-aware schedule optimization in traditional production planning and scheduling problems.
Multi-core processing and scheduling performance in CMS
NASA Astrophysics Data System (ADS)
Hernández, J. M.; Evans, D.; Foulkes, S.
2012-12-01
Commodity hardware is going many-core. We might soon not be able to satisfy the per-core job memory needs in the current single-core processing model in High Energy Physics. In addition, an ever increasing number of independent and incoherent jobs running on the same physical hardware without sharing resources might significantly affect processing performance. It will be essential to effectively utilize the multi-core architecture. CMS has incorporated support for multi-core processing in the event processing framework and the workload management system. Multi-core processing jobs share common data in memory, such as the code libraries, detector geometry and conditions data, resulting in much lower memory usage than standard single-core independent jobs. Exploiting this new processing model requires a new model of computing resource allocation, departing from the standard single-core allocation for a job. The experiment job management system needs to have control over a larger quantum of resource, since multi-core aware jobs require the scheduling of multiple cores simultaneously. CMS is exploring the approach of using whole nodes as the unit in the workload management system, where all cores of a node are allocated to a multi-core job. Whole-node scheduling allows for optimization of the data/workflow management (e.g. I/O caching, local merging), but efficient utilization of all scheduled cores is challenging. Dedicated whole-node queues have been set up at all Tier-1 centers for exploring multi-core processing workflows in CMS. We present an evaluation of the performance of scheduling and executing multi-core workflows in whole-node queues compared to the standard single-core processing workflows.
Aerial Refueling Process Rescheduling Under Job Related Disruptions
NASA Technical Reports Server (NTRS)
Kaplan, Sezgin; Rabadi, Ghaith
2011-01-01
The Aerial Refueling Scheduling Problem (ARSP) can be defined as determining the refueling completion times for each fighter aircraft (job) on the multiple tankers (machines) to minimize the total weighted tardiness. ARSP assumes that the jobs have different release times and due dates. The ARSP takes place in a dynamic environment in which unexpected events may occur. In this paper, rescheduling of the aerial refueling process with a dynamically changing set of jobs is studied to deal with job-related disruptions such as the arrival of new jobs, the departure of an existing job, high deviations in the release times, and changes in job priorities. In order to maintain stability and to avoid excessive computation, a partial schedule repair algorithm is developed and its preliminary results are presented.
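As one concrete way to repair a disrupted tanker schedule, the sketch below applies the classic Apparent Tardiness Cost (ATC) dispatching rule to the still-unfinished jobs on identical parallel machines. The rule, the tuple layout (release, due, weight, processing time), and the look-ahead parameter k are illustrative assumptions, not the paper's repair algorithm.

    import heapq, math

    def atc_index(job, t, k, pbar):
        r, d, w, p = job
        return (w / p) * math.exp(-max(0.0, d - p - t) / (k * pbar))

    def reschedule(jobs, n_tankers, now=0.0, k=2.0):
        pbar = sum(j[3] for j in jobs) / len(jobs)      # average processing time
        free = [(now, m) for m in range(n_tankers)]     # (time tanker frees up, id)
        heapq.heapify(free)
        pending, plan = list(jobs), []
        while pending:
            t, m = heapq.heappop(free)                  # earliest-free tanker
            ready = [j for j in pending if j[0] <= t] or pending
            j = max(ready, key=lambda job: atc_index(job, t, k, pbar))
            start = max(t, j[0])                        # respect the release time
            plan.append((m, j, start, start + j[3]))
            heapq.heappush(free, (start + j[3], m))
            pending.remove(j)
        return plan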
Roetker, Nicholas S; Page, C David; Yonker, James A; Chang, Vicky; Roan, Carol L; Herd, Pamela; Hauser, Taissa S; Hauser, Robert M; Atwood, Craig S
2013-10-01
We examined depression within a multidimensional framework consisting of genetic, environmental, and sociobehavioral factors and, using machine learning algorithms, explored interactions among these factors that might better explain the etiology of depressive symptoms. We measured current depressive symptoms using the Center for Epidemiologic Studies Depression Scale (n = 6378 participants in the Wisconsin Longitudinal Study). Genetic factors were 78 single nucleotide polymorphisms (SNPs); environmental factors were 13 stressful life events (SLEs) plus a composite proportion-of-SLEs index; and sociobehavioral factors were 18 personality, intelligence, and other health or behavioral measures. We performed traditional SNP associations via logistic regression likelihood ratio testing and explored interactions with support vector machines and Bayesian networks. After correction for multiple testing, we found no significant single genotypic associations with depressive symptoms. Machine learning algorithms showed no evidence of interactions. Naïve Bayes produced the best models in both subsets and included only environmental and sociobehavioral factors. We found no single or interactive associations between genetic factors and depressive symptoms. Various environmental and sociobehavioral factors were more predictive of depressive symptoms, yet their impacts were independent of one another. A genome-wide analysis of genetic alterations using machine learning methodologies will provide a framework for identifying genetic-environmental-sociobehavioral interactions in depressive symptoms.
Performance of a plasma fluid code on the Intel parallel computers
NASA Technical Reports Server (NTRS)
Lynch, V. E.; Carreras, B. A.; Drake, J. B.; Leboeuf, J. N.; Liewer, P.
1992-01-01
One approach to improving the real-time efficiency of plasma turbulence calculations is to use a parallel algorithm. A parallel algorithm for plasma turbulence calculations was tested on the Intel iPSC/860 hypercube and the Touchstone Delta machine. Using the 128 processors of the Intel iPSC/860 hypercube, a factor of 5 improvement over a single-processor CRAY-2 is obtained. For the Touchstone Delta machine, the corresponding improvement factor is 16. For plasma edge turbulence calculations, an extrapolation of the present results to the Intel (sigma) machine gives an improvement factor close to 64 over the single-processor CRAY-2.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlsson, Mats; Johansson, Mikael; Larson, Jeffrey
Previous approaches for scheduling a league with round-robin and divisional tournaments involved decomposing the problem into easier subproblems. This approach, used to schedule the top Swedish handball league Elitserien, reduces the problem complexity but can result in suboptimal schedules. This paper presents an integrated constraint programming model that allows the scheduling to be performed in a single step. Particular attention is given to identifying implied and symmetry-breaking constraints that reduce the computational complexity significantly. The experimental evaluation of the integrated approach takes considerably less computational effort than the previous approach.
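For orientation, the combinatorial core being modeled is a round-robin timetable. The classic "circle method" below generates one in plain Python; the paper's CP model layers the league-specific, implied, and symmetry-breaking constraints on top, which this sketch deliberately omits.

    def round_robin(teams):
        # Circle method: fix the first slot, rotate the rest one step per round.
        ts = list(teams) + ([None] if len(teams) % 2 else [])  # None = bye
        n = len(ts)
        rounds = []
        for _ in range(n - 1):
            pairs = [(ts[i], ts[n - 1 - i]) for i in range(n // 2)
                     if ts[i] is not None and ts[n - 1 - i] is not None]
            rounds.append(pairs)
            ts = [ts[0]] + [ts[-1]] + ts[1:-1]
        return rounds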
NASA Astrophysics Data System (ADS)
Stas, Michiel; Dong, Qinghan; Heremans, Stien; Zhang, Beier; Van Orshoven, Jos
2016-08-01
This paper compares two machine learning techniques to predict regional winter wheat yields. The models, based on Boosted Regression Trees (BRT) and Support Vector Machines (SVM), are constructed from Normalized Difference Vegetation Indices (NDVI) derived from low-resolution SPOT VEGETATION satellite imagery. Three types of NDVI-related predictors were used: Single NDVI, Incremental NDVI and Targeted NDVI. BRT and SVM were first used to select features with high relevance for predicting yield. Although the exact selections differed between the prefectures, certain periods with high influence scores for multiple prefectures could be identified. The same period of high influence, stretching from March to June, was detected by both machine learning methods. After feature selection, BRT and SVM models were applied to the subset of selected features for actual yield forecasting. Whereas both machine learning methods returned very low prediction errors, BRT seems to slightly but consistently outperform SVM.
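The select-then-forecast workflow is easy to mirror with scikit-learn stand-ins (GradientBoostingRegressor for BRT, SVR for SVM). The arrays X (per-prefecture NDVI features) and y (yields) and the keep count are hypothetical, and the paper's exact feature-selection scoring is not reproduced.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.svm import SVR

    def select_then_predict(X, y, keep=10):
        brt = GradientBoostingRegressor().fit(X, y)
        top = np.argsort(brt.feature_importances_)[-keep:]  # highest-relevance periods
        svm = SVR().fit(X[:, top], y)                       # refit on the selected subset
        return top, svm.predict(X[:, top])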
Using container orchestration to improve service management at the RAL Tier-1
NASA Astrophysics Data System (ADS)
Lahiff, Andrew; Collier, Ian
2017-10-01
In recent years container orchestration has been emerging as a means of gaining many potential benefits compared to a traditional static infrastructure, such as increased utilisation through multi-tenancy, improved availability due to self-healing, and the ability to handle changing loads due to elasticity and auto-scaling. To this end we have been investigating migrating services at the RAL Tier-1 to an Apache Mesos cluster. In this model the concept of individual machines is abstracted away and services are run in containers on a cluster of machines, managed by schedulers, enabling a high degree of automation. Here we describe Mesos, the infrastructure deployed at RAL, and describe in detail the explicit example of running a batch farm on Mesos.
U.S. Visa Waiver Program Changes
NASA Astrophysics Data System (ADS)
The U.S. State Department has just announced a change to a new rule affecting citizens of visa waiver program countries. The rule, scheduled to go into effect on 1 October 2003, requires visitors from these countries to obtain non-immigrant visas to enter the United States if they do not have machine-readable passports. The announced change is that a visa waiver country can petition the U.S. government to delay the rule by one year. The State Department recommends that citizens of visa waiver program countries who are contemplating visiting the United States, and do not have machine-readable passports, contact the nearest U.S. embassy or consulate to find out whether implementation of the rule has been temporarily waived for their countries.
Test - Apollo-Soyuz Test Project (ASTP)
1974-07-01
S74-24671 (10 July 1974) --- Three Apollo-Soyuz Test Project (ASTP) engineers look over a Soyuz spacecraft docking system prior to an ASTP docking mechanism fitness test conducted in Building 13 at the Johnson Space Center (JSC). They are (left to right) Robert White, Vladimir Syromyatnikov and Yevgeniy Bobrov. White is the American chairman of ASTP Working Group Number 3, and Syromyatnikov is his Soviet counterpart. This working group is concerned with ASTP docking problems and procedures. White is with JSC's Spacecraft Design Division. Syromyatnikov is senior researcher of the Soviet State Research Institute of Machine Building. Bobrov is a junior researcher with the Institute of Machine Building. The joint United States - USSR ASTP docking mission in Earth orbit is scheduled for the summer of 1975.
Code of Federal Regulations, 2011 CFR
2011-07-01
... performance test of one representative magnet wire coating machine for each group of identical or very similar... you complete the performance test of a representative magnet wire coating machine. The requirements in... operations, you may, with approval, conduct a performance test of a single magnet wire coating machine that...
Impact of the HEALTHY Study on Vending Machine Offerings in Middle Schools
ERIC Educational Resources Information Center
Hartstein, Jill; Cullen, Karen W.; Virus, Amy; El Ghormli, Laure; Volpe, Stella L.; Staten, Myrlene A.; Bridgman, Jessica C.; Stadler, Diane D.; Gillis, Bonnie; McCormick, Sarah B.; Mobley, Connie C.
2011-01-01
Purpose/Objectives: The purpose of this study is to report the impact of the three-year middle school-based HEALTHY study on intervention school vending machine offerings. There were two goals for the vending machines: serve only dessert/snack foods with 200 kilocalories or less per single serving package, and eliminate 100% fruit juice and…
Machine Tests Optical Fibers In Flexure
NASA Technical Reports Server (NTRS)
Darejeh, Hadi; Thomas, Henry; Delcher, Ray
1993-01-01
Machine repeatedly flexes single optical fiber or cable or bundle of optical fibers at low temperature. Liquid nitrogen surrounds specimen as it is bent back and forth by motion of piston. Machine inexpensive to build and operate. Tests under repeatable conditions so candidate fibers, cables, and bundles evaluated for general robustness before subjected to expensive shock and vibration tests.
NASA Technical Reports Server (NTRS)
Withey, James V.
1986-01-01
The validity of real-time software is determined by its ability to execute on a computer within the time constraints of the physical system it is modeling. In many applications the time constraints are so critical that the details of process scheduling are elevated to the requirements analysis phase of the software development cycle. It is not uncommon to find specifications for a real-time cyclic executive program included or assumed in such requirements. It was found that preliminary designs structured around this implementation obscure the data flow of the real-world system that is modeled, and that it is consequently difficult and costly to maintain, update and reuse the resulting software. A cyclic executive is a software component that schedules and implicitly synchronizes the real-time software through periodic and repetitive subroutine calls. Therefore, a design method is sought that allows the deferral of process scheduling to the later stages of design. The appropriate scheduling paradigm must be chosen given the performance constraints, the target environment and the software's lifecycle. The concept of process inversion is explored with respect to the cyclic executive.
Dypas: A dynamic payload scheduler for shuttle missions
NASA Technical Reports Server (NTRS)
Davis, Stephen
1988-01-01
Decision and analysis systems have had broad and very practical application areas in the human decision making process. These software systems range from the help sections in simple accounting packages to the more complex computer configuration programs. Dypas is a decision and analysis system that aids prelaunch shuttle scheduling, and has added functionality to aid the rescheduling done in flight. Dypas is written in Common Lisp on a Symbolics Lisp machine. Dypas differs from other scheduling programs in that it can draw its knowledge from different rule bases and apply them to different rule interpretation schemes. The system has been coded with Flavors, an object-oriented extension to Common Lisp on the Symbolics hardware. This allows implementation of objects (experiments) to better match the problem definition, and allows a more coherent solution space to be developed. Dypas was originally developed to test a programmer's aptitude with Common Lisp and the Symbolics software environment. Since then the system has grown into a large software effort involving several programmers and researchers. Dypas currently uses two expert systems and three inferencing procedures to generate a many-object schedule. The paper reviews the abilities of Dypas and comments on its functionality.
The checkpoint ordering problem
Hungerländer, P.
2017-01-01
We suggest a new variant of a row layout problem: Find an ordering of n departments with given lengths such that the total weighted sum of their distances to a given checkpoint is minimized. The Checkpoint Ordering Problem (COP) is both of theoretical and practical interest. It has several applications and is conceptually related to some well-studied combinatorial optimization problems, namely the Single-Row Facility Layout Problem, the Linear Ordering Problem and a variant of parallel machine scheduling. In this paper we study the complexity of the (COP) and its special cases. The general version of the (COP) with an arbitrary but fixed number of checkpoints is NP-hard in the weak sense. We propose both a dynamic programming algorithm and an integer linear programming approach for the (COP). Our computational experiments indicate that the (COP) is hard to solve in practice. While the run time of the dynamic programming algorithm strongly depends on the length of the departments, the integer linear programming approach is able to solve instances with up to 25 departments to optimality. PMID:29170574
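Since the abstract leaves the algorithms at a high level, a tiny brute-force sketch may help fix the problem definition. It assumes a single checkpoint at a given position, a row starting at coordinate 0, and distances measured from department centres; all of these conventions are illustrative rather than taken from the paper, and exhaustive search is exact only for small n.

```python
# Brute-force sketch of the single-checkpoint COP (illustrative only).
from itertools import permutations

def cop_cost(order, lengths, weights, checkpoint):
    """Total weighted distance of department centres to the checkpoint
    for one left-to-right arrangement starting at position 0."""
    cost, pos = 0.0, 0.0
    for i in order:
        centre = pos + lengths[i] / 2.0
        cost += weights[i] * abs(centre - checkpoint)
        pos += lengths[i]
    return cost

def cop_brute_force(lengths, weights, checkpoint):
    n = len(lengths)
    return min(permutations(range(n)),
               key=lambda p: cop_cost(p, lengths, weights, checkpoint))

# Toy instance with 4 departments.
best = cop_brute_force([3, 1, 4, 2], [2, 5, 1, 3], checkpoint=5.0)
```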
NASA Technical Reports Server (NTRS)
Woodcock, Gordon R.
1990-01-01
The assembly, emplacement, checkout, operation, and maintenance of equipment on planetary surfaces are all part of expanding human presence out into the solar system. A single point design, a reference scenario, is presented for lunar base operations: an initial base, barely more than an outpost, which starts from nothing but then quickly grows to sustain people and produce rocket propellant. The study blended three efforts: conceptual design of all required surface systems; assessments of contemporary developments in robotics; and quantitative analyses of machine and human tasks, delivery and work schedules, and equipment reliability. What emerged was a new, integrated understanding of how to make a lunar base happen. The overall goal of the concept developed was to maximize return while minimizing cost and risk. The base concept uses solar power. Its primary industry is the production of liquid oxygen for propellant, which it extracts from native lunar regolith. Production supports four lander flights per year, and shuts down during the lunar nighttime while maintenance is performed.
NASA Astrophysics Data System (ADS)
Jia, Zhao-hong; Pei, Ming-li; Leung, Joseph Y.-T.
2017-12-01
In this paper, we investigate the batch-scheduling problem with rejection on parallel machines with non-identical job sizes and arbitrary job-rejected weights. If a job is rejected, the corresponding penalty has to be paid. Our objective is to minimise the makespan of the processed jobs and the total rejection cost of the rejected jobs. Based on the selected multi-objective optimisation approaches, two problems, P1 and P2, are considered. In P1, the two objectives are linearly combined into one single objective. In P2, the two objectives are simultaneously minimised and the Pareto non-dominated solution set is to be found. Based on the ant colony optimisation (ACO), two algorithms, called LACO and PACO, are proposed to address the two problems, respectively. Two different objective-oriented pheromone matrices and heuristic information are designed. Additionally, a local optimisation algorithm is adopted to improve the solution quality. Finally, simulated experiments are conducted, and the comparative results verify the effectiveness and efficiency of the proposed algorithms, especially on large-scale instances.
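As a concrete reading of P1's scalarised objective, here is a minimal evaluation sketch under the usual parallel-batch assumptions (a batch runs as long as its longest job, and each machine processes its batches sequentially); the function name, the weights alpha and beta, and the toy instance are illustrative, not taken from the paper.

```python
# Evaluate one candidate solution of the scalarised problem P1 (sketch).
def p1_objective(machines, rejected, alpha=1.0, beta=1.0):
    """machines: one list per machine, each a list of batches, each batch a
    list of job processing times; rejected: penalties of rejected jobs."""
    makespan = max((sum(max(batch) for batch in m) for m in machines if m),
                   default=0.0)
    return alpha * makespan + beta * sum(rejected)

cost = p1_objective(
    machines=[[[3, 5], [2]], [[4, 4, 1]]],   # 2 machines, 3 batches
    rejected=[6, 2],                          # penalties of rejected jobs
)
```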
Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong
2017-01-01
A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Unlike existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of base kernels are regarded as external parameters of single-hidden-layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results demonstrate that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification. PMID:28629202
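To make the composite-kernel idea concrete, a minimal numpy sketch of a kernel ELM with a weighted sum of two base kernels follows. The kernel weights mu, the kernel parameters, and the regularization constant C stand in for the values QPSO would tune, and all function names and data are invented for illustration.

```python
import numpy as np

def rbf(X, Y, gamma=0.5):                     # Gaussian base kernel
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def poly(X, Y, degree=2, c=1.0):              # polynomial base kernel
    return (X @ Y.T + c) ** degree

def composite(X, Y, mu=(0.6, 0.4)):
    # Weighted composite kernel; mu would be optimized by QPSO in the paper.
    return mu[0] * rbf(X, Y) + mu[1] * poly(X, Y)

def kelm_fit(X, T, C=10.0):
    # KELM output weights: beta = (K + I/C)^-1 T
    K = composite(X, X)
    return np.linalg.solve(K + np.eye(len(X)) / C, T)

def kelm_predict(X_train, beta, X_new):
    return composite(X_new, X_train) @ beta   # row-wise argmax gives the class

X = np.random.rand(100, 8)                    # toy e-nose feature vectors
T = np.eye(4)[np.random.randint(0, 4, 100)]   # one-hot targets, 4 gases
beta = kelm_fit(X, T)
scores = kelm_predict(X, beta, np.random.rand(5, 8))
```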
Static Schedulers for Embedded Real-Time Systems
1989-12-01
Because of the need for efficient scheduling algorithms in large-scale real-time systems, software engineers put a lot of effort into developing ... provide static schedulers for the Embedded Real-Time Systems with a single processor using the Ada programming language. The independent nonpreemptable ... support the Computer Aided Rapid Prototyping for Embedded Real-Time Systems, so that we can determine whether the system, as designed, meets the required
A microcomputer network for control of a continuous mining machine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schiffbauer, W.H.
1993-12-31
This report details a microcomputer-based control and monitoring network that was developed in-house by the U.S. Bureau of Mines and installed on a continuous mining machine. The network consists of microcomputers that are connected together via a single twisted-pair cable. Each microcomputer was developed to provide a particular function in the control process. Machine-mounted microcomputers, in conjunction with the appropriate sensors, provide closed-loop control of the machine, navigation, and environmental monitoring. Off-the-machine microcomputers provide remote control of the machine, sensor status, and a connection to the network so that external computers can access network data and control the continuous mining machine. Because of the network's generic structure, it can be installed on most mining machines.
Single bus star connected reluctance drive and method
Fahimi, Babak; Shamsi, Pourya
2016-05-10
A system and methods for operating a switched reluctance machine includes a controller, an inverter connected to the controller and to the switched reluctance machine, a hysteresis control connected to the controller and to the inverter, and a set of sensors connected to the switched reluctance machine and to the controller; the switched reluctance machine further includes a set of phases, and the controller further comprises a processor and a memory connected to the processor, wherein the processor is programmed to execute a control process and a generation process.
Tuset-Peiro, Pere; Vazquez-Gallego, Francisco; Alonso-Zarate, Jesus; Alonso, Luis; Vilajosana, Xavier
2014-07-24
Data collection is a key scenario for the Internet of Things because it enables gathering sensor data from distributed nodes that use low-power and long-range wireless technologies to communicate in a single-hop approach. In this kind of scenario, the network is composed of one coordinator that covers a particular area and a large number of nodes, typically hundreds or thousands, that transmit data to the coordinator upon request. Considering this scenario, in this paper we experimentally validate the energy consumption of two Medium Access Control (MAC) protocols, Frame Slotted ALOHA (FSA) and Distributed Queuing (DQ). We model both protocols as a state machine and conduct experiments to measure the average energy consumption in each state and the average number of times that a node has to be in each state in order to transmit a data packet to the coordinator. The results show that FSA is more energy efficient than DQ if the number of nodes is known a priori, because the number of slots per frame can be adjusted accordingly. However, in such scenarios the number of nodes cannot be easily anticipated, leading to additional packet collisions and higher energy consumption due to retransmissions. In contrast, DQ does not require knowing the number of nodes in advance because it is able to efficiently construct an ad hoc network schedule for each collection round. This kind of schedule ensures that there are no packet collisions during data transmission, leading to an energy consumption reduction of more than 10% compared to FSA.
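The paper's state-machine energy model reduces to a simple sum, sketched below: the expected energy to deliver one packet is the sum over MAC states of the average energy spent per visit times the average number of visits. The state names and per-visit figures here are invented placeholders, not the measured values.

```python
# Expected per-packet energy from a MAC state-machine model (sketch).
def energy_per_packet(e_per_visit, visits_per_packet):
    """e_per_visit: average energy per visit to each state (e.g. mJ);
    visits_per_packet: average number of visits per delivered packet."""
    return sum(e_per_visit[s] * visits_per_packet[s] for s in e_per_visit)

fsa = energy_per_packet(
    e_per_visit={"rx_request": 0.9, "contend": 0.4, "tx_data": 2.1},
    visits_per_packet={"rx_request": 1.0, "contend": 1.8, "tx_data": 1.2},
)
```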
Revisiting the Central Dogma One Molecule at a Time
Bustamante, Carlos; Cheng, Wei; Meija, Yara
2011-01-01
The faithful relay and timely expression of genetic information depend on specialized molecular machines, many of which function as nucleic acid translocases. The emergence over the last decade of single-molecule fluorescence detection and manipulation techniques with nm and Å resolution, and their application to the study of nucleic acid translocases are painting an increasingly sharp picture of the inner workings of these machines, the dynamics and coordination of their moving parts, their thermodynamic efficiency, and the nature of their transient intermediates. Here we present an overview of the main results arrived at by the application of single-molecule methods to the study of the main machines of the central dogma. PMID:21335233
Multi-core processing and scheduling performance in CMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hernandez, J. M.; Evans, D.; Foulkes, S.
2012-01-01
Commodity hardware is going many-core. We might soon not be able to satisfy the job memory needs per core in the current single-core processing model in High Energy Physics. In addition, an ever increasing number of independent and incoherent jobs running on the same physical hardware without sharing resources might significantly affect processing performance. It will be essential to effectively utilize the multi-core architecture. CMS has incorporated support for multi-core processing in the event processing framework and the workload management system. Multi-core processing jobs share common data in memory, such as the code libraries, detector geometry and conditions data, resulting in much lower memory usage than standard single-core independent jobs. Exploiting this new processing model requires a new model of computing resource allocation, departing from the standard single-core allocation for a job. The experiment job management system needs to have control over a larger quantum of resource, since multi-core aware jobs require the scheduling of multiple cores simultaneously. CMS is exploring the approach of using whole nodes as the unit in the workload management system, where all cores of a node are allocated to a multi-core job. Whole-node scheduling allows for optimization of the data/workflow management (e.g. I/O caching, local merging), but efficient utilization of all scheduled cores is challenging. Dedicated whole-node queues have been set up at all Tier-1 centers for exploring multi-core processing workflows in CMS. We present an evaluation of the performance of scheduling and executing multi-core workflows in whole-node queues compared to the standard single-core processing workflows.
NASA Astrophysics Data System (ADS)
Gerde, Janice R.; Christens-Barry, William A.
2011-08-01
In a project to meet requirements for CBP Laboratory analysis of footwear under the Harmonized Tariff Schedule of the United States (HTSUS), a hybrid metrology system comprising both optical and touch-probe devices has been assembled. A unique requirement must be met: to identify the interface, typically obscured in samples of concern, between the "external surface area upper" (ESAU) and the sole, without physically destroying the sample. The sample outer surface is determined by discrete point cloud coordinates obtained using laser scanner optical measurements. Measurements from the optically inaccessible insole region are obtained using a coordinate measuring machine (CMM). That surface similarly is defined by point cloud data. Mathematically, the individual CMM and scanner data sets are transformed into a single, common reference frame. Custom software then fits a polynomial surface to the insole data and extends it to intersect the mesh fitted to the outer surface point cloud. This line of intersection defines the required ESAU boundary, thus permitting further fractional area calculations to determine the percentage of materials present. With a draft method in place, and first-level method validation underway, we examine the transformation of the two dissimilar data sets into the single, common reference frame. We will also consider the six previously identified potential error factors versus the method process. This paper reports our ongoing work and discusses our findings to date.
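One step of the pipeline, fitting a low-order polynomial surface z = f(x, y) to the CMM insole point cloud by least squares once both data sets share a reference frame, can be sketched as follows; the quadratic basis and the random stand-in point cloud are assumptions for illustration only.

```python
import numpy as np

def design_matrix(x, y):
    # Quadratic surface basis: 1, x, y, x^2, xy, y^2
    return np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2])

def fit_surface(points):
    """Least-squares fit of z = f(x, y) to an (n, 3) point cloud."""
    x, y, z = points.T
    coeffs, *_ = np.linalg.lstsq(design_matrix(x, y), z, rcond=None)
    return coeffs

def eval_surface(coeffs, x, y):
    return design_matrix(np.atleast_1d(x), np.atleast_1d(y)) @ coeffs

insole = np.random.rand(500, 3)        # placeholder for the CMM point cloud
c = fit_surface(insole)
z_hat = eval_surface(c, 0.5, 0.5)      # extend the surface toward the mesh
```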
Astronaut Office Scheduling System Software
NASA Technical Reports Server (NTRS)
Brown, Estevancio
2010-01-01
AOSS is a highly efficient scheduling application that uses various tools to schedule astronauts' weekly appointment information. This program represents an integration of many technologies into a single application to facilitate schedule sharing and management. It is a Windows-based application developed in Visual Basic. Because the NASA standard office automation load environment is Microsoft-based, Visual Basic provides AOSS developers with the ability to interact with Windows collaboration components by accessing object models from applications like Outlook and Excel. This also gives developers the ability to create newly customizable components that perform specialized tasks pertaining to scheduling reporting inside the application. With this capability, AOSS can perform various asynchronous tasks, such as gathering/sending/managing astronauts' schedule information directly to their Outlook calendars at any time.
Scheduling multicore workload on shared multipurpose clusters
NASA Astrophysics Data System (ADS)
Templon, J. A.; Acosta-Silva, C.; Flix Molina, J.; Forti, A. C.; Pérez-Calero Yzquierdo, A.; Starink, R.
2015-12-01
With the advent of workloads containing explicit requests for multiple cores in a single grid job, grid sites faced a new set of challenges in workload scheduling. The most common batch schedulers deployed at HEP computing sites do a poor job at multicore scheduling when using only the native capabilities of those schedulers. This paper describes how efficient multicore scheduling was achieved at the sites the authors represent, by implementing dynamically-sized multicore partitions via a minimalistic addition to the Torque/Maui batch system already in use at those sites. The paper further includes example results from use of the system in production, as well as measurements on the dependence of performance (especially the ramp-up in throughput for multicore jobs) on node size and job size.
Landslide: Systematic Dynamic Race Detection in Kernel Space
2012-05-01
schedule_in_flight ← true; CAUSE_TIMER_INTERRUPT(); end if end function ... Finally, the Landslide scheduler is responsible for managing ... child process vanish() simultaneously.
• double_wait: Tests interactions of multiple waiters on a single child.
• double_thread_fork: Tests for ... conditions using Landslide. We describe them here.
• Too many waiters allowed: Using the double_wait test case, Group 1 found a bug in which more threads
Biomimetic machine vision system.
Harman, William M; Barrett, Steven F; Wright, Cameron H G; Wilcox, Michael
2005-01-01
Real-time application of digital imaging for use in machine vision systems has proven to be prohibitive when used within control systems that employ low-power single processors without compromising the scope of vision or the resolution of captured images. Development of a real-time analog machine vision system is the focus of research taking place at the University of Wyoming. This new vision system is based upon the biological vision system of the common house fly. Development of a single sensor is accomplished, representing a single facet of the fly's eye. This new sensor is then incorporated into an array of sensors capable of detecting objects and tracking motion in 2-D space. This system "preprocesses" incoming image data, resulting in minimal data processing to determine the location of a target object. Due to the nature of the sensors in the array, hyperacuity is achieved, thereby eliminating the resolution issues found in digital vision systems. In this paper, we discuss the biological traits of the fly eye and the specific traits that led to the development of this machine vision system. We also discuss the process of developing an analog sensor that mimics the characteristics of interest in the biological vision system. The paper concludes with a discussion of how an array of these sensors can be applied toward solving real-world machine vision issues.
Roetker, Nicholas S.; Yonker, James A.; Chang, Vicky; Roan, Carol L.; Herd, Pamela; Hauser, Taissa S.; Hauser, Robert M.
2013-01-01
Objectives. We examined depression within a multidimensional framework consisting of genetic, environmental, and sociobehavioral factors and, using machine learning algorithms, explored interactions among these factors that might better explain the etiology of depressive symptoms. Methods. We measured current depressive symptoms using the Center for Epidemiologic Studies Depression Scale (n = 6378 participants in the Wisconsin Longitudinal Study). Genetic factors were 78 single nucleotide polymorphisms (SNPs); environmental factors—13 stressful life events (SLEs), plus a composite proportion of SLEs index; and sociobehavioral factors—18 personality, intelligence, and other health or behavioral measures. We performed traditional SNP associations via logistic regression likelihood ratio testing and explored interactions with support vector machines and Bayesian networks. Results. After correction for multiple testing, we found no significant single genotypic associations with depressive symptoms. Machine learning algorithms showed no evidence of interactions. Naïve Bayes produced the best models in both subsets and included only environmental and sociobehavioral factors. Conclusions. We found no single or interactive associations with genetic factors and depressive symptoms. Various environmental and sociobehavioral factors were more predictive of depressive symptoms, yet their impacts were independent of one another. A genome-wide analysis of genetic alterations using machine learning methodologies will provide a framework for identifying genetic–environmental–sociobehavioral interactions in depressive symptoms. PMID:23927508
Microcomputer network for control of a continuous mining machine. Information circular/1993
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schiffbauer, W.H.
1993-01-01
The paper details a microcomputer-based control and monitoring network that was developed in-house by the U.S. Bureau of Mines and installed on a Joy 14 continuous mining machine. The network consists of microcomputers that are connected together via a single twisted-pair cable. Each microcomputer was developed to provide a particular function in the control process. Machine-mounted microcomputers, in conjunction with the appropriate sensors, provide closed-loop control of the machine, navigation, and environmental monitoring. Off-the-machine microcomputers provide remote control of the machine, sensor status, and a connection to the network so that external computers can access network data and control the continuous mining machine. Although the network was installed on a Joy 14 continuous mining machine, its use extends beyond it. Its generic structure lends itself to installation on most mining machine types.
NASA Astrophysics Data System (ADS)
Shprits, Y.; Zhelavskaya, I. S.; Kellerman, A. C.; Spasojevic, M.; Kondrashov, D. A.; Ghil, M.; Aseev, N.; Castillo Tibocha, A. M.; Cervantes Villa, J. S.; Kletzing, C.; Kurth, W. S.
2017-12-01
Increasing volumes of satellite measurements require the deployment of new tools that can utilize such vast amounts of data. Satellite measurements are usually limited to a single location in space, which complicates data analysis geared towards reproducing the global state of the space environment. In this study we show how measurements can be combined by means of data assimilation and how machine learning can help analyze large amounts of data and develop global models that are trained on single-point measurements. Data assimilation: Manual analysis of satellite measurements is a challenging task, while automated analysis is complicated by the fact that measurements are given at various locations in space, have different instrumental errors, and often vary by orders of magnitude. We show results of the long-term reanalysis of radiation belt measurements along with fully operational real-time predictions using the data-assimilative VERB code. Machine learning: We present an application of machine learning tools to the analysis of NASA Van Allen Probes upper-hybrid frequency measurements. Using the obtained data set we train a new global predictive neural network. The results for the Van Allen Probes-based neural network are compared with historical IMAGE satellite observations. We also show examples of predictions of geomagnetic indices using neural networks. Combination of machine learning and data assimilation: We discuss how data assimilation tools and machine learning tools can be combined so that physics-based insight into the dynamics of a particular system can be paired with empirical knowledge of its non-linear behavior.
Modelling machine ensembles with discrete event dynamical system theory
NASA Technical Reports Server (NTRS)
Hunter, Dan
1990-01-01
Discrete Event Dynamical System (DEDS) theory can be utilized as a control strategy for the future complex machine ensembles that will be required for in-space construction. The control strategy involves orchestrating a set of interactive submachines to perform a set of tasks under a given set of constraints such as minimum time, minimum energy, or maximum machine utilization. Machine ensembles can be hierarchically modeled as a global model that combines the operations of the individual submachines. These submachines are represented in the global model as local models. Local models, from the perspective of DEDS theory, are described by the following: a set of system and transition states, an event alphabet that portrays actions that take a submachine from one state to another, an initial system state, a partial function that maps the current state and event alphabet to the next state, and the time required for the event to occur. Each submachine in the machine ensemble is represented by a unique local model. The global model combines the local models such that the local models can operate in parallel under the additional logistic and physical constraints due to submachine interactions. The global model is constructed from the states, events, event functions, and timing requirements of the local models. Supervisory control can be implemented in the global model by various methods such as task scheduling (open-loop control) or by implementing a feedback DEDS controller (closed-loop control).
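The ingredients of a local model enumerated above map naturally onto a small data structure. The sketch below is a hypothetical rendering with an invented two-state drill submachine, not code from the report.

```python
from dataclasses import dataclass

@dataclass
class LocalModel:
    states: set        # system and transition states
    events: set        # event alphabet
    initial: str       # initial system state
    delta: dict        # partial transition function: (state, event) -> state
    duration: dict     # time required for each event

    def step(self, state, event):
        """Fire one event; returns (next_state, elapsed_time)."""
        nxt = self.delta.get((state, event))
        if nxt is None:
            raise ValueError(f"event {event!r} undefined in state {state!r}")
        return nxt, self.duration[event]

drill = LocalModel(
    states={"idle", "busy"}, events={"start", "done"}, initial="idle",
    delta={("idle", "start"): "busy", ("busy", "done"): "idle"},
    duration={"start": 0.0, "done": 4.5},
)
state, t = drill.step("idle", "start")
```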
Analyzing Double Delays at Newark Liberty International Airport
NASA Technical Reports Server (NTRS)
Evans, Antony D.; Lee, Paul
2016-01-01
When weather or congestion impacts the National Airspace System, multiple different Traffic Management Initiatives can be implemented, sometimes with unintended consequences. One particular inefficiency that is commonly identified is the interaction between Ground Delay Programs (GDPs) and time-based metering of internal departures, or TMA scheduling. Internal departures under TMA scheduling can take large GDP delays, followed by large TMA scheduling delays, because they cannot be easily fitted into the overhead stream. In this paper we examine the causes of these double delays through an analysis of arrival operations at Newark Liberty International Airport (EWR) from June to August 2010. Depending on how the double delay is defined, between 0.3 percent and 0.8 percent of arrivals at EWR experienced double delays in this period. However, this represents between 21 percent and 62 percent of all internal departures subject to both GDP and TMA scheduling. A deep dive into the data reveals two causes of high internal departure scheduling delays: upstream flights making up time between their estimated departure clearance times (EDCTs) and entry into time-based metering, which undermines the sequencing and spacing underlying the flight EDCTs; and high demand on TMA, when TMA airborne metering delays are high. Data mining methods, currently including logistic regression, support vector machines, and K-nearest neighbors, are used to predict the occurrence of double delays and high internal departure scheduling delays with accuracies up to 0.68. So far, the key indicators of double delay and high internal departure scheduling delay are TMA virtual runway queue size and the degree to which estimated runway demand based on TMA estimated times of arrival has changed relative to the estimated runway demand based on EDCTs. However, more analysis is needed to confirm this.
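A minimal sketch of the classifier comparison described above, using the scikit-learn implementations of the three methods named; the two features and the synthetic labels are illustrative stand-ins for the TMA queue-size and EDCT demand-change indicators.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))        # [queue_size, demand_change] stand-ins
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.5, size=1000)) > 1.0

# Cross-validated accuracy for each of the three data mining methods.
for model in (LogisticRegression(), SVC(), KNeighborsClassifier()):
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(type(model).__name__, round(acc, 2))
```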
The relative effects on math performance of single- versus multiple-ratio schedules: a case study
Lovitt, Tom C.; Esveldt, Karen A.
1970-01-01
This series of four experiments sought to assess the comparative effects of multiple- versus single-ratio schedules on a pupil's responding to mathematics materials. Experiment I, which alternated between single- and multiple-ratio contingencies, revealed that during the latter phase the subject responded at a higher rate. Similar findings were revealed by Exp. II. The third experiment, which manipulated frequency of reinforcement rather than multiple ratios, revealed that the alteration had a minimal effect on the subject's response rate. A final experiment, conducted to assess further the effects of multiple ratios, provided data similar to those of Exp. I and II. PMID:16795267
An Improved Hierarchical Genetic Algorithm for Sheet Cutting Scheduling with Process Constraints
Rao, Yunqing; Qi, Dezhong; Li, Jinling
2013-01-01
For the first time, an improved hierarchical genetic algorithm for the sheet cutting problem which involves n cutting patterns for m non-identical parallel machines with process constraints has been proposed in the integrated cutting stock model. The objective of the cutting scheduling problem is minimizing the weighted completion time. A mathematical model for this problem is presented, an improved hierarchical genetic algorithm (ant colony—hierarchical genetic algorithm) is developed for better solutions, and a hierarchical coding method is used based on the characteristics of the problem. Furthermore, to speed up convergence rates and resolve local convergence issues, a kind of adaptive crossover probability and mutation probability is used in this algorithm. The computational results and comparison prove that the presented approach is quite effective for the considered problem. PMID:24489491
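Adaptive crossover and mutation probabilities of the kind mentioned typically shrink the rates for better-than-average individuals so that good schedules are disturbed less. The following sketch shows one common form (in the style of Srinivas and Patnaik), with illustrative constants rather than the paper's.

```python
# Fitness-dependent operator rate (sketch; higher fitness is better).
def adaptive_rate(f, f_avg, f_max, hi=0.9, lo=0.4):
    """Return the operator probability for an individual with fitness f."""
    if f <= f_avg or f_max == f_avg:
        return hi                                   # below average: explore
    return hi - (hi - lo) * (f - f_avg) / (f_max - f_avg)

p_cross = adaptive_rate(f=0.8, f_avg=0.6, f_max=1.0)                 # 0.65
p_mut   = adaptive_rate(f=0.8, f_avg=0.6, f_max=1.0, hi=0.1, lo=0.01)
```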
A Decentralized Scheduling Policy for a Dynamically Reconfigurable Production System
NASA Astrophysics Data System (ADS)
Giordani, Stefano; Lujak, Marin; Martinelli, Francesco
In this paper, the static layout of a traditional multi-machine factory producing a set of distinct goods is integrated with a set of mobile production units - robots. The robots dynamically change their work positions to adjust the production rates of the different typologies of products in response to fluctuations in demand and production costs over a given time horizon. Assuming that the planning time horizon is subdivided into a finite number of time periods, this particularly flexible layout requires the definition and solution of a complex scheduling problem, involving, for each period of the planning time horizon, the determination of the positions of the robots, i.e., their assignment to the respective tasks, in order to minimize production costs given the product demand rates during the planning time horizon.
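Stripped of the coupling between periods, the per-period core of such a problem is a linear assignment of robots to work positions, which can be sketched as follows; the cost matrix is invented for illustration and relocation costs between periods are ignored.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[r][p]: production cost of placing robot r at work position p
cost = np.array([[4.0, 2.5, 6.0],
                 [3.0, 5.0, 1.5],
                 [5.5, 3.5, 2.0]])

robots, positions = linear_sum_assignment(cost)   # Hungarian algorithm
period_cost = cost[robots, positions].sum()       # minimal cost this period
```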
Development of Knitted Warm Garments from Speciality Jute Yarns
NASA Astrophysics Data System (ADS)
Roy, Alok Nath
2013-09-01
Jute-polyester blended core and textured polyester multifilament cover spun-wrapped yarn was produced using existing jute spinning machines. The spun-wrapped yarn so produced shows a reduction in hairiness of up to 86.1%, an improvement in specific work of rupture of up to 9.8% and in specific flexural rigidity of up to 23.6% over ordinary jute-polyester blended yarn. The knitted swatches produced from this spun-wrapped yarn using seven-gauge and nine-gauge needles in both single-jersey and double-jersey knitting machines showed very good dimensional stability even after three washings. The two-ply and three-ply yarns produced from single spun-wrapped yarn can be easily used in knitting machines and also in hand knitting for the production of sweaters. The thermal insulation value of the sweaters produced with jute-polyester blended spun-wrapped yarn is comparable with that of sweaters made from 100% acrylic and 100% wool. However, the hand-knitted sweaters showed a higher thermal insulation value than the machine-knitted sweaters due to looser packing of yarn in the hand-knitted structure as compared to machine knitting.
Reprographics Career Ladder AFSC 703X0.
1981-07-01
Equipment usage listing (fragment): LINEUP AND REGISTER TABLES 39; BINDING MACHINES 36; FLUORESCENT LAMPS 36; WET PROCESS PLATEMAKERS 36; ELECTRIC STAPLERS 32; MANUAL PAPER CUTTERS 32; ... ELECTROSTATIC COPIERS/PLATEMAKERS 78%; PAPER CUTTERS 57%; ELECTRIC STAPLERS 47%; BINDING MACHINES 42%; SINGLE HEAD DRILLS 37%; PADDING RACKS 31%; PLATEMAKING ... HEAD DRILLS 78%; MANUAL PAPER CUTTERS 71%; STATION COLLATORS 51%; BINDING MACHINES 46%; ELECTRIC STAPLERS 46%; PLATEMAKING CAMERAS 44%; SADDLE STITCHERS 42
TELNET under Single-Connection TCP Specification
1976-02-02
An application of eddy current damping effect on single point diamond turning of titanium alloys
NASA Astrophysics Data System (ADS)
Yip, W. S.; To, S.
2017-11-01
Titanium alloys such as Ti6Al4V (TC4) have been widely applied in many industries. They have superior material properties, including an excellent strength-to-weight ratio and corrosion resistance. However, they are regarded as difficult-to-cut materials; serious tool wear, a high level of cutting vibration and low surface integrity are always involved in machining processes, especially in ultra-precision machining (UPM). In this paper, a novel hybrid machining technology using the eddy current damping effect is introduced for the first time in UPM to suppress machining vibration and improve the machining performance of titanium alloys. A magnetic field was superimposed on samples during single point diamond turning (SPDT) by exposing the samples between two permanent magnets. When the titanium alloys were rotated within the magnetic field in SPDT, an eddy current was generated through the stationary magnetic field inside the titanium alloys. The eddy current generated its own magnetic field opposing the external magnetic field, producing a repulsive force that compensated for the machining vibration induced by the turning process. The experimental results showed a remarkable improvement in cutting force variation, a significant reduction in adhesive tool wear and extremely long chip formation in comparison to normal SPDT of titanium alloys, suggesting an enhancement of the machinability of titanium alloys, and demonstrating outstanding machining performance in this first application of the eddy current damping effect in UPM.
DOE Office of Scientific and Technical Information (OSTI.GOV)
You, Yang; Song, Shuaiwen; Fu, Haohuan
2014-08-16
Support Vector Machine (SVM) has been widely used in data-mining and Big Data applications as modern commercial databases start to attach increasing importance to analytic capabilities. In recent years, SVM was adapted to the field of High Performance Computing for power/performance prediction, auto-tuning, and runtime scheduling. However, even at the risk of losing prediction accuracy due to insufficient runtime information, researchers can only afford to apply offline model training to avoid significant runtime training overhead. To address the challenges above, we designed and implemented MICSVM, a highly efficient parallel SVM for x86-based multi-core and many-core architectures, such as the Intel Ivy Bridge CPUs and the Intel Xeon Phi coprocessor (MIC).
NASA Astrophysics Data System (ADS)
Huang, J. D.; Liu, J. J.; Chen, Q. X.; Mao, N.
2017-06-01
Against a background of heat-treatment operations in mould manufacturing, a two-stage flow-shop scheduling problem is described for minimizing makespan with parallel batch-processing machines and re-entrant jobs. The weights and release dates of jobs are non-identical, but job processing times are equal. A mixed-integer linear programming model is developed and tested on small-scale scenarios. Given that the problem is NP-hard, three heuristic construction methods with polynomial complexity are proposed. The worst case of the new constructive heuristic is analysed in detail. A method for computing lower bounds is proposed to test heuristic performance. Heuristic efficiency is tested with sets of scenarios. Compared with the two improved heuristics, the performance of the new constructive heuristic is superior.
Buffered coscheduling for parallel programming and enhanced fault tolerance
Petrini, Fabrizio [Los Alamos, NM; Feng, Wu-chun [Los Alamos, NM
2006-01-31
A computer-implemented method schedules processor jobs on a network of parallel machine processors or distributed system processors. Control information communications generated by each process performed by each processor during a defined time interval are accumulated in buffers, where adjacent time intervals are separated by strobe intervals for a global exchange of control information. A global exchange of the control information communications at the end of each defined time interval is performed during an intervening strobe interval so that each processor is informed by all of the other processors of the number of incoming jobs to be received by each processor in a subsequent time interval. The buffered coscheduling method of this invention also enhances the fault tolerance of a network of parallel machine processors or distributed system processors.
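A toy sequential rendering of the buffering idea in the claim text, where control messages accumulate locally during an interval and the strobe performs the global exchange that tells each processor how many jobs to expect in the next interval; all names and the instance are illustrative, not from the patent.

```python
# One time interval followed by its strobe-interval global exchange (sketch).
def run_interval(procs):
    """procs: {pid: list of (dest_pid, job) messages generated this interval}."""
    buffers = {pid: [] for pid in procs}      # local accumulation in buffers
    for pid, msgs in procs.items():
        buffers[pid].extend(msgs)
    # Strobe interval: global exchange of the buffered control information.
    incoming = {pid: 0 for pid in procs}
    for pid, msgs in buffers.items():
        for dest, _job in msgs:
            incoming[dest] += 1
    return incoming                            # jobs expected next interval

expected = run_interval({0: [(1, "j0")], 1: [(0, "j1"), (0, "j2")], 2: []})
# expected == {0: 2, 1: 1, 2: 0}
```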
Kocolas, Irene; Day, Kristen; King, Marta; Stevenson, Adam; Sheng, Xiaoming; Hobson, Wendy; Bruse, Jaime; Bale, James
2017-03-01
The effects of 2011 Accreditation Council on Graduate Medical Education (ACGME) duty hour standards on intern work hours, patient load, conference attendance, and sleep have not been fully determined. We prospectively compared intern work hours, patient numbers, conference attendance, sleep duration, pattern, and quality in a 2011 ACGME duty hour-compliant shift schedule with a 2003 ACGME duty hour-compliant call schedule at a single pediatric residency program. Interns were assigned to shift or call schedules during 4 alternate months in the winter of 2010-2011. Work hours, patient numbers, conference attendance, sleep duration, pattern, and quality were tracked. Interns worked significantly fewer hours per week on day (73.2 hours) or night (71.6 hours) shifts than during q4 call (79.6 hours; P < .01). During high census months, shift schedule interns cared for significantly more patients/day (8.1/day shift vs 6.2/call; P < .001) and attended significantly fewer conferences than call schedule interns. Night shift interns slept more hours per 24-hour period than call schedule interns (7.2 ± 0.5 vs 6.3 ± 0.9 hours; P < .05) and had more consistent sleep patterns. A shift schedule resulted in reduced intern work hours and improved sleep duration and pattern. Although intern didactic conference attendance declined significantly during high census months, opportunities for experiential learning remained robust with unchanged or increased intern patient numbers.
ISS Habitability Data Collection and Preliminary Findings
NASA Technical Reports Server (NTRS)
Thaxton, Sherry (Principal Investigator); Greene, Maya; Schuh, Susan; Williams, Thomas; Archer, Ronald; Vasser, Katie
2017-01-01
Habitability is the relationship between an individual and their surroundings (i.e. the interplay of the person, machines, environment, and mission). The purpose of this study is to assess habitability and human factors on the ISS to better prepare for future long-duration space flights. Scheduled data collection sessions primarily require the use of iSHORT (iPad app) to capture near real-time habitability feedback and analyze vehicle layout and space utilization.
An overview of the artificial intelligence and expert systems component of RICIS
NASA Technical Reports Server (NTRS)
Feagin, Terry
1987-01-01
Artificial Intelligence and Expert Systems are an important component of the RICIS (Research Institute for Computing and Information Systems) research program. For space applications, problem areas that should be able to make good use of these tools include: resource allocation and management, control and monitoring, environmental control and life support, power distribution, communications scheduling, orbit and attitude maintenance, redundancy management, intelligent man-machine interfaces, and fault detection, isolation and recovery.
NASA Technical Reports Server (NTRS)
1979-01-01
The tests and procedures for the manned remote work station (MRWS) open cherry picker (OCP) development test article (DTA) are described to validate systems requirements and performance specifications. A development test program is outlined to evaluate key design issues and man/machine interfaces when the MRWS OCP is used in a shuttle support role for satellite servicing and in-orbit construction of large structures.
The 1990 Goddard Conference on Space Applications of Artificial Intelligence
NASA Technical Reports Server (NTRS)
Rash, James L. (Editor)
1990-01-01
The papers presented at the 1990 Goddard Conference on Space Applications of Artificial Intelligence are given. The purpose of this annual conference is to provide a forum in which current research and development directed at space applications of artificial intelligence can be presented and discussed. The proceedings fall into the following areas: Planning and Scheduling, Fault Monitoring/Diagnosis, Image Processing and Machine Vision, Robotics/Intelligent Control, Development Methodologies, Information Management, and Knowledge Acquisition.
NASA Astrophysics Data System (ADS)
Jusoh, L. I.; Sulaiman, E.; Bahrim, F. S.; Kumar, R.
2017-08-01
Recent advancements have led to the development of flux switching machines (FSMs) with flux sources within the stator. The advantage of being a single-piece machine with a robust rotor structure makes the FSM an excellent choice for speed applications. There are three categories of FSM, namely the permanent magnet (PM) FSM, the field excitation (FE) FSM, and the hybrid excitation (HE) FSM. The PMFSM and the FEFSM have PMs and field excitation coils (FECs), respectively, as their key flux sources, while, as the name suggests, the HEFSM has a combination of PMs and FECs as its flux sources. The PMFSM is a simple and cheap machine, and it has the ability to control variable flux, which makes it suitable for an electric bicycle. Thus, this paper presents a design comparison between an inner rotor and an outer rotor for a single-phase permanent magnet flux switching machine with 8S-10P, designed specifically for an electric bicycle. The performance of this machine was validated using 2D-FEA. In conclusion, the outer rotor has much higher torque, approximately 54.2% above that of the inner-rotor PMFSM. From the comprehensive analysis of both designs it can be concluded that the output performance is lower than that of the SRM and IPMSM machine designs, but there is the possibility of increasing the design performance by using a deterministic optimization method.
Single wheel testers, single track testers, and instrumented tractors
USDA-ARS?s Scientific Manuscript database
Single wheel testers and single track testers are used for determining tractive performance characteristics of tires and tracks. Instrumented tractors are useful in determining the tractive performance of tractors. These machines are also used for determining soil-tire and soil-track interactions,...
29 CFR 4041.45 - Distress termination notice.
Code of Federal Regulations, 2014 CFR
2014-07-01
..., Single-Employer Plan Termination, with Schedule EA-D, Distress Termination Enrolled Actuary Certification... guaranteed benefits. Unless the enrolled actuary certifies, in the Schedule EA-D filed in accordance with... benefits or benefit liabilities. If the enrolled actuary certifies that the plan is sufficient either for...
29 CFR 4041.45 - Distress termination notice.
Code of Federal Regulations, 2012 CFR
2012-07-01
..., Single-Employer Plan Termination, with Schedule EA-D, Distress Termination Enrolled Actuary Certification... guaranteed benefits. Unless the enrolled actuary certifies, in the Schedule EA-D filed in accordance with... benefits or benefit liabilities. If the enrolled actuary certifies that the plan is sufficient either for...
29 CFR 4041.45 - Distress termination notice.
Code of Federal Regulations, 2011 CFR
2011-07-01
..., Single-Employer Plan Termination, with Schedule EA-D, Distress Termination Enrolled Actuary Certification... guaranteed benefits. Unless the enrolled actuary certifies, in the Schedule EA-D filed in accordance with... benefits or benefit liabilities. If the enrolled actuary certifies that the plan is sufficient either for...
29 CFR 4041.45 - Distress termination notice.
Code of Federal Regulations, 2013 CFR
2013-07-01
..., Single-Employer Plan Termination, with Schedule EA-D, Distress Termination Enrolled Actuary Certification... guaranteed benefits. Unless the enrolled actuary certifies, in the Schedule EA-D filed in accordance with... benefits or benefit liabilities. If the enrolled actuary certifies that the plan is sufficient either for...
29 CFR 4041.45 - Distress termination notice.
Code of Federal Regulations, 2010 CFR
2010-07-01
..., Single-Employer Plan Termination, with Schedule EA-D, Distress Termination Enrolled Actuary Certification... guaranteed benefits. Unless the enrolled actuary certifies, in the Schedule EA-D filed in accordance with... benefits or benefit liabilities. If the enrolled actuary certifies that the plan is sufficient either for...
Lara-Tejero, María; Bewersdorf, Jörg; Galán, Jorge E.
2017-01-01
Type III protein secretion machines have evolved to deliver bacterially encoded effector proteins into eukaryotic cells. Although electron microscopy has provided a detailed view of these machines in isolation or fixed samples, little is known about their organization in live bacteria. Here we report the visualization and characterization of the Salmonella type III secretion machine in live bacteria by 2D and 3D single-molecule switching superresolution microscopy. This approach provided access to transient components of this machine, which previously could not be analyzed. We determined the subcellular distribution of individual machines, the stoichiometry of the different components of this machine in situ, and the spatial distribution of the substrates of this machine before secretion. Furthermore, by visualizing this machine in Salmonella mutants we obtained major insights into the machine’s assembly. This study bridges a major resolution gap in the visualization of this nanomachine and may serve as a paradigm for the examination of other bacterially encoded molecular machines. PMID:28533372
Surface and subsurface cracks characteristics of single crystal SiC wafer in surface machining
NASA Astrophysics Data System (ADS)
Qiusheng, Y.; Senkai, C.; Jisheng, P.
2015-03-01
Different machining processes were used in single crystal SiC wafer machining. SEM was used to observe the surface morphology, and a cross-sectional cleavage microscopy method was used for subsurface crack detection. The surface and subsurface crack characteristics of single crystal SiC wafers in abrasive machining were analysed. The results show that the surface and subsurface crack system of single crystal SiC wafers in abrasive machining includes radial cracks, lateral cracks and median cracks. In the lapping process, material removal is dominated by brittle removal, and many chipping pits were found on the lapped surface. As the particle size becomes smaller, the surface roughness and subsurface crack depth decrease. When the particle size was changed to 1.5 µm, the surface roughness Ra was reduced to 24.0 nm and the maximum subsurface crack depth was 1.2 µm. The efficiency of grinding is higher than that of lapping, and plastic removal can be achieved by changing the process parameters. Material removal was mostly by brittle fracture when grinding with a 325# diamond wheel; plow scratches and chipping pits were found on the ground surface, the surface roughness Ra was 17.7 nm and the maximum subsurface crack depth was 5.8 µm. When grinding with an 8000# diamond wheel, material removal was by plastic flow; plastic scratches were found on the surface, and a smooth surface of roughness Ra 2.5 nm without any subsurface cracks was obtained. Atomic-scale removal was possible in cluster magnetorheological finishing with a diamond abrasive size of 0.5 µm. A super-smooth surface was eventually obtained with a roughness Ra of 0.4 nm without any subsurface cracks.
Comparison between extreme learning machine and wavelet neural networks in data classification
NASA Astrophysics Data System (ADS)
Yahia, Siwar; Said, Salwa; Jemai, Olfa; Zaied, Mourad; Ben Amar, Chokri
2017-03-01
Extreme Learning Machine is a well-known learning algorithm in the field of machine learning. It is a feed-forward neural network with a single hidden layer, and an extremely fast learning algorithm with good generalization performance. In this paper, we aim to compare the Extreme Learning Machine with wavelet neural networks, which are also widely used. We used six benchmark data sets to evaluate each technique: Wisconsin Breast Cancer, Glass Identification, Ionosphere, Pima Indians Diabetes, Wine Recognition and Iris Plant. Experimental results have shown that both extreme learning machines and wavelet neural networks reach good results.
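For readers unfamiliar with the method, a minimal numpy sketch of a single-hidden-layer ELM follows: the input weights and biases are drawn at random and only the output weights are solved for, by least squares, which is what makes training so fast. The layer sizes and toy data are illustrative.

```python
import numpy as np

def elm_fit(X, T, n_hidden=50, rng=np.random.default_rng(0)):
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden-layer outputs
    beta, *_ = np.linalg.lstsq(H, T, rcond=None)  # analytic output weights
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta              # row-wise argmax -> class

X = np.random.rand(200, 4)                        # toy features
T = np.eye(3)[np.random.randint(0, 3, 200)]       # one-hot targets, 3 classes
model = elm_fit(X, T)
scores = elm_predict(model, X)
```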
Single crystal diamond lapping procedure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grayson, R.A.
A facility capable of resharpening quality cutting edges on single crystal diamond cutting tools was needed as the demand for precision machining of special optical surfaces became a common occurrence here at Lawrence Livermore National Laboratory. A specially constructed lapping machine using an air bearing spindle was built to achieve the required edge quality. The basic design for this lap was taken out of a technical report by W.L. Duke and R.T. Lovell of the Oak Ridge Y-12 Plant, Union Carbide Corp. We have also purchased two commercially built lapping machines recommended to us by Mr. Cory A. Knottenbelt, formerly of the R.C.A. Diamond Lapping Facility in Indianapolis, Indiana, now doing state-of-the-art polishing and relapping at LLNL facilities.
Flotation machine and process for removing impurities from coals
Szymocha, K.; Ignasiak, B.; Pawlak, W.; Kulik, C.; Lebowitz, H.E.
1995-12-05
The present invention is directed to a type of flotation machine that combines three separate operations in a single unit. The flotation machine is a hydraulic separator that is capable of reducing the pyrite and other mineral matter content of a coal. When the hydraulic separator is used with a flotation system, the pyrite and certain other mineral particles that may have been entrained by hydrodynamic forces associated with conventional flotation machines and/or by the attachment forces associated with the formation of microagglomerates are washed and separated from the coal. 4 figs.
Flotation machine and process for removing impurities from coals
Szymocha, Kazimierz; Ignasiak, Boleslaw; Pawlak, Wanda; Kulik, Conrad; Lebowitz, Howard E.
1995-01-01
The present invention is directed to a type of flotation machine that combines three separate operations in a single unit. The flotation machine is a hydraulic separator that is capable of reducing the pyrite and other mineral matter content of a coal. When the hydraulic separator is used with a flotation system, the pyrite and certain other mineral particles that may have been entrained by hydrodynamic forces associated with conventional flotation machines and/or by the attachment forces associated with the formation of microagglomerates are washed and separated from the coal.
Flotation machine and process for removing impurities from coals
Szymocha, K.; Ignasiak, B.; Pawlak, W.; Kulik, C.; Lebowitz, H.E.
1997-02-11
The present invention is directed to a type of flotation machine that combines three separate operations in a single unit. The flotation machine is a hydraulic separator that is capable of reducing the pyrite and other mineral matter content of a coal. When the hydraulic separator is used with a flotation system, the pyrite and certain other mineral particles that may have been entrained by hydrodynamic forces associated with conventional flotation machines and/or by the attachment forces associated with the formation of microagglomerates are washed and separated from the coal. 4 figs.
Flotation machine and process for removing impurities from coals
Szymocha, Kazimierz; Ignasiak, Boleslaw; Pawlak, Wanda; Kulik, Conrad; Lebowitz, Howard E.
1997-01-01
The present invention is directed to a type of flotation machine that combines three separate operations in a single unit. The flotation machine is a hydraulic separator that is capable of reducing the pyrite and other mineral matter content of a coal. When the hydraulic separator is used with a flotation system, the pyrite and certain other mineral particles that may have been entrained by hydrodynamic forces associated with conventional flotation machines and/or by the attachment forces associated with the formation of microagglomerates are washed and separated from the coal.
Acoustic emission from single point machining: Part 2, Signal changes with tool wear
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heiple, C.R.; Carpenter, S.H.; Armentrout, D.L.
1989-01-01
Changes in acoustic emission signal characteristics with tool wear were monitored during single point machining of 4340 steel and Ti-6Al-4V heat treated to several strength levels, 6061-T6 aluminum, 304 stainless steel, 17-4PH stainless steel, 410 stainless steel, lead, and teflon. No signal characteristic changed in the same way with tool wear for all materials tested. A single change in a particular AE signal characteristic with tool wear valid for all materials probably does not exist. Nevertheless, changes in various signal characteristics with wear for a given material may be sufficient to be used to monitor tool wear.
Simulation and Experimental Study on Surface Formation Mechanism in Machining of SiCp/Al Composites
NASA Astrophysics Data System (ADS)
Du, Jinguang; Zhang, Haizhen; He, Wenbin; Ma, Jun; Ming, Wuyi; Cao, Yang
2018-03-01
To intuitively reveal the surface formation mechanism in the machining of SiCp/Al composites, in this paper the removal modes of the reinforced particles and the aluminum matrix, and their influence on the surface formation mechanism, were analyzed by single diamond grit cutting simulation and a single diamond grit scratch experiment. Simulation and experiment results show that when the depth of cut is small, the scratched surface of the workpiece is relatively smooth; however, there are irregular pits on the machined surface. When the depth of cut is increased, many obvious laminar structures appear on the scratched surface, and the surface appearance becomes coarser. When the cutting speed is small, the squeezing action of the abrasive grit on the SiC particles plays a dominant role in the extrusion of SiC particles. When the cutting speed is increased, SiC particles are also broken or fractured, but the machined surface becomes smoother. When machining SiCp/Al composites, the SiC particles may be removed in several ways, such as fracture, debonding, breakage, shearing, being pulled in and being pulled out. A reasonably developed micro-cutting finite element simulation model of SiCp/Al composites can be used to analyze the surface formation process and particle removal modes under different machining conditions.
DTS: Building custom, intelligent schedulers
NASA Technical Reports Server (NTRS)
Hansson, Othar; Mayer, Andrew
1994-01-01
DTS is a decision-theoretic scheduler, built on top of a flexible toolkit -- this paper focuses on how the toolkit might be reused in future NASA mission schedulers. The toolkit includes a user-customizable scheduling interface, and a 'Just-For-You' optimization engine. The customizable interface is built on two metaphors: objects and dynamic graphs. Objects help to structure problem specifications and related data, while dynamic graphs simplify the specification of graphical schedule editors (such as Gantt charts). The interface can be used with any 'back-end' scheduler, through dynamically-loaded code, interprocess communication, or a shared database. The 'Just-For-You' optimization engine includes user-specific utility functions, automatically compiled heuristic evaluations, and a postprocessing facility for enforcing scheduling policies. The optimization engine is based on BPS, the Bayesian Problem-Solver (1,2), which introduced a similar approach to solving single-agent and adversarial graph search problems.
30 CFR 18.80 - Approval of machines assembled with certified or explosion-proof components.
Code of Federal Regulations, 2014 CFR
2014-07-01
... approved machines will be accepted in lieu of certified components. (b) A single layout drawing (see Figure... approval. A design of the approval plate will accompany the notification of approval. (Refer to §§ 18.10...
A STUDY OF DISLOCATION STRUCTURE OF SUBBOUNDARIES IN MOLYBDENUM SINGLE CRYSTALS,
(*MOLYBDENUM, *DISLOCATIONS), GRAIN STRUCTURES (METALLURGY), SINGLE CRYSTALS, ZONE MELTING, ELECTRON BEAM MELTING, GRAIN BOUNDARIES, MATHEMATICAL ANALYSIS, ETCHED CRYSTALS, ETCHING, ELECTROEROSIVE MACHINING, CHINA
Experiments on PIM in Support of the Development of IVA Technology for Radiography at AWE
NASA Astrophysics Data System (ADS)
Clough, Stephen G.; Thomas, Kenneth J.; Williamson, Mark C.; Phillips, Martin J.; Smith, Ian D.; Bailey, Vernon L.; Kishi, Hiroshi J.; Maenchen, John E.; Johnson, David L.
2002-12-01
The PIM machine has been designed and constructed at AWE as part of a program to investigate IVA technology for radiographic applications. PIM, as originally constructed, was a prospective single module of a 14 MV, 100 kA, ten-module machine. The design of such a machine is a primary goal of the program, as several are required to provide multi-axis radiography in a new Hydrodynamics Research Facility (HRF). Another goal is to design lower-voltage machines (ranging from 1 to 5 MV) utilizing PIM-style components. The original PIM machine consisted of a single inductive cavity pulsed by a 10 ohm water-dielectric Blumlein pulse forming line (PFL) charged by a Marx generator. These components successfully achieved their design voltages, and data obtained on the prepulse showed it to be worse than expected. This information provided a basis for design work on the 14 MV HRF IVA, carried out by Titan-PSD, resulting in a proposal for a prepulse switch, a prototype of which should be installed on PIM by the end of this year. The original single coaxial switch used to initiate the Blumlein has been replaced by a prototype laser-triggered switching arrangement, also designed by Titan-PSD, which was to be tested prior to its eventual use in the HRF. Despite problems with the laser, which will necessitate further experiments, it was determined that laser triggering with low jitter was occurring. A split oil coax feed has now been used to install a second cavity, in parallel with the first, on the PIM Blumlein. This two-cavity configuration provides a prototype for future radiographic machines operating at up to 3 MV and a test facility for diode research.
NASA Astrophysics Data System (ADS)
Debra, Daniel B.; Hesselink, Lambertus; Binford, Thomas
1990-05-01
There are a number of fields that require, or can use to advantage, very high precision in machining. For example, further development of high-energy lasers and x-ray astronomy depends critically on the manufacture of lightweight reflecting metal optical components. To fabricate these optical components with machine tools, they will be made of metal with a mirror-quality surface finish. By mirror-quality surface finish, it is meant dimensional tolerances on the order of 0.02 microns and a surface roughness of 0.07. These accuracy targets fall in the category of ultra-precision machining. They cannot be achieved by a simple extension of conventional machining processes and techniques; they require single-crystal diamond tools, special attention to vibration isolation, special isolation of the machine metrology, and on-line correction of imperfections in the motion of the machine carriages on their ways.
A comparison of machine learning and Bayesian modelling for molecular serotyping.
Newton, Richard; Wernisch, Lorenz
2017-08-11
Streptococcus pneumoniae is a human pathogen that is a major cause of infant mortality. Identifying the pneumococcal serotype is an important step in monitoring the impact of vaccines used to protect against disease. Genomic microarrays provide an effective method for molecular serotyping. Previously we developed an empirical Bayesian model for the classification of serotypes from a molecular serotyping array. With only a few samples available, a model-driven approach was the only option. In the meantime, several thousand samples have been made available to us, providing an opportunity to investigate serotype classification by machine learning methods, which could complement the Bayesian model. We compare the performance of the original Bayesian model with two machine learning algorithms: Gradient Boosting Machines and Random Forests. We present our results as an example of a generic strategy whereby a preliminary probabilistic model is complemented or replaced by a machine learning classifier once enough data are available. Despite the availability of thousands of serotyping arrays, a problem encountered when applying machine learning methods is the lack of training data containing mixtures of serotypes, owing to the large number of possible combinations; most of the available training data comprises samples with only a single serotype. To overcome this lack of training data, we implemented an iterative analysis, creating artificial training data of serotype mixtures by combining raw data from single-serotype arrays. With the enhanced training set, the machine learning algorithms outperform the original Bayesian model. However, for serotypes currently lacking sufficient training data, the best-performing implementation was a combination of the results of the Bayesian model and the Gradient Boosting Machine. As well as being an effective method for classifying biological data, machine learning can also be used as an efficient method for revealing subtle biological insights, which we illustrate with an example.
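The iterative augmentation step described above lends itself to a compact illustration. The sketch below (Python, with synthetic data; the array dimensions, the equal-weight mixing rule, and the one-classifier-per-serotype layout are illustrative assumptions, not the authors' pipeline) combines raw single-serotype profiles into artificial two-serotype mixtures and trains a gradient-boosted classifier per serotype:

```python
# A minimal sketch (not the authors' pipeline) of synthesising mixture
# training data from single-serotype array profiles, as described above.
import numpy as np
from itertools import combinations
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.default_rng(0)
n_serotypes, n_probes, n_per_type = 5, 40, 30

# Simulated raw arrays: each serotype lights up its own subset of probes.
profiles = {s: rng.normal(0, 0.1, (n_per_type, n_probes))
            + np.eye(n_serotypes).repeat(n_probes // n_serotypes, axis=1)[s]
            for s in range(n_serotypes)}

X, Y = [], []
for s, arr in profiles.items():                   # single-serotype samples
    X.append(arr)
    Y.append(np.eye(n_serotypes)[[s]].repeat(n_per_type, axis=0))
for a, b in combinations(range(n_serotypes), 2):  # artificial mixtures
    mix = 0.5 * (profiles[a] + profiles[b])       # combine raw signals
    lab = np.zeros((n_per_type, n_serotypes))
    lab[:, [a, b]] = 1
    X.append(mix)
    Y.append(lab)
X, Y = np.vstack(X), np.vstack(Y)

# One gradient-boosted classifier per serotype (presence/absence).
clf = OneVsRestClassifier(GradientBoostingClassifier()).fit(X, Y)
print(clf.predict(X[:3]))
```

Averaging raw intensities is only one plausible mixing rule; hybridization signals may combine non-linearly in practice, which is presumably why the authors built the mixtures from raw data rather than from derived features.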
NASA Astrophysics Data System (ADS)
Mazlan, Mohamed Mubin Aizat; Sulaiman, Erwan; Husin, Zhafir Aizat; Othman, Syed Muhammad Naufal Syed; Khan, Faisal
2015-05-01
In hybrid excitation machines (HEMs), there are two main flux sources: the permanent magnet (PM) and the field excitation coil (FEC). These HEMs have better features when compared with the interior permanent magnet synchronous machines (IPMSM) used in conventional hybrid electric vehicles (HEVs). Since all flux sources, including the PM, FEC and armature coils, are located on the stator core, the rotor becomes a single-piece structure similar to that of a switched reluctance machine (SRM). The combined flux generated by the PM and FEC provides the additional excitation flux required to produce much higher motor torque. In addition, the variable DC FEC can control the flux capabilities of the motor, so the machine can be applied to high-speed motor drive systems. In this paper, single-phase 8S-4P outer-rotor and inner-rotor hybrid excitation flux switching machines (HEFSM) are compared. Initially, design procedures for the HEFSM, including parts drawing, materials and conditions setting, and properties setting, are explained. A flux comparison analysis is performed to investigate the flux capabilities at various current densities. Then the flux linkages of the PM with DC FEC at various DC FEC current densities are examined. Finally, torque performance is analyzed at various armature and FEC current densities for both designs. As a result, the outer-rotor HEFSM has a higher PM-with-DC-FEC flux linkage and an average torque approximately 10% higher than that of the inner-rotor HEFSM.
Phase I Design for Completely or Partially Ordered Treatment Schedules
Wages, Nolan A.; O’Quigley, John; Conaway, Mark R.
2013-01-01
The majority of methods for the design of Phase I trials in oncology are based upon a single course of therapy, yet in actual practice there may be more than one treatment schedule for any given dose. Therefore, the probability of observing a dose-limiting toxicity (DLT) may depend upon both the total dose given and the frequency with which it is administered. The objective of the study then becomes finding an acceptable combination of both dose and schedule. Past literature on designing these trials has assumed that toxicity increases monotonically with both dose and schedule. In this article, we relax this assumption for schedules and present a dose-schedule finding design that can be generalized to situations in which we know the ordering between all schedules and those in which we do not. We present simulation results that compare our method to other suggested dose-schedule finding methodology. PMID:24114957
Blanco, Mario R.; Martin, Joshua S.; Kahlscheuer, Matthew L.; Krishnan, Ramya; Abelson, John; Laederach, Alain; Walter, Nils G.
2016-01-01
The spliceosome is the dynamic RNA-protein machine responsible for faithfully splicing introns from precursor messenger RNAs (pre-mRNAs). Many of the dynamic processes required for the proper assembly, catalytic activation, and disassembly of the spliceosome as it acts on its pre-mRNA substrate remain poorly understood, a challenge that persists for many biomolecular machines. Here, we developed a fluorescence-based Single Molecule Cluster Analysis (SiMCAn) tool to dissect the manifold conformational dynamics of a pre-mRNA through the splicing cycle. By clustering common dynamic behaviors derived from selectively blocked splicing reactions, SiMCAn was able to identify signature conformations and dynamic behaviors of multiple ATP-dependent intermediates. In addition, it identified a conformation adopted late in splicing by a 3′ splice site mutant, invoking a mechanism for substrate proofreading. SiMCAn presents a novel framework for interpreting complex single molecule behaviors that should prove widely useful for the comprehensive analysis of a plethora of dynamic cellular machines. PMID:26414013
Evolution of stacking fault tetrahedral and work hardening effect in copper single crystals
NASA Astrophysics Data System (ADS)
Liu, Hai Tao; Zhu, Xiu Fu; Sun, Ya Zhou; Xie, Wen Kun
2017-11-01
Stacking fault tetrahedra (SFT), generated in the machining of copper single crystal as one type of subsurface defect, have a significant influence on the performance of the workpiece. In this study, molecular dynamics (MD) simulation is used to investigate the evolution of stacking fault tetrahedra in nano-cutting of copper single crystal. The results show that an SFT nucleates at the intersection of differently oriented stacking fault (SF) planes and evolves from a preform containing only incomplete surfaces into a solid defect. The evolution of the SFT involves several stress fluctuations before its formation is complete. Nano-indentation simulation is then performed on the workpiece machined in the nano-cutting simulation, through which the interaction between the SFT and later-formed dislocations in the subsurface is studied. In addition, force-depth curves obtained from nano-indentation on pristine and machined workpieces are compared to analyze the mechanical properties. Through the simulations of nano-cutting and nano-indentation, it is verified that SFT is one cause of the work hardening effect.
Single-molecule protein unfolding and translocation by an ATP-fueled proteolytic machine
Aubin-Tam, Marie-Eve; Olivares, Adrian O.; Sauer, Robert T.; Baker, Tania A.; Lang, Matthew J.
2011-01-01
All cells employ ATP-powered proteases for protein-quality control and regulation. In the ClpXP protease, ClpX is a AAA+ machine that recognizes specific protein substrates, unfolds these molecules, and then translocates the denatured polypeptide through a central pore and into ClpP for degradation. Here, we use optical-trapping nanometry to probe the mechanics of enzymatic unfolding and translocation of single molecules of a multidomain substrate. Our experiments demonstrate the capacity of ClpXP and ClpX to perform mechanical work under load, reveal very fast and highly cooperative unfolding of individual substrate domains, suggest a translocation step size of 5–8 amino acids, and support a power-stroke model of denaturation in which successful enzyme-mediated unfolding of stable domains requires coincidence between mechanical pulling by the enzyme and a transient stochastic reduction in protein stability. We anticipate that single-molecule studies of the mechanical properties of other AAA+ proteolytic machines will reveal many shared features with ClpXP. PMID:21496645
Accommodating to Restrictions on Residents' Working Hours.
ERIC Educational Resources Information Center
Foster, Henry W., Jr.; Seltzer, Vicki L.
1991-01-01
In response to New York State legislation limiting house staff working hours, a survey of obstetrics and gynecology resident programs (n=26) was conducted. Results were used to construct a prototype call schedule and a hypothetical monthly schedule indicating how a single resident would function without violating any state regulations. (MSE)
76 FR 39039 - Establishment of a New Drug Code for Marihuana Extract
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-05
... that have been derived from any plant of the genus cannabis and which contain cannabinols and... Nations Conventions on international drug control treat extracts from the cannabis plant differently than.... Cannabis and cannabis resin are listed in both schedule IV and schedule I of the Single Convention...
The Effects of Interval Duration on Temporal Tracking and Alternation Learning
ERIC Educational Resources Information Center
Ludvig, Elliot A.; Staddon, John E. R.
2005-01-01
On cyclic-interval reinforcement schedules, animals typically show a postreinforcement pause that is a function of the immediately preceding time interval ("temporal tracking"). Animals, however, do not track single-alternation schedules--when two different intervals are presented in strict alternation on successive trials. In this experiment,…
Producing Production Level Tooling in Prototype Timing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mc Hugh, Kevin Matthew; Knirsch, J.
A new rapid solidification process machine will be able to produce eight-inch diameter by six-inch thick finished cavities at the rate of one per hour - a rate that will change the tooling industry dramatically. Global Metal Technologies, Inc. (GMTI) (Solon, OH) has signed an exclusive license with the Idaho National Engineering and Environmental Laboratory (INEEL) (Idaho Falls, ID) for the development and commercialization of the rapid solidification process (RSP tooling). The first production machine is scheduled for delivery in July 2001. The RSP tooling process is a method of producing production-level tooling in prototype timing. The process' inventor, Kevin McHugh, describes it as a rapid solidification method, which differentiates it from standard spray forming methods. RSP itself is relatively straightforward. Molten metal is sprayed against the ceramic pattern, replicating the pattern's contours, surface texture and details. After spraying, the molten tool steel is cooled at room temperature and separated from the pattern. The irregular periphery of the freshly sprayed insert is squared off, either by machining or, in the case of harder tool steels, by wire EDM.
On Why It Is Impossible to Prove that the BDX930 Dispatcher Implements a Time-sharing System
NASA Technical Reports Server (NTRS)
Boyer, R. S.; Moore, J. S.
1983-01-01
The Software Implemented Fault Tolerance (SIFT) system is written in PASCAL except for about a page of machine code. The SIFT system implements a small time-sharing system in which PASCAL programs for separate application tasks are executed according to a schedule with real-time constraints. The PASCAL language has no provision for handling the notion of an interrupt, such as the BDX930 clock interrupt. The PASCAL language also lacks the notion of running a PASCAL subroutine for a given amount of time, suspending it, saving away the suspension, and later activating the suspension. Machine code was used to overcome these inadequacies of PASCAL. Code which handles clock interrupts and suspends processes is called a dispatcher. The time-sharing/virtual-machine idea is completely destroyed by the reconfiguration task: after termination of the reconfiguration task, the tasks run by the dispatcher have no relation to those run before reconfiguration. It is therefore impossible to view the dispatcher as a time-sharing system implementing virtual BDX930s running concurrently when one process can wipe out the others.
Testing and Validation of Timing Properties for High Speed Digital Cameras - A Best Practices Guide
2016-07-27
a five year plan to begin replacing its inventory of antiquated film and video systems with more modern and capable digital systems. As evidenced in...installation, testing, and documentation of DITCS. If shop support can be accelerated due to shifting mission priorities, this schedule can likely...assistance from the machine shop, welding shop, paint shop, and carpenter shop. Testing the DITCS system will require a KTM with digital cameras and
Ruiz-Gonzalez, Ruben; Gomez-Gil, Jaime; Gomez-Gil, Francisco Javier; Martínez-Martínez, Víctor
2014-01-01
The goal of this article is to assess the feasibility of estimating the state of various rotating components in agro-industrial machinery by employing just one vibration signal acquired from a single point on the machine chassis. To do so, a Support Vector Machine (SVM)-based system is employed. Experimental tests evaluated this system by acquiring vibration data from a single point of an agricultural harvester, while varying several of its working conditions. The whole process included two major steps. Initially, the vibration data were preprocessed through twelve feature extraction algorithms, after which the Exhaustive Search method selected the most suitable features. Secondly, the SVM-based system accuracy was evaluated by using Leave-One-Out cross-validation, with the selected features as the input data. The results of this study provide evidence that (i) accurate estimation of the status of various rotating components in agro-industrial machinery is possible by processing the vibration signal acquired from a single point on the machine structure; (ii) the vibration signal can be acquired with a uniaxial accelerometer, the orientation of which does not significantly affect the classification accuracy; and, (iii) when using an SVM classifier, an 85% mean cross-validation accuracy can be reached, which only requires a maximum of seven features as its input, and no significant improvements are noted between the use of either nonlinear or linear kernels. PMID:25372618
Ruiz-Gonzalez, Ruben; Gomez-Gil, Jaime; Gomez-Gil, Francisco Javier; Martínez-Martínez, Víctor
2014-11-03
The goal of this article is to assess the feasibility of estimating the state of various rotating components in agro-industrial machinery by employing just one vibration signal acquired from a single point on the machine chassis. To do so, a Support Vector Machine (SVM)-based system is employed. Experimental tests evaluated this system by acquiring vibration data from a single point of an agricultural harvester, while varying several of its working conditions. The whole process included two major steps. Initially, the vibration data were preprocessed through twelve feature extraction algorithms, after which the Exhaustive Search method selected the most suitable features. Secondly, the SVM-based system accuracy was evaluated by using Leave-One-Out cross-validation, with the selected features as the input data. The results of this study provide evidence that (i) accurate estimation of the status of various rotating components in agro-industrial machinery is possible by processing the vibration signal acquired from a single point on the machine structure; (ii) the vibration signal can be acquired with a uniaxial accelerometer, the orientation of which does not significantly affect the classification accuracy; and, (iii) when using an SVM classifier, an 85% mean cross-validation accuracy can be reached, which only requires a maximum of seven features as its input, and no significant improvements are noted between the use of either nonlinear or linear kernels.
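For readers who want the shape of this pipeline in code, the following is a minimal sketch (Python with scikit-learn; the synthetic signals, the reduced feature set, and the two-state labels are invented stand-ins for the harvester recordings and the twelve extraction algorithms used in the study):

```python
# Minimal sketch of the described pipeline: extract features from single-
# point vibration windows, then score a linear SVM with leave-one-out CV.
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(1)

def features(sig):
    """A few generic time/frequency features of one vibration window."""
    spec = np.abs(np.fft.rfft(sig))
    return [sig.std(), kurtosis(sig), skew(sig),
            np.sqrt((sig ** 2).mean()), spec.argmax(), spec.max()]

# Fake windows for two machine states (e.g., normal vs. faulty rotation).
signals = [np.sin(0.2 * np.arange(256)) + rng.normal(0, a, 256)
           for a in (0.3,) * 20 + (1.0,) * 20]
X = np.array([features(s) for s in signals])
y = np.array([0] * 20 + [1] * 20)

# Linear kernel: the study found no significant gain from nonlinear ones.
acc = cross_val_score(SVC(kernel="linear"), X, y, cv=LeaveOneOut())
print(f"LOO accuracy: {acc.mean():.2f}")
```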
Optimizing Mars Airplane Trajectory with the Application Navigation System
NASA Technical Reports Server (NTRS)
Frumkin, Michael; Riley, Derek
2004-01-01
Planning complex missions requires a number of programs to be executed in concert. The Application Navigation System (ANS), developed in the NAS Division, can execute many interdependent programs in a distributed environment. We show that the ANS simplifies user effort and reduces the time needed to optimize the trajectory of a Martian airplane. We use a software package, Cart3D, to evaluate trajectories and a shortest path algorithm to determine the optimal trajectory. ANS employs the GridScape to represent the dynamic state of the available computer resources. ANS then uses a scheduler to dynamically assign ready tasks to machine resources, and the GridScape for tracking available resources and forecasting completion time of running tasks. We demonstrate the system's capability to schedule and run the trajectory optimization application with efficiency exceeding 60% on 64 processors.
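The trajectory search itself reduces to a shortest-path computation over a graph of candidate waypoints. Below is a hedged sketch of that step (the tiny graph, the costs, and the dijkstra helper are illustrative; in the actual system the per-segment costs would come from the distributed Cart3D evaluations):

```python
# Illustrative shortest-path step: waypoints form a graph and Dijkstra's
# algorithm picks the minimum-cost trajectory through it.
import heapq

def dijkstra(graph, start, goal):
    """graph: dict node -> [(neighbor, cost), ...]; returns (path, cost)."""
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, c in graph.get(u, []):
            nd = d + c
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# Tiny waypoint graph; costs would come from per-segment flow solutions.
graph = {"A": [("B", 2.0), ("C", 5.0)],
         "B": [("C", 1.0), ("D", 4.0)],
         "C": [("D", 1.5)]}
print(dijkstra(graph, "A", "D"))  # (['A', 'B', 'C', 'D'], 4.5)
```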
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Yinan; Shi Handuo; Xiong Zhaoxi
We present a unified universal quantum cloning machine, which combines several different existing universal cloning machines together, including the asymmetric case. In this unified framework, the identical pure states are projected equally into each copy initially constituted by the input and one half of the maximally entangled states. We show explicitly that the output states of those universal cloning machines are the same. One important feature of this unified cloning machine is that the cloning process is always the symmetric projection, which dramatically reduces the difficulty of implementation. It is also found that this unified cloning machine can be directly modified to the general asymmetric case. Besides the global fidelity and the single-copy fidelity, we also present all possible arbitrary-copy fidelities.
The Smart Aerial Release Machine, a Universal System for Applying the Sterile Insect Technique
Mubarqui, Ruben Leal; Perez, Rene Cano; Kladt, Roberto Angulo; Lopez, Jose Luis Zavala; Parker, Andrew; Seck, Momar Talla; Sall, Baba; Bouyer, Jérémy
2014-01-01
Background Beyond insecticides, alternative methods to control insect pests for agriculture and vectors of diseases are needed. Management strategies involving the mass-release of living control agents have been developed, including genetic control with sterile insects and biological control with parasitoids, for which aerial release of insects is often required. Aerial release in genetic control programmes often involves the use of chilled sterile insects, which can improve dispersal, survival and competitiveness of sterile males. Currently available means of aerially releasing chilled fruit flies are however insufficiently precise to ensure homogeneous distribution at low release rates and no device is available for tsetse. Methodology/Principal Findings Here we present the smart aerial release machine, a new design by the Mubarqui Company, based on the use of vibrating conveyors. The machine is controlled through Bluetooth by a tablet with Android Operating System including a completely automatic guidance and navigation system (MaxNav software). The tablet is also connected to an online relational database facilitating the preparation of flight schedules and automatic storage of flight reports. The new machine was compared with a conveyor release machine in Mexico using two fruit flies species (Anastrepha ludens and Ceratitis capitata) and we obtained better dispersal homogeneity (% of positive traps, p<0.001) for both species and better recapture rates for Anastrepha ludens (p<0.001), especially at low release densities (<1500 per ha). We also demonstrated that the machine can replace paper boxes for aerial release of tsetse in Senegal. Conclusions/Significance This technology limits damages to insects and allows a large range of release rates from 10 flies/km2 for tsetse flies up to 600 000 flies/km2 for fruit flies. The potential of this machine to release other species like mosquitoes is discussed. Plans and operating of the machine are provided to allow its use worldwide. PMID:25036274
The smart aerial release machine, a universal system for applying the sterile insect technique.
Leal Mubarqui, Ruben; Perez, Rene Cano; Kladt, Roberto Angulo; Lopez, Jose Luis Zavala; Parker, Andrew; Seck, Momar Talla; Sall, Baba; Bouyer, Jérémy
2014-01-01
Beyond insecticides, alternative methods to control insect pests for agriculture and vectors of diseases are needed. Management strategies involving the mass-release of living control agents have been developed, including genetic control with sterile insects and biological control with parasitoids, for which aerial release of insects is often required. Aerial release in genetic control programmes often involves the use of chilled sterile insects, which can improve dispersal, survival and competitiveness of sterile males. Currently available means of aerially releasing chilled fruit flies are however insufficiently precise to ensure homogeneous distribution at low release rates and no device is available for tsetse. Here we present the smart aerial release machine, a new design by the Mubarqui Company, based on the use of vibrating conveyors. The machine is controlled through Bluetooth by a tablet with Android Operating System including a completely automatic guidance and navigation system (MaxNav software). The tablet is also connected to an online relational database facilitating the preparation of flight schedules and automatic storage of flight reports. The new machine was compared with a conveyor release machine in Mexico using two fruit flies species (Anastrepha ludens and Ceratitis capitata) and we obtained better dispersal homogeneity (% of positive traps, p<0.001) for both species and better recapture rates for Anastrepha ludens (p<0.001), especially at low release densities (<1500 per ha). We also demonstrated that the machine can replace paper boxes for aerial release of tsetse in Senegal. This technology limits damages to insects and allows a large range of release rates from 10 flies/km2 for tsetse flies up to 600,000 flies/km2 for fruit flies. The potential of this machine to release other species like mosquitoes is discussed. Plans and operating of the machine are provided to allow its use worldwide.
The ergonomics of vertical turret lathe operation.
Pratt, F M; Corlett, E N
1970-12-01
A study of the work load of 14 vertical turret lathe operators engaged on different work tasks in two factories is reported. For eight of these workers, continuous heart rate recordings were made throughout the day. It was shown that in four cases improved technology was unlikely to lead to higher output, and that certain aspects of posture and equipment manipulation were major contributors to the limitations on increased output. The role of the work-rest schedule in increasing work loads was also demonstrated. Improvements in technology, together with methods that reduce certain work loads so that heavy work can be done in shorter periods followed by light work or rest, are given as means to modify and improve the output of these machines. Finally, the direction for the development of a predictive model for man-machine matching is introduced.
NASA Astrophysics Data System (ADS)
Li, Guoliang; Xing, Lining; Chen, Yingwu
2017-11-01
The autonomy of self-scheduling on Earth observation satellites and the increasing scale of satellite networks have attracted much attention from researchers over the last decades. In reality, the limited onboard computational resource presents a challenge for online scheduling algorithms. This study considers the online scheduling problem for a single autonomous Earth observation satellite within a satellite network environment, especially addressing urgent tasks that arrive stochastically during the scheduling horizon. We describe the problem and propose a hybrid online scheduling mechanism with revision and progressive techniques to solve it. The mechanism includes two decision policies: a when-to-schedule policy combining periodic scheduling with event-driven rescheduling triggered by a critical cumulative number of urgent arrivals, and a how-to-schedule policy combining progressive and revision approaches to accommodate two categories of task, normal tasks and urgent tasks. We accordingly developed two heuristic (re)scheduling algorithms and compared them with other generally used techniques. Computational experiments indicated that the percentage of urgent tasks accepted into the schedule under the proposed mechanism is much higher than under a purely periodic scheduling mechanism, and that the specific performance depends strongly on several mechanism-relevant and task-relevant factors. For the online scheduling, the modified weighted shortest imaging time first and dynamic profit system benefit heuristics outperformed the others on total profit and on the percentage of successfully scheduled urgent tasks.
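A minimal sketch may make the when-to-schedule policy concrete. In the toy loop below (Python), the names PERIOD and CRITICAL_COUNT are illustrative assumptions, not the paper's parameter values; rescheduling fires either periodically or as soon as the cumulative count of pending urgent tasks crosses the critical threshold:

```python
# Sketch of a hybrid when-to-schedule policy: periodic rescheduling plus
# event-driven rescheduling on a critical cumulative number of urgent tasks.
import random

PERIOD = 100          # periodic rescheduling interval (time units), assumed
CRITICAL_COUNT = 3    # critical cumulative number of urgent arrivals, assumed

def should_reschedule(now, last_schedule_time, pending_urgent):
    periodic = now - last_schedule_time >= PERIOD
    event_driven = len(pending_urgent) >= CRITICAL_COUNT
    return periodic or event_driven

# Toy event loop: urgent tasks arrive stochastically; normal tasks ride the
# periodic pass, while urgent ones may force an early rescheduling pass.
random.seed(0)
pending, last = [], 0
for t in range(0, 500, 10):
    if random.random() < 0.15:            # stochastic urgent arrival
        pending.append(f"urgent@{t}")
    if should_reschedule(t, last, pending):
        print(f"t={t}: reschedule, inserting {len(pending)} urgent tasks")
        pending, last = [], t
```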
Olaya-Castro, Alexandra; Johnson, Neil F; Quiroga, Luis
2005-03-25
We propose a physically realizable machine which can either generate multiparticle W-like states, or implement high-fidelity 1-->M (M=1,2,...infinity) anticloning of an arbitrary qubit state, in a single step. This universal machine acts as a catalyst in that it is unchanged after either procedure, effectively resetting itself for its next operation. It possesses an inherent immunity to decoherence. Most importantly in terms of practical multiparty quantum communication, the machine's robustness in the presence of decoherence actually increases as the number of qubits M increases.
The LET Procedure for Prosthetic Myocontrol: Towards Multi-DOF Control Using Single-DOF Activations.
Nowak, Markus; Castellini, Claudio
2016-01-01
Simultaneous and proportional myocontrol of dexterous hand prostheses remains, to a large extent, an open problem. With the advent of commercially and clinically available multi-fingered hand prostheses, there are now more independent degrees of freedom (DOFs) in prostheses than can be effectively controlled using surface electromyography (sEMG), the current standard human-machine interface for hand amputees. In particular, it is uncertain whether several DOFs can be controlled simultaneously and proportionally by exclusively calibrating the intended activation of single DOFs. The problem is currently solved by training on all required combinations; however, as the number of available DOFs grows, this approach becomes overly long and poses a high cognitive burden on the subject. In this paper we present a novel approach to overcome this problem. Multi-DOF activations are artificially modelled from single-DOF ones using a simple linear combination of sEMG signals, which are then added to the training set. This procedure, which we named LET (Linearly Enhanced Training), provides an augmented data set to any machine-learning-based intent detection system. In two experiments involving intact subjects, one offline and one online, we trained a standard machine learning approach using the full data set containing single- and multi-DOF activations, as well as using the LET-augmented data set, in order to evaluate the performance of the LET procedure. The results indicate that the machine trained on the latter data set obtains worse results in the offline experiment compared to the full data set. However, the online implementation enables the user to perform multi-DOF tasks with almost the same precision as single-DOF tasks, without the need to explicitly train multi-DOF activations. Moreover, the parameters involved in the system are statistically uniform across subjects.
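The core of LET, modelling multi-DOF activations as linear combinations of single-DOF sEMG samples, can be sketched compactly. The Python fragment below is an illustrative reconstruction under stated assumptions: synthetic feature vectors, equal-weight pairwise combinations, and a ridge regressor standing in for the authors' learner:

```python
# Sketch of LET-style augmentation: synthesize multi-DOF training samples
# as sums of recorded single-DOF sEMG feature vectors, then fit a regressor.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
n_ch, n_dof, n_rep = 8, 3, 50          # channels, DOFs, repetitions (assumed)

# Recorded single-DOF activations: (samples, channels) per DOF.
single = {d: rng.normal(loc=np.eye(n_dof).repeat(n_ch // n_dof + 1, axis=1)[d, :n_ch],
                        scale=0.05, size=(n_rep, n_ch))
          for d in range(n_dof)}

X = np.vstack([single[d] for d in range(n_dof)])
Y = np.vstack([np.tile(np.eye(n_dof)[d], (n_rep, 1)) for d in range(n_dof)])

# LET augmentation: sum pairs of single-DOF samples to mimic 2-DOF actions.
for a in range(n_dof):
    for b in range(a + 1, n_dof):
        X = np.vstack([X, single[a] + single[b]])
        Y = np.vstack([Y, np.tile(np.eye(n_dof)[a] + np.eye(n_dof)[b], (n_rep, 1))])

model = Ridge(alpha=1.0).fit(X, Y)     # simultaneous, proportional output
print(model.predict((single[0] + single[2])[:1]).round(2))  # ~[1, 0, 1]
```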
5 CFR 532.279 - Special wage schedules for printing positions.
Code of Federal Regulations, 2010 CFR
2010-01-01
...-Single Color) 5 Platemaker (Single Color) 5 Film Assembler-Stripper (Partial and Composite Flats) 7... Operator (35-45 and Larger) 10 Offset Photographer (Halftone) 10 Negative Engraver 10 Bookbinder 10...
Ji, Shijun; Sun, Changrui; Zhao, Ji; Liang, Fusheng
2015-01-01
The aim of this paper is to compare the mechanical properties and machinability of Polyetheretherketone (PEEK) and 30 wt% carbon-fiber-reinforced Polyetheretherketone (PEEK CF 30). The method of nano-indentation is used to investigate the microscopic mechanical properties. The evolution of load with displacement, Young’s modulus curves and hardness curves are analyzed. The results illustrate that the load-displacement curves of PEEK show better uniformity, and that the Young’s modulus and hardness of PEEK both vary less over the experimental depth. The machinability of PEEK and PEEK CF 30 is also compared by means of single-point diamond turning (SPDT), and the peak-to-valley value (PV) and surface roughness (Ra) are obtained to evaluate the machinability of the materials after machining. The machining results show that PEEK has smaller PV and Ra, which means PEEK has superior machinability. PMID:28793428
Ji, Shijun; Sun, Changrui; Zhao, Ji; Liang, Fusheng
2015-07-07
The aim of this paper is to compare the mechanical properties and machinability of Polyetheretherketone (PEEK) and 30 wt% carbon-fiber-reinforced Polyetheretherketone (PEEK CF 30). The method of nano-indentation is used to investigate the microscopic mechanical properties. The evolution of load with displacement, Young's modulus curves and hardness curves are analyzed. The results illustrate that the load-displacement curves of PEEK show better uniformity, and that the Young's modulus and hardness of PEEK both vary less over the experimental depth. The machinability of PEEK and PEEK CF 30 is also compared by means of single-point diamond turning (SPDT), and the peak-to-valley value (PV) and surface roughness (Ra) are obtained to evaluate the machinability of the materials after machining. The machining results show that PEEK has smaller PV and Ra, which means PEEK has superior machinability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Qinghui; Chan, Maria F.; Burman, Chandra
2013-12-15
Purpose: Setting a proper margin is crucial not only for delivering the required radiation dose to a target volume, but also for reducing unnecessary radiation to the adjacent organs at risk. This study investigated the independent one-dimensional symmetric and asymmetric margins between the clinical target volume (CTV) and the planning target volume (PTV) for linac-based single-fraction frameless stereotactic radiosurgery (SRS). Methods: The authors assumed a Dirac delta function for the systematic error of a specific machine and a Gaussian function for the residual setup errors. Margin formulas were then derived in detail to arrive at a suitable CTV-to-PTV margin for single-fraction frameless SRS. Such a margin ensured that the CTV would receive the prescribed dose in 95% of the patients. To validate the margin formalism, the authors retrospectively analyzed nine patients who were previously treated with noncoplanar conformal beams. Cone-beam computed tomography (CBCT) was used in the patient setup. The isocenter shifts between the CBCT and linac were measured for a Varian Trilogy linear accelerator for three months. For each plan, the authors shifted the isocenter of the plan in each direction by ±3 mm simultaneously to simulate the worst setup scenario. Subsequently, the asymptotic behavior of the CTV V80% for each patient was studied as the setup error approached the CTV-PTV margin. Results: The authors found that the proper margin for single-fraction frameless SRS cases with brain cancer was about 3 mm for the machine investigated in this study. The isocenter shifts between the CBCT and the linac remained almost constant over a period of three months for this specific machine, confirming the assumption that the machine's systematic error distribution could be approximated as a delta function. This definition is especially relevant to a single-fraction treatment. The prescribed dose coverage for all the patients investigated was 96.1% ± 5.5% with an extreme 3-mm setup error in all three directions simultaneously. It was found that the effect of the setup error on dose coverage was tumor-location dependent: it mostly affected tumors located in the posterior part of the brain, resulting in a minimum coverage of approximately 72%, entirely due to the unique geometry of the posterior head. Conclusions: Margin expansion formulas were derived for single-fraction frameless SRS such that the CTV would receive the prescribed dose in 95% of the patients treated for brain cancer. The margins defined in this study are machine-specific and account for nonzero mean systematic error. The margin for single-fraction SRS for a group of machines was also derived in this paper.
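As a rough numerical companion to the margin recipe above (and not the paper's closed-form derivation), one can treat the machine's systematic error as a fixed offset and the residual setup error as zero-mean Gaussian, then solve for the smallest one-dimensional margin giving 95% population coverage. The offset, standard deviation, and bracketing interval below are illustrative:

```python
# Illustrative numerical margin search: with systematic error as a fixed
# offset D (delta-function model) and residual setup error e ~ N(0, sigma),
# find the smallest margin m with P(|D + e| <= m) >= 0.95.
from scipy.stats import norm
from scipy.optimize import brentq

def coverage(m, D, sigma):
    return norm.cdf((m - D) / sigma) - norm.cdf((-m - D) / sigma)

def margin(D, sigma, p=0.95):
    return brentq(lambda m: coverage(m, D, sigma) - p,
                  1e-9, 10 * (abs(D) + sigma))

# Example: 1 mm systematic offset, 1 mm residual SD -> roughly 2.6 mm here.
print(f"{margin(1.0, 1.0):.2f} mm")
```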
Acoustic emission from single point machining: Part 2, Signal changes with tool wear. Revised
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heiple, C.R.; Carpenter, S.H.; Armentrout, D.L.
1989-12-31
Changes in acoustic emission signal characteristics with tool wear were monitored during single point machining of 4340 steel and Ti-6Al-4V heat treated to several strength levels, 6061-T6 aluminum, 304 stainless steel, 17-4PH stainless steel, 410 stainless steel, lead, and Teflon. No signal characteristic changed in the same way with tool wear for all materials tested. A single change in a particular AE signal characteristic with tool wear valid for all materials probably does not exist. Nevertheless, changes in various signal characteristics with wear for a given material may be sufficient to monitor tool wear.
Decision-theoretic control of EUVE telescope scheduling
NASA Technical Reports Server (NTRS)
Hansson, Othar; Mayer, Andrew
1993-01-01
This paper describes a decision-theoretic scheduler (DTS) designed to employ state-of-the-art probabilistic inference technology to speed the search for efficient solutions to constraint-satisfaction problems. Our approach involves assessing the performance of heuristic control strategies that are normally hard-coded into scheduling systems and using probabilistic inference to aggregate this information in light of the features of a given problem. The Bayesian Problem-Solver (BPS) introduced a similar approach to solving single-agent and adversarial graph search problems, yielding orders-of-magnitude improvement over traditional techniques. Initial efforts suggest that similar improvements will be realizable when applied to typical constraint-satisfaction scheduling problems.
Experiments with a decision-theoretic scheduler
NASA Technical Reports Server (NTRS)
Hansson, Othar; Holt, Gerhard; Mayer, Andrew
1992-01-01
This paper describes DTS, a decision-theoretic scheduler designed to employ state-of-the-art probabilistic inference technology to speed the search for efficient solutions to constraint-satisfaction problems. Our approach involves assessing the performance of heuristic control strategies that are normally hard-coded into scheduling systems, and using probabilistic inference to aggregate this information in light of features of a given problem. BPS, the Bayesian Problem-Solver, introduced a similar approach to solving single-agent and adversarial graph search problems, yielding orders-of-magnitude improvement over traditional techniques. Initial efforts suggest that similar improvements will be realizable when applied to typical constraint-satisfaction scheduling problems.
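A toy version of this aggregation idea: track each heuristic control strategy's historical success with a Beta posterior and pick the strategy with the highest expected success for the next problem. This is an illustrative simplification (the real DTS conditions on problem features and aggregates much richer performance statistics), with made-up strategy names and history:

```python
# Sketch: Bayesian bookkeeping over heuristic strategies' success rates,
# used to select a control strategy for the next scheduling problem.
from dataclasses import dataclass

@dataclass
class Strategy:
    name: str
    successes: int = 1   # Beta(1, 1) uniform prior
    failures: int = 1

    def expected_success(self):
        return self.successes / (self.successes + self.failures)

    def record(self, solved: bool):
        if solved:
            self.successes += 1
        else:
            self.failures += 1

strategies = [Strategy("min-conflicts"), Strategy("most-constrained-first")]
history = [("min-conflicts", True), ("min-conflicts", False),
           ("most-constrained-first", True), ("most-constrained-first", True)]
for name, solved in history:
    next(s for s in strategies if s.name == name).record(solved)

best = max(strategies, key=Strategy.expected_success)
print(best.name, round(best.expected_success(), 2))
```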
Abdullahi, Mohammed; Ngadi, Md Asri
2016-01-01
Cloud computing has attracted significant attention from the research community because of the rapid migration of Information Technology services to its domain. Advances in virtualization technology have made cloud computing very popular by easing the deployment of application services. Tasks are submitted to cloud datacenters to be processed on a pay-as-you-go basis. Task scheduling is one of the significant research challenges in cloud computing environments. The current formulation of the task scheduling problem has been shown to be NP-complete, hence finding the exact solution, especially for large problem sizes, is intractable. The heterogeneous and dynamic nature of cloud resources makes optimum task scheduling non-trivial, so efficient task scheduling algorithms are required for optimum resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in the cloud computing environment using a proposed Simulated Annealing (SA) based SOS (SASOS), in order to improve the convergence rate and solution quality of SOS. The SOS algorithm has a strong global exploration capability and uses few parameters. The systematic reasoning ability of SA is employed to find better solutions in local solution regions, thus strengthening the local search ability of SOS. Also, a fitness function is proposed which takes into account the utilization level of virtual machines (VMs) and which reduces makespan and the degree of imbalance among VMs. The CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workloads. Simulation results showed that the hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan.
Abdullahi, Mohammed; Ngadi, Md Asri
2016-01-01
Cloud computing has attracted significant attention from the research community because of the rapid migration of Information Technology services to its domain. Advances in virtualization technology have made cloud computing very popular by easing the deployment of application services. Tasks are submitted to cloud datacenters to be processed on a pay-as-you-go basis. Task scheduling is one of the significant research challenges in cloud computing environments. The current formulation of the task scheduling problem has been shown to be NP-complete, hence finding the exact solution, especially for large problem sizes, is intractable. The heterogeneous and dynamic nature of cloud resources makes optimum task scheduling non-trivial, so efficient task scheduling algorithms are required for optimum resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in the cloud computing environment using a proposed Simulated Annealing (SA) based SOS (SASOS), in order to improve the convergence rate and solution quality of SOS. The SOS algorithm has a strong global exploration capability and uses few parameters. The systematic reasoning ability of SA is employed to find better solutions in local solution regions, thus strengthening the local search ability of SOS. Also, a fitness function is proposed which takes into account the utilization level of virtual machines (VMs) and which reduces makespan and the degree of imbalance among VMs. The CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workloads. Simulation results showed that the hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan. PMID:27348127
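To make the hybridization concrete, here is a hedged sketch of the SA acceptance step applied to candidate task-to-VM assignments inside an SOS-style search loop. The task lengths, VM speeds, fitness weights, and cooling rate are illustrative assumptions, not values from the paper:

```python
# Sketch of SA-style acceptance grafted onto a population-free stand-in for
# the SOS phases: a solution maps each task to a VM; the fitness penalises
# makespan plus the degree of imbalance among VMs.
import math
import random

random.seed(3)
tasks = [8, 3, 7, 2, 5, 9, 4]          # task lengths (assumed)
vm_speed = [1.0, 2.0, 1.5]             # VM processing speeds (assumed)

def fitness(sol):
    loads = [0.0] * len(vm_speed)
    for task, vm in zip(tasks, sol):
        loads[vm] += task / vm_speed[vm]
    makespan = max(loads)
    imbalance = makespan - min(loads)
    return makespan + 0.5 * imbalance   # lower is better

def sa_accept(current, candidate, T):
    d = fitness(candidate) - fitness(current)
    return d < 0 or random.random() < math.exp(-d / T)

sol = [random.randrange(len(vm_speed)) for _ in tasks]
T = 5.0
for it in range(200):                   # stands in for the SOS update loop
    cand = sol[:]                       # perturb one task's assignment
    cand[random.randrange(len(tasks))] = random.randrange(len(vm_speed))
    if sa_accept(sol, cand, T):
        sol = cand
    T *= 0.98                           # geometric cooling (assumed rate)
print(sol, round(fitness(sol), 2))
```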
A Mixed Integer Linear Program for Solving a Multiple Route Taxi Scheduling Problem
NASA Technical Reports Server (NTRS)
Montoya, Justin Vincent; Wood, Zachary Paul; Rathinam, Sivakumar; Malik, Waqar Ahmad
2010-01-01
Aircraft movements on taxiways at busy airports often create bottlenecks. This paper introduces a mixed integer linear program to solve a Multiple Route Aircraft Taxi Scheduling Problem. The outputs of the model are optimal taxi schedules, which include routing decisions for taxiing aircraft. The model extends an existing single-route formulation to include routing decisions. An efficient comparison framework compares the multi-route formulation and the single-route formulation. The multi-route model is exercised for east-side airport surface traffic at Dallas/Fort Worth International Airport to determine whether any arrival taxi time savings can be achieved by allowing arrivals two taxi routes: a route that crosses an active departure runway and a perimeter route that avoids the crossing. Results indicate that the multi-route formulation yields reduced arrival taxi times over the single-route formulation only when a perimeter taxiway is used. In conditions where the departure aircraft are given an optimal and fixed takeoff sequence, cumulative arrival taxi time savings in the multi-route formulation can be as high as 3.6 hours more than in the single-route formulation. If the departure sequence is not optimal, the multi-route formulation yields smaller taxi time savings over the single-route formulation, but the average arrival taxi time is still significantly decreased.
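A minimal sketch of the route-choice core of such a formulation, written with the PuLP modelling library, may clarify the structure. The taxi times and the crossing-capacity constraint are invented for illustration; the paper's full model also sequences aircraft over shared taxiway segments and runway crossings:

```python
# Toy MILP: each arrival picks either the runway-crossing route or the
# longer perimeter route; minimise total taxi time subject to a cap on
# how many arrivals may cross the active departure runway.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

arrivals = ["AC1", "AC2", "AC3", "AC4"]
taxi_time = {  # minutes: (runway-crossing route, perimeter route), assumed
    "AC1": (9, 12), "AC2": (8, 11), "AC3": (10, 13), "AC4": (7, 10)}
MAX_CROSSINGS = 2          # departures limit how many arrivals may cross

prob = LpProblem("taxi_route_choice", LpMinimize)
cross = {a: LpVariable(f"cross_{a}", cat=LpBinary) for a in arrivals}

# Total taxi time: crossing time if cross=1, else perimeter time.
prob += lpSum(cross[a] * taxi_time[a][0]
              + (1 - cross[a]) * taxi_time[a][1] for a in arrivals)
prob += lpSum(cross.values()) <= MAX_CROSSINGS

prob.solve()
for a in arrivals:
    print(a, "crossing" if cross[a].value() == 1 else "perimeter")
```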
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cernoch, Antonin; Soubusta, Jan; Celechovska, Lucie
We report on experimental implementation of the optimal universal asymmetric 1->2 quantum cloning machine for qubits encoded into polarization states of single photons. Our linear-optical machine performs asymmetric cloning by partially symmetrizing the input polarization state of signal photon and a blank copy idler photon prepared in a maximally mixed state. We show that the employed method of measurement of mean clone fidelities exhibits strong resilience to imperfect calibration of the relative efficiencies of single-photon detectors used in the experiment. Reliable characterization of the quantum cloner is thus possible even when precise detector calibration is difficult to achieve.
Low-cost autonomous perceptron neural network inspired by quantum computation
NASA Astrophysics Data System (ADS)
Zidan, Mohammed; Abdel-Aty, Abdel-Haleem; El-Sadek, Alaa; Zanaty, E. A.; Abdel-Aty, Mahmoud
2017-11-01
Achieving low-cost learning with reliable accuracy is one of the important goals in building intelligent machines that save time and energy and can learn on machines with limited computational resources. In this paper, we propose an efficient algorithm for a perceptron neural network inspired by quantum computing, composed of a single neuron, that classifies linearly separable applications after a single training iteration O(1). The algorithm is applied to a real-world data set, and the results outperform other state-of-the-art algorithms.
Government Style as a Factor in Information Flow: Television Programming in Argentina, 1979-1988.
ERIC Educational Resources Information Center
John, Jeffrey Alan
Noting that Argentina's recent history is particularly useful for analysis of the varying effects that differing government styles can have on a single mass communication system, a study compared Argentine (specifically Buenos Aires) television's 1979 programming schedule, prepared during a military dictatorship, with recent schedules prepared…
Algorithms for Scheduling and Network Problems
1991-09-01
time. We already know, by Lemma 2.2.1, that WOPT = O(log(mpU)), so if we could solve this integer program optimally we would be done. However, the...Folydirat, 15:177-191, 1982. [6] I.S. Belov and Ya. N. Stolin. An algorithm in a single path operations scheduling problem. In Mathematical Economics and
Alternative Schedules: What, How, and to What End?
ERIC Educational Resources Information Center
Wasley, Patricia A.
1997-01-01
The principal of a traditional high school in upstate New York asked faculty to reexamine the school schedule. After considerable debate, teachers decided to rotate class times so that no single class always suffered the after-lunch slump or day's-end rowdiness. Having gained confidence, a permanent teacher committee has added time blocks and…
Nersesova, L S; Petrosian, M S; Gazariants, M G; Mkrtchian, Z S; Meliksetian, G O; Pogosian, L G; Akopian, Zh I
2014-01-01
The comparative analysis of rat liver and blood serum creatine kinase, alanine aminotransferase, aspartate aminotransferase, alkaline phosphatase and purine nucleoside phosphorylase post-radiation activity levels, after a total two-hour-long single or fractional exposure of the animals to a low-intensity 900 MHz electromagnetic field, showed that the enzymes most sensitive to both radiation schedules are liver creatine kinase, as well as blood serum creatine kinase and alkaline phosphatase. According to the comparative analysis of the dynamics of changes in the activity levels of liver and blood serum creatine kinase, alanine aminotransferase, aspartate aminotransferase and purine nucleoside phosphorylase, both single and fractional radiation schedules do not affect the permeability of the hepatocyte cell membrane, but rather cause changes in their energy metabolism. The correlation analysis of the post-radiation activity level changes of the investigated enzymes did not reveal a clear relationship between them. The dynamics of post-radiation changes in the activity levels of the investigated enzymes following single and short-term fractional schedules of radiation did not differ essentially.
TRUFLO GONDOLA, USED WITH THE HUNTER 10 MOLDING MACHINE, OPERATES ...
TRUFLO GONDOLA, USED WITH THE HUNTER 10 MOLDING MACHINE, OPERATES THE SAME AS THE TWO LARGER TRUFLOS USED IN CONJUNCTION WITH THE TWO HUNTER 20S. EACH GONDOLA IS CONNECTED TO THE NEXT AND RIDES ON A SINGLE TRACK RAIL FROM MOLDING MACHINES THROUGH POURING AREAS CARRYING A MOLD AROUND TWICE BEFORE THE MOLD IS PUSHED OFF ONTO A VIBRATING SHAKEOUT CONVEYOR. - Southern Ductile Casting Company, Casting, 2217 Carolina Avenue, Bessemer, Jefferson County, AL
NASA Astrophysics Data System (ADS)
Kawai, Hiroyuki; Morimoto, Akihito; Higuchi, Kenichi; Sawahashi, Mamoru
This paper investigates the gain of inter-Node B macro diversity for a scheduled-based shared channel using single-carrier FDMA radio access in the Evolved UTRA (UMTS Terrestrial Radio Access) uplink based on system-level simulations. More specifically, we clarify the gain of inter-Node B soft handover (SHO) with selection combining at the radio frame length level (=10msec) compared to that for hard handover (HHO) for a scheduled-based shared data channel, considering the gains of key packet-specific techniques including channel-dependent scheduling, adaptive modulation and coding (AMC), hybrid automatic repeat request (ARQ) with packet combining, and slow transmission power control (TPC). Simulation results show that the inter-Node B SHO increases the user throughput at the cell edge by approximately 10% for a short cell radius such as 100-300m due to the diversity gain from a sudden change in other-cell interference, which is a feature specific to full scheduled-based packet access. However, it is also shown that the gain of inter-Node B SHO compared to that for HHO is small in a macrocell environment when the cell radius is longer than approximately 500m due to the gains from hybrid ARQ with packet combining, slow TPC, and proportional fairness based channel-dependent scheduling.
Analysis of Adhesively Bonded Ceramics Using an Asymmetric Wedge Test
2008-12-01
[Figure caption fragments recovered from the report: Figure 2, average crack...; the flaw, indicated by the white arrow, is a subsurface semi-elliptical crack induced by surface machining damage in a flexure specimen; a strength-limiting orthogonal surface machining crack in an alumina flexure specimen coated with a single layer of film adhesive.]
Scheduling Software for Complex Scenarios
NASA Technical Reports Server (NTRS)
2006-01-01
Preparing a vehicle and its payload for a single launch is a complex process that involves thousands of operations. Because the equipment and facilities required to carry out these operations are extremely expensive and limited in number, optimal assignment and efficient use are critically important. Overlapping missions that compete for the same resources, ground rules, safety requirements, and the unique needs of processing vehicles and payloads destined for space impose numerous constraints that, when combined, require advanced scheduling. Traditional scheduling systems use simple algorithms and criteria when selecting activities and assigning resources and times to each activity. Schedules generated by these simple decision rules are, however, frequently far from optimal. To resolve mission-critical scheduling issues and predict possible problem areas, NASA historically relied upon expert human schedulers who used their judgment and experience to determine where things should happen, whether they will happen on time, and whether the requested resources are truly necessary.
Improved Scheduling Mechanisms for Synchronous Information and Energy Transmission.
Qin, Danyang; Yang, Songxiang; Zhang, Yan; Ma, Jingya; Ding, Qun
2017-06-09
Wireless energy collecting technology can effectively reduce the network time overhead and prolong the wireless sensor network (WSN) lifetime. However, traditional energy collecting technology cannot balance ergodic channel capacity against average collected energy. In order to address the network transmission efficiency and the limited energy of wireless devices, three improved scheduling mechanisms are proposed for different channel conditions: an improved signal-to-noise ratio (SNR) scheduling mechanism (IS2M), an improved N-SNR scheduling mechanism (INS2M) and an improved Equal Throughput scheduling mechanism (IETSM), to improve whole-network performance. Meanwhile, the average collected energy of single users and the ergodic channel capacity of the three scheduling mechanisms can be obtained through order statistics theory in Rayleigh, Ricean, Nakagami-m and Weibull fading channels. It is concluded that the proposed scheduling mechanisms achieve a better balance between energy collection and data transmission, providing a new solution for synchronous information and energy transmission in WSNs.
Improved Scheduling Mechanisms for Synchronous Information and Energy Transmission
Qin, Danyang; Yang, Songxiang; Zhang, Yan; Ma, Jingya; Ding, Qun
2017-01-01
Wireless energy collecting technology can effectively reduce the network time overhead and prolong the wireless sensor network (WSN) lifetime. However, traditional energy collecting technology cannot balance ergodic channel capacity against average collected energy. In order to address the network transmission efficiency and the limited energy of wireless devices, three improved scheduling mechanisms are proposed for different channel conditions: an improved signal-to-noise ratio (SNR) scheduling mechanism (IS2M), an improved N-SNR scheduling mechanism (INS2M) and an improved Equal Throughput scheduling mechanism (IETSM), to improve whole-network performance. Meanwhile, the average collected energy of single users and the ergodic channel capacity of the three scheduling mechanisms can be obtained through order statistics theory in Rayleigh, Ricean, Nakagami-m and Weibull fading channels. It is concluded that the proposed scheduling mechanisms achieve a better balance between energy collection and data transmission, providing a new solution for synchronous information and energy transmission in WSNs. PMID:28598395
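The order-statistics flavor of these results can be illustrated with a quick Monte-Carlo sketch of a max-SNR rule, a simplified stand-in for the proposed mechanisms; the user count, mean SNR, and the assumption that non-scheduled users harvest energy proportional to their channel gains are all illustrative:

```python
# Monte-Carlo sketch: in each slot an SNR-based rule schedules the strongest
# of N Rayleigh-fading users for data; the remaining users collect energy.
import numpy as np

rng = np.random.default_rng(4)
N, slots, mean_snr = 4, 100_000, 1.0

g = rng.exponential(mean_snr, (slots, N))     # Rayleigh fading -> exp. gains
best = g.argmax(axis=1)                       # max-SNR scheduling rule

rows = np.arange(slots)
capacity = np.log2(1 + g[rows, best]).mean()  # ergodic capacity (bit/s/Hz)
harvest = g.copy()
harvest[rows, best] = 0.0                     # scheduled user transmits data
avg_energy = harvest.sum(axis=1).mean() / (N - 1)

print(f"ergodic capacity ~ {capacity:.2f} bit/s/Hz, "
      f"avg collected energy per idle user ~ {avg_energy:.2f}")
```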
A Multiple Ant Colony Metahuristic for the Air Refueling Tanker Assignment Problem
2002-03-01
Problem The tanker assignment problem can be modeled as a job shop scheduling problem (JSSP). The JSSP is made up of n jobs, composed of m ordered...points) to be processed on all the machines (tankers). The problem with using JSSP is that the tanker assignment problem has multiple objectives... JSSP will minimize the time it takes for all jobs, but this may take an inordinate number of tankers. Thus using JSSP alone is not necessarily a good
1988-08-01
Anderson, J. R. (1986). Knowledge compilation: The general learning mechanism. In R. S. Michalski, J. G. Carbonell, & T. M. Mitchell (Eds.), Machine Learning. Report No. UPITT/LRDC/ONR/KUL-88-03. Approved for public release; distribution unlimited.
Special Inspector General for Iraq Reconstruction
2012-10-30
security facilities at Umm Qasr slipped three months. Because these upgrades might not be completed until after the OSC-I sites are transitioned... Basrah 11/18/2011 1/23/2013 1.1 0.6 0.5 Procure Electrical Coil Winding Machines Multiple 7/26/2012 11/23/2012 0.7 – 0.7 PHC Repairs in Central Iraq... complex in Baghdad continued to be the second-largest ongoing project. Once again, the schedule slipped. In April, USACE reported that it expected the