Adaptive Load-Balancing Algorithms Using Symmetric Broadcast Networks
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Biswas, Rupak; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
In a distributed-computing environment, it is important to ensure that the processor workloads are adequately balanced. Among numerous load-balancing algorithms, a unique approach due to Das and Prasad defines a symmetric broadcast network (SBN) that provides a robust communication pattern among the processors in a topology-independent manner. In this paper, we propose and analyze three novel SBN-based load-balancing algorithms, and implement them on an SP2. A thorough experimental study with Poisson-distributed synthetic loads demonstrates that these algorithms are very effective in balancing system load while minimizing processor idle time. They also compare favorably with several other existing load-balancing techniques. Additional experiments performed with real data demonstrate that the SBN approach is effective in adaptive computational science and engineering applications where dynamic load balancing is crucial.
Adaptive Load-Balancing Algorithms using Symmetric Broadcast Networks
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Biegel, Bryan A. (Technical Monitor)
2002-01-01
In a distributed computing environment, it is important to ensure that the processor workloads are adequately balanced. Among numerous load-balancing algorithms, a unique approach due to Das and Prasad defines a symmetric broadcast network (SBN) that provides a robust communication pattern among the processors in a topology-independent manner. In this paper, we propose and analyze three efficient SBN-based dynamic load-balancing algorithms, and implement them on an SGI Origin2000. A thorough experimental study with Poisson-distributed synthetic loads demonstrates that our algorithms are effective in balancing system load. By optimizing completion time and idle time, the proposed algorithms are shown to compare favorably with several existing approaches.
Fast Optimal Load Balancing Algorithms for 1D Partitioning
Pinar, Ali; Aykanat, Cevdet
2002-12-09
One-dimensional decomposition of nonuniform workload arrays for optimal load balancing is investigated. The problem has been studied in the literature as the "chains-on-chains partitioning" problem. Despite extensive research efforts, heuristics are still used in the parallel computing community with the "hope" of good decompositions and the "myth" that exact algorithms are hard to implement and not runtime efficient. The main objective of this paper is to show that using exact algorithms instead of heuristics yields significant load-balance improvements with a negligible increase in preprocessing time. We provide detailed pseudocodes of our algorithms so that our results can be easily reproduced. We start with a review of the literature on the chains-on-chains partitioning problem. We propose improvements on these algorithms as well as efficient implementation tips. We also introduce novel algorithms that are both asymptotically and runtime efficient. We experimented with data sets from two different applications: sparse matrix computations and direct volume rendering. Experiments showed that the proposed algorithms are 100 times faster than a single sparse matrix-vector multiplication for 64-way decompositions on average. Experiments also verify that load balance can be significantly improved by using exact algorithms instead of heuristics. These two findings show that the exact algorithms with efficient implementations discussed in this paper can effectively replace heuristics.
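The chains-on-chains problem that this abstract discusses admits a simple exact solution: binary-search the bottleneck load and use a greedy probe for feasibility. The sketch below (hypothetical function names, integer weights assumed) illustrates one such exact approach, not the authors' specific algorithms.

```python
import bisect

def probe(prefix, p, bottleneck):
    """Greedy feasibility test: can the tasks be split into at most p
    consecutive chains with every chain load <= bottleneck?"""
    n = len(prefix) - 1
    pos = 0
    for _ in range(p):
        # last task whose inclusion keeps this chain within the bottleneck
        pos = bisect.bisect_right(prefix, prefix[pos] + bottleneck) - 1
        if pos == n:
            return True
    return False

def chains_on_chains(weights, p):
    """Exact optimal bottleneck load via binary search over integer loads."""
    prefix = [0]
    for w in weights:
        prefix.append(prefix[-1] + w)
    lo, hi = max(weights), prefix[-1]
    while lo < hi:
        mid = (lo + hi) // 2
        if probe(prefix, p, mid):
            hi = mid
        else:
            lo = mid + 1
    return lo
```

Each probe is O(p log n) thanks to the prefix-sum bisection, which is why exact search adds only negligible preprocessing time compared with a heuristic split.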
LBR: Load Balancing Routing Algorithm for Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Daabaj, Khaled; Dixon, Mike; Koziniec, Terry
2010-06-01
Homogeneous wireless sensor networks (WSNs) are organized using identical sensor nodes, but the nature of WSN operations results in an imbalanced workload on gateway sensor nodes, which may lead to a hot-spot or routing-hole problem. The routing-hole problem can be considered a natural result of the tree-based routing schemes widely used in WSNs, where all nodes construct a multi-hop routing tree to a centralized root, e.g., a gateway or base station. For example, sensor nodes on the routing path and closer to the base station deplete their own energy faster than other nodes, and sensor nodes with the best link state to the base station are overloaded with traffic from the rest of the network and experience a faster energy depletion rate than their peers. Routing protocols for WSNs are typically reliability-oriented, and their use of a reliability metric to avoid unreliable links can make the imbalance worse. However, none of these reliability-oriented routing protocols explicitly uses load balancing in its routing scheme. Since improving network lifetime is a fundamental challenge of WSNs, we present, in this chapter, a novel, energy-wise, load-balancing routing (LBR) algorithm that addresses load balancing in an energy-efficient manner by maintaining a reliable set of parent nodes. This allows sensor nodes to quickly find a new parent upon parent loss due to node failure or an energy hole. The proposed routing algorithm is tested using simulations, and the results demonstrate that it outperforms the reliability-based MultiHopLQI routing algorithm.
Dynamic load balance scheme for the DSMC algorithm
Li, Jin; Geng, Xiangren; Jiang, Dingwu; Chen, Jianqiang
2014-12-09
The direct simulation Monte Carlo (DSMC) algorithm, devised by Bird, has been applied to a wide range of rarefied flow problems over the past 40 years. While DSMC is well suited to parallel implementation on powerful multi-processor architectures, it also introduces a large load imbalance across the processor array, even for small examples. The load imposed on a processor by a DSMC calculation is determined to a large extent by the total number of simulator particles it holds. Since most flows are impulsively started with an initial particle distribution that differs markedly from the steady state, the total number of simulator particles changes dramatically, and a load balance based on the initial distribution breaks down as the steady state is reached. This load imbalance, together with DSMC's large computational cost, has limited its application to rarefied or simple transitional flows. In this paper, by using METIS, a software package for partitioning unstructured graphs, and taking the total number of simulator particles in each cell as weight information, we achieve a repartitioning based on the principle that each processor handles an approximately equal number of simulator particles. The computation pauses several times to refresh the particle counts in each processor and repartition the whole domain, so the load balance across the processor array holds throughout the computation and the parallel efficiency is improved effectively. The benchmark case of a cylinder submerged in hypersonic flow has been simulated numerically, along with hypersonic flow past a complex wing-body configuration. For both cases, the results show that the computational time can be reduced by about 50%.
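Setting aside mesh connectivity (which METIS handles in the paper), the principle of giving each processor an approximately equal number of simulator particles can be illustrated with a greedy heaviest-first assignment; the function name and `cell_particles` mapping are illustrative, not taken from the DSMC code.

```python
import heapq

def repartition(cell_particles, nprocs):
    """Assign cells to processors so each handles roughly the same
    total number of simulator particles (greedy heaviest-first bin
    packing).  A real DSMC code would instead feed these particle
    counts as weights to a graph partitioner such as METIS, which
    also keeps the sub-domains spatially contiguous."""
    heap = [(0, p) for p in range(nprocs)]   # (particle load, proc id)
    assignment = {}
    for cell in sorted(cell_particles, key=cell_particles.get, reverse=True):
        load, p = heapq.heappop(heap)        # lightest processor so far
        assignment[cell] = p
        heapq.heappush(heap, (load + cell_particles[cell], p))
    return assignment
```

Re-running this assignment with refreshed particle counts at the pause points mirrors the paper's periodic repartitioning.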
A global plan policy for coherent co-operation in distributed dynamic load balancing algorithms
NASA Astrophysics Data System (ADS)
Kara, M.
1995-12-01
Distributed-controlled dynamic load-balancing algorithms are known to have several advantages over centralized algorithms, such as scalability and fault tolerance. Distributed implies that control is decentralized and that a copy of the algorithm (called a scheduler) is replicated on each host of the network. However, distributed control also contributes to a lack of global goals and a lack of coherence. This paper presents a new algorithm called DGP (decentralized global plans) that addresses the problem of coherence and co-ordination in distributed dynamic load-balancing algorithms. The DGP algorithm is based on a strategy called global plans (GP), and aims at maintaining all computational loads of a distributed system within a band called delta. The rationale for the design of DGP is to allow each scheduler to consider the actions of its peer schedulers. With this level of co-ordination, the schedulers can act more as a coherent team. This new approach first explicitly specifies a global goal and then designs a strategy around this global goal such that each scheduler (i) takes into account local decisions made by other schedulers; (ii) takes into account the effect of its local decisions on the overall system; and (iii) ensures load balancing. An experimental evaluation of DGP against two other well-known dynamic load-balancing algorithms published in the literature shows that DGP performs consistently better. More significantly, the results indicate that the global-plan approach provides a better framework for the design of distributed dynamic load-balancing algorithms.
A Universal Tare Load Prediction Algorithm for Strain-Gage Balance Calibration Data Analysis
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2011-01-01
An algorithm is discussed that may be used to estimate tare loads of wind tunnel strain-gage balance calibration data. The algorithm was originally developed by R. Galway of IAR/NRC Canada and has been described in the literature for the iterative analysis technique. Basic ideas of Galway's algorithm, however, are universally applicable and work for both the iterative and the non-iterative analysis technique. A recent modification of Galway's algorithm is presented that improves the convergence behavior of the tare load prediction process if it is used in combination with the non-iterative analysis technique. The modified algorithm allows an analyst to use an alternate method for the calculation of intermediate non-linear tare load estimates whenever Galway's original approach does not lead to a convergence of the tare load iterations. It is also shown in detail how Galway's algorithm may be applied to the non-iterative analysis technique. Hand load data from the calibration of a six-component force balance is used to illustrate the application of the original and modified tare load prediction method. During the analysis of the data both the iterative and the non-iterative analysis technique were applied. Overall, predicted tare loads for combinations of the two tare load prediction methods and the two balance data analysis techniques showed excellent agreement as long as the tare load iterations converged. The modified algorithm, however, appears to have an advantage over the original algorithm when absolute voltage measurements of gage outputs are processed using the non-iterative analysis technique. In these situations only the modified algorithm converged because it uses an exact solution of the intermediate non-linear tare load estimate for the tare load iteration.
Dynamic load balancing algorithm for molecular dynamics based on Voronoi cells domain decompositions
Fattebert, J.-L.; Richards, D.F.; Glosli, J.N.
2012-12-01
We present a new algorithm for automatic parallel load balancing in classical molecular dynamics. It assumes a spatial domain decomposition of particles into Voronoi cells. It is a gradient method that attempts to minimize a cost function by displacing the Voronoi sites associated with each processor/sub-domain along steepest-descent directions. Excellent load balance has been obtained for quasi-2D and 3D practical applications, with up to 440 million particles on 65,536 MPI tasks.
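As an illustration only (the paper minimizes a specific cost function along steepest-descent directions), a single balancing step might move each Voronoi site toward heavier neighbouring sub-domains so its cell absorbs some of their volume; all names, the step size, and the use of raw load differences below are assumptions.

```python
import numpy as np

def shift_sites(sites, loads, neighbors, step=0.1):
    """One illustrative balancing step: each Voronoi site moves toward
    heavier neighbouring sub-domains (so its cell absorbs some of their
    volume) and away from lighter ones.  This is a sketch of the idea,
    not the paper's actual descent on its cost function."""
    sites = np.asarray(sites, dtype=float)
    new = sites.copy()
    for u, nbrs in neighbors.items():
        for v in nbrs:
            d = sites[v] - sites[u]
            norm = np.linalg.norm(d)
            if norm > 0.0:
                # positive when neighbour v is overloaded relative to u
                new[u] += step * (loads[v] - loads[u]) * d / norm
    return new
```

Iterating such steps until the load differences vanish is the gradient-descent loop the abstract describes.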
A load-balance path selection algorithm in automatically switched optical networks (ASON)
NASA Astrophysics Data System (ADS)
Gao, Fei; Lu, Yueming; Ji, Yuefeng
2007-11-01
In this paper, a novel load-balance algorithm is proposed to provide an approach to optimized path selection in automatically switched optical networks (ASON). By using this algorithm, improved survivability and low congestion can be achieved. The static nature of current routing algorithms, such as OSPF or IS-IS, worsens the situation, since traffic concentrates on the "least-cost" paths, congesting some links while leaving others lightly loaded. The key, therefore, is to select suitable paths that balance the network load, optimizing network resource utilization and traffic performance. We present a method that provides traffic-engineering control, so that carriers can define their own optimization strategies and apply them to path selection for dynamic load balancing. Taking load distribution and topology information into account, a capacity utilization factor is introduced into Dijkstra's shortest-path selection to balance traffic over the network. Routing simulations have been run over mesh networks to compare the two algorithms, and the simulation results support conclusions about their relative performance.
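One way to fold a capacity utilization factor into Dijkstra is to inflate each link's base cost by its current utilization; the penalty form `base * (1 + alpha * utilization)` below is an assumption for illustration, not the paper's exact formula.

```python
import heapq

def balanced_path(graph, used, capacity, src, dst, alpha=1.0):
    """Dijkstra in which each link's cost grows with its current
    utilization, steering new paths away from congested links.
    graph[u] maps neighbour v to a base cost; used/capacity give the
    per-link load.  The utilization penalty here is illustrative."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, base in graph[u].items():
            util = used[(u, v)] / capacity[(u, v)]
            w = base * (1.0 + alpha * util)   # penalise loaded links
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(pq, (d + w, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))
```

With `alpha = 0`, this reduces to plain least-cost routing, which is exactly the static OSPF/IS-IS behaviour the abstract criticizes.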
A parallel dynamic load balancing algorithm for 3-D adaptive unstructured grids
NASA Technical Reports Server (NTRS)
Vidwans, A.; Kallinderis, Y.; Venkatakrishnan, V.
1993-01-01
Adaptive local grid refinement and coarsening results in unequal distribution of workload among the processors of a parallel system. A novel method for balancing the load in cases of dynamically changing tetrahedral grids is developed. The approach employs local exchange of cells among processors in order to redistribute the load equally. An important part of the load balancing algorithm is the method employed by a processor to determine which cells within its subdomain are to be exchanged. Two such methods are presented and compared. The strategy for load balancing is based on the Divide-and-Conquer approach which leads to an efficient parallel algorithm. This method is implemented on a distributed-memory MIMD system.
Load Balancing Scientific Applications
Pearce, Olga Tkachyshyn
2014-12-01
The largest supercomputers have millions of independent processors, and concurrency levels are rapidly increasing. For ideal efficiency, developers of the simulations that run on these machines must ensure that computational work is evenly balanced among processors. Assigning work evenly is challenging because many large modern parallel codes simulate behavior of physical systems that evolve over time, and their workloads change over time. Furthermore, the cost of imbalanced load increases with scale because most large-scale scientific simulations today use a Single Program Multiple Data (SPMD) parallel programming model, and an increasing number of processors will wait for the slowest one at the synchronization points. To address load imbalance, many large-scale parallel applications use dynamic load balance algorithms to redistribute work evenly. The research objective of this dissertation is to develop methods to decide when and how to load balance the application, and to balance it effectively and affordably. We measure and evaluate the computational load of the application, and develop strategies to decide when and how to correct the imbalance. Depending on the simulation, a fast, local load balance algorithm may be suitable, or a more sophisticated and expensive algorithm may be required. We developed a model for comparison of load balance algorithms for a specific state of the simulation that enables the selection of a balancing algorithm that will minimize overall runtime.
A De-Centralized Scheduling and Load Balancing Algorithm for Heterogeneous Grid Environments
NASA Technical Reports Server (NTRS)
Arora, Manish; Das, Sajal K.; Biswas, Rupak; Biegel, Bryan (Technical Monitor)
2002-01-01
In the past two decades, numerous scheduling and load balancing techniques have been proposed for locally distributed multiprocessor systems. However, they all suffer from significant deficiencies when extended to a Grid environment: some use a centralized approach that renders the algorithm unscalable, while others assume the overhead involved in searching for appropriate resources to be negligible. Furthermore, classical scheduling algorithms do not consider a Grid node to be N-resource rich and merely work towards maximizing the utilization of one of the resources. In this paper we propose a new scheduling and load balancing algorithm for a generalized Grid model of N-resource nodes that not only takes into account the node and network heterogeneity, but also considers the overhead involved in coordinating among the nodes. Our algorithm is de-centralized, scalable, and overlaps the node coordination time with that of the actual processing of ready jobs, thus saving valuable clock cycles needed for making decisions. The proposed algorithm is studied by conducting simulations using the Message Passing Interface (MPI) paradigm.
A De-centralized Scheduling and Load Balancing Algorithm for Heterogeneous Grid Environments
NASA Technical Reports Server (NTRS)
Arora, Manish; Das, Sajal K.; Biswas, Rupak
2002-01-01
In the past two decades, numerous scheduling and load balancing techniques have been proposed for locally distributed multiprocessor systems. However, they all suffer from significant deficiencies when extended to a Grid environment: some use a centralized approach that renders the algorithm unscalable, while others assume the overhead involved in searching for appropriate resources to be negligible. Furthermore, classical scheduling algorithms do not consider a Grid node to be N-resource rich and merely work towards maximizing the utilization of one of the resources. In this paper, we propose a new scheduling and load balancing algorithm for a generalized Grid model of N-resource nodes that not only takes into account the node and network heterogeneity, but also considers the overhead involved in coordinating among the nodes. Our algorithm is decentralized, scalable, and overlaps the node coordination time with that of the actual processing of ready jobs, thus saving valuable clock cycles needed for making decisions. The proposed algorithm is studied by conducting simulations using the Message Passing Interface (MPI) paradigm.
Multidimensional spectral load balancing
Hendrickson, B.; Leland, R.
1993-01-01
We describe an algorithm for the static load balancing of scientific computations that generalizes and improves upon spectral bisection. Through a novel use of multiple eigenvectors, our new spectral algorithm can divide a computation into 4 or 8 pieces at once. These multidimensional spectral partitioning algorithms generate balanced partitions that have lower communication overhead and are less expensive to compute than those produced by spectral bisection. In addition, they automatically work to minimize message contention on a hypercube or mesh architecture. These spectral partitions are further improved by a multidimensional generalization of the Kernighan-Lin graph partitioning algorithm. Results on several computational grids are given and compared with other popular methods.
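The two-way building block that the multidimensional method generalizes can be sketched directly from the graph Laplacian; this is a generic textbook spectral bisection (function name assumed), not the authors' multi-eigenvector 4/8-way partitioner.

```python
import numpy as np

def spectral_bisection(adj):
    """Two-way spectral partition from the Fiedler vector (eigenvector
    of the Laplacian's second-smallest eigenvalue).  The paper's
    multidimensional method uses several eigenvectors at once to cut
    into 4 or 8 pieces; this sketch shows only the 2-way core."""
    adj = np.asarray(adj, dtype=float)
    lap = np.diag(adj.sum(axis=1)) - adj     # graph Laplacian
    _, vecs = np.linalg.eigh(lap)            # eigenvalues in ascending order
    fiedler = vecs[:, 1]
    # split at the median entry so the two halves are balanced in size
    return fiedler <= np.median(fiedler)
```

Using the second and third eigenvectors together, as the paper proposes, yields four parts in one pass instead of recursing on each half.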
Devi, D. Chitra; Uthariaraj, V. Rhymend
2016-01-01
Cloud computing uses the concepts of scheduling and load balancing to migrate tasks to underutilized VMs for effectively sharing the resources. The scheduling of nonpreemptive tasks in the cloud computing environment is an irrecoverable restraint, and hence the tasks have to be assigned to the most appropriate VMs at the initial placement itself. Practically, the arriving jobs consist of multiple interdependent tasks, and they may execute the independent tasks in multiple VMs or in the same VM's multiple cores. Also, jobs arrive during the run time of the server at varying random intervals under various load conditions. The participating heterogeneous resources are managed by allocating the tasks to appropriate resources by static or dynamic scheduling to make cloud computing more efficient, thus improving user satisfaction. The objective of this work is to introduce and evaluate the proposed scheduling and load-balancing algorithm by considering the capabilities of each virtual machine (VM), the task length of each requested job, and the interdependency of multiple tasks. The performance of the proposed algorithm is studied by comparing it with existing methods. PMID:26955656
NASA Astrophysics Data System (ADS)
Zhang, Xiaoning; Li, Lemin; Wang, Sheng
2006-09-01
The paper considers the problem of establishing robust routes for multi-granularity connection requests in traffic-grooming WDM mesh networks and proposes a novel Valiant Load-Balanced robust routing scheme for the hose uncertainty model. Our objective is to minimize the total network cost while assuring robust routing for all possible multi-granularity connection requests under the hose model. Since this optimization problem has recently been shown to be NP-hard, two heuristic algorithms are proposed and compared. To implement the Valiant Load-Balanced robust routing scheme in WDM mesh networks, a novel traffic-grooming algorithm called MHF (minimal hop first) is proposed. We evaluate MHF under Valiant Load-Balanced robust routing against the traditional traffic-grooming algorithm by computer simulation.
NASA Astrophysics Data System (ADS)
Yue, Yingchao; Fan, Wenhui; Xiao, Tianyuan; Ma, Cheng
2013-07-01
High Level Architecture (HLA) is the open standard in the collaborative simulation field. Scholars have paid close attention to theoretical research on, and engineering applications of, collaborative simulation based on HLA/RTI, which extends HLA in various aspects such as functionality and efficiency. However, related study of the load-balancing problem in HLA collaborative simulation is insufficient. Without load balancing, collaborative simulation under HLA/RTI may suffer reduced performance or even fatal errors. In this paper, load balancing is further divided into static and dynamic problems. A multi-objective model is established for static load balancing, and the randomness of model parameters is taken into consideration, which makes the model more credible. A Monte Carlo based optimization algorithm (MCOA) is developed to attain static load balance. For dynamic load balancing, a new type of dynamic load-balancing problem is put forward for variable-structured collaborative simulation under HLA/RTI. To minimize the impact on the running collaborative simulation, an ordinal optimization based algorithm (OOA) is devised to shorten the optimization time. Furthermore, the two algorithms are applied in simulation experiments of different scenarios, which demonstrate their effectiveness and efficiency. An engineering experiment on collaborative simulation under HLA/RTI of high-speed electric multiple units (EMU) is also conducted to identify the credibility of the proposed models and the utility of MCOA and OOA for practical engineering systems. The proposed research maintains compatibility with traditional HLA, enhances the ability to assign simulation loads onto computing units both statically and dynamically, improves the performance of collaborative simulation systems, and makes full use of hardware resources.
NASA Astrophysics Data System (ADS)
Rossi, Francesco; Londrillo, Pasquale; Sgattoni, Andrea; Sinigardi, Stefano; Turchetti, Giorgio
2012-12-01
We present `jasmine', an implementation of a fully relativistic, 3D, electromagnetic Particle-In-Cell (PIC) code, capable of running simulations in various laser plasma acceleration regimes on Graphics Processing Unit (GPU) HPC clusters. Standard energy/charge-preserving FDTD-based algorithms have been implemented using double precision and quadratic (or arbitrarily sized) shape functions for the particle weighting. When porting a PIC scheme to the GPU architecture (or, in general, a shared-memory environment), the particle-to-grid operations (e.g. the evaluation of the current density) require special care to avoid memory inconsistencies and conflicts. Here we present a robust implementation of this operation that is efficient for any number of particles per cell and any particle shape function order. Our algorithm exploits the exposed GPU memory hierarchy and avoids the use of atomic operations, which can hurt performance especially when many particles lie in the same cell. We show the code's multi-GPU scalability results and present a dynamic load-balancing algorithm. The code is written using a Python-based C++ meta-programming technique, which translates into a high level of modularity and allows easy performance tuning and simple extension of the core algorithms to various simulation schemes.
Choi, D.S.; Hasegawa, Jun; Kim, C.S.
1995-12-31
Network reconfiguration in a distribution system is realized by changing the status of sectionalizing switches, and is usually done to reduce losses or to balance load in the system. This paper presents a new method that applies a genetic algorithm to determine which sectionalizing switches to operate in order to solve the distribution-system loss-minimization reconfiguration problem. In addition, the proposed method introduces a new limited-life feature for performing natural selection of individuals. Simulations were carried out to verify the effectiveness of the proposed method. The results showed that it deals effectively with the problems of homogeneity and genetic drift associated with the initial population.
Multidimensional spectral load balancing
Hendrickson, Bruce A.; Leland, Robert W.
1996-12-24
A method of and apparatus for graph partitioning involving the use of a plurality of eigenvectors of the Laplacian matrix of the graph of the problem for which load balancing is desired. The invention is particularly useful for optimizing parallel computer processing of a problem and for minimizing total pathway lengths of integrated circuits in the design stage.
Dynamic load balancing of applications
Wheat, Stephen R.
1997-01-01
An application-level method for dynamically maintaining global load balance on a parallel computer, particularly on massively parallel MIMD computers. Global load balancing is achieved by overlapping neighborhoods of processors, where each neighborhood performs local load balancing. The method supports a large class of finite element and finite difference based applications and provides an automatic element management system to which applications are easily integrated.
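The overlapping-neighborhood idea can be approximated by classic first-order diffusive load balancing, in which each processor repeatedly exchanges a fraction of its load difference with its neighbours; this is a generic textbook scheme for illustration, not the patented element-management method, and all names are assumptions.

```python
def diffusive_balance(loads, neighbors, rounds=50, alpha=0.5):
    """First-order diffusive load balancing: in every round, each
    processor sends a damped fraction of its load surplus to each
    lighter neighbour.  Repeated local exchanges across overlapping
    neighbourhoods drive the whole machine toward the global mean."""
    loads = list(loads)
    for _ in range(rounds):
        delta = [0.0] * len(loads)
        for u, nbrs in neighbors.items():
            for v in nbrs:
                # symmetric damping keeps each pairwise flow conservative
                k = max(len(neighbors[u]), len(neighbors[v])) + 1
                flow = alpha * (loads[u] - loads[v]) / k
                delta[u] -= flow
                delta[v] += flow
        loads = [x + d for x, d in zip(loads, delta)]
    return loads
```

Because only neighbours communicate, each round costs a constant number of messages per processor, which is what makes such schemes attractive on massively parallel MIMD machines.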
Dynamic load balancing of applications
Wheat, S.R.
1997-05-13
An application-level method for dynamically maintaining global load balance on a parallel computer, particularly on massively parallel MIMD computers is disclosed. Global load balancing is achieved by overlapping neighborhoods of processors, where each neighborhood performs local load balancing. The method supports a large class of finite element and finite difference based applications and provides an automatic element management system to which applications are easily integrated. 13 figs.
Libra: Scalable Load Balance Analysis
2009-09-16
Libra is a tool for scalable analysis of load-balance data from all processes in a parallel application. Libra contains an instrumentation module that collects model data from parallel applications and a parallel compression mechanism that uses distributed wavelet transforms to gather load-balance model data in a scalable fashion. Data is output to files, and these files can be viewed in a GUI tool by Libra users. The GUI tool associates particular load-balance data with regions of code, enabling users to view the load-balance properties of distributed "slices" of their application code.
14 CFR 23.421 - Balancing loads.
Code of Federal Regulations, 2011 CFR
2011-01-01
Balancing Surfaces § 23.421 Balancing loads. (a) A horizontal surface balancing load is a load necessary to... balancing surfaces must be designed for the balancing loads occurring at any point on the limit...
14 CFR 23.421 - Balancing loads.
Code of Federal Regulations, 2010 CFR
2010-01-01
Balancing Surfaces § 23.421 Balancing loads. (a) A horizontal surface balancing load is a load necessary to... balancing surfaces must be designed for the balancing loads occurring at any point on the limit...
Isorropia Partitioning and Load Balancing Package
Energy Science and Technology Software Center (ESTSC)
2006-09-01
Isorropia is a partitioning and load-balancing package which interfaces with the Zoltan library. Isorropia can accept input objects such as matrices and matrix-graphs, and repartition/redistribute them into a better data distribution on parallel computers. Isorropia is primarily an interface package, utilizing graph and hypergraph partitioning algorithms from the Zoltan library, a third-party library to Trilinos.
Load Balancing Sequences of Unstructured Adaptive Grids
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Oliker, Leonid
1997-01-01
Mesh adaption is a powerful tool for efficient unstructured grid computations but causes load imbalance on multiprocessor systems. To address this problem, we have developed PLUM, an automatic portable framework for performing adaptive large-scale numerical computations in a message-passing environment. This paper makes several important additions to our previous work. First, a new remapping cost model is presented and empirically validated on an SP2. Next, our load balancing strategy is applied to sequences of dynamically adapted unstructured grids. Results indicate that our framework is effective on many processors for both steady and unsteady problems with several levels of adaption. Additionally, we demonstrate that a coarse starting mesh produces high quality load balancing, at a fraction of the cost required for a fine initial mesh. Finally, we show that the data remapping overhead can be significantly reduced by applying our heuristic processor reassignment algorithm.
Design of dynamic load-balancing tools for parallel applications
Devine, K.D.; Hendrickson, B.A.; Boman, E.G.; St. John, M.; Vaughan, C.T.
2000-01-03
The design of general-purpose dynamic load-balancing tools for parallel applications is more challenging than the design of static partitioning tools. Both algorithmic and software engineering issues arise. The authors have addressed many of these issues in the design of the Zoltan dynamic load-balancing library. Zoltan has an object-oriented interface that makes it easy to use and provides separation between the application and the load-balancing algorithms. It contains a suite of dynamic load-balancing algorithms, including both geometric and graph-based algorithms. Its design makes it valuable both as a partitioning tool for a variety of applications and as a research test-bed for new algorithmic development. In this paper, the authors describe Zoltan's design and demonstrate its use in an unstructured-mesh finite element application.
Static load balancing for CFD distributed simulations
Chronopoulos, A T; Grosu, D; Wissink, A; Benche, M
2001-01-26
The cost/performance ratio of networks of workstations has been constantly improving. This trend is expected to continue in the near future. The aggregate peak rate of such systems often matches or exceeds the peak rate offered by the fastest parallel computers. This has motivated research towards using a network of computers, interconnected via a fast network (cluster system) or a simple Local Area Network (LAN) (distributed system), for high-performance concurrent computations. Important research issues arise, such as (1) optimal problem partitioning and virtual interconnection topology mapping, and (2) optimal execution scheduling and load balancing. CFD codes have been efficiently implemented on homogeneous parallel systems in the past. In particular, the helicopter aerodynamics CFD code TURNS has been implemented with MPI on the IBM SP, with parallel relaxation and Krylov iterative methods used in place of more traditional recursive algorithms to enhance performance. In this implementation the space domain is divided into equal subdomains which are mapped to the processors. We consider the implementation of TURNS on a LAN of heterogeneous workstations. To deal with the load imbalance caused by differing processor speeds, we propose a suboptimal algorithm that divides the space domain into unequal subdomains and assigns them to the different computers. The algorithm can be applied to other CFD applications. We used our algorithm to schedule TURNS on a network of workstations and obtained significantly better results.
Detection of Unexpected High Correlations between Balance Calibration Loads and Load Residuals
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Volden, T.
2014-01-01
An algorithm was developed for the assessment of strain-gage balance calibration data that makes it possible to systematically investigate potential sources of unexpected high correlations between calibration load residuals and applied calibration loads. The algorithm investigates correlations on a load series by load series basis. The linear correlation coefficient is used to quantify the correlations. It is computed for all possible pairs of calibration load residuals and applied calibration loads that can be constructed for the given balance calibration data set. An unexpected high correlation between a load residual and a load is detected if three conditions are met: (i) the absolute value of the correlation coefficient of a residual/load pair exceeds 0.95; (ii) the maximum of the absolute values of the residuals of a load series exceeds 0.25 % of the load capacity; (iii) the load component of the load series is intentionally applied. Data from a baseline calibration of a six-component force balance is used to illustrate the application of the detection algorithm to a real-world data set. This analysis also showed that the detection algorithm can identify load alignment errors as long as repeat load series are contained in the balance calibration data set that do not suffer from load alignment problems.
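The three detection conditions enumerated above map directly to code. A minimal sketch follows; the function name, the dict-based data layout, and the component names in the test are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def flag_unexpected_correlations(loads, residuals, capacities, applied,
                                 r_thresh=0.95, res_thresh=0.0025):
    """Flag residual/load pairs within one load series.

    loads, residuals: dicts mapping component name -> 1-D array for a
    single load series; capacities: component -> load capacity;
    applied: set of intentionally applied load components.
    (Illustrative layout, not the paper's data format.)"""
    flags = []
    for rc, res in residuals.items():
        # condition (ii): max |residual| must exceed 0.25 % of capacity
        if np.max(np.abs(res)) <= res_thresh * capacities[rc]:
            continue
        for lc, load in loads.items():
            # condition (iii): only intentionally applied load components
            if lc not in applied:
                continue
            # condition (i): |linear correlation coefficient| > 0.95
            r = np.corrcoef(res, load)[0, 1]
            if abs(r) > r_thresh:
                flags.append((rc, lc, r))
    return flags
```

Each flagged triple names the residual component, the suspect load component, and their correlation coefficient, which is how an analyst would trace a load alignment error back to its source series.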
Dynamics of load balancing with constraints
NASA Astrophysics Data System (ADS)
Suzuki, Hideyuki
2014-10-01
In this paper, we consider a centralized strategy for scheduling charging patterns of electrical vehicles and other batteries in power grids. We formulate it as a load balancing problem with constraints, which tries to distribute the charging loads both spatially and temporally. We show that a variant of herding system can be applied to load balancing.
NASA Technical Reports Server (NTRS)
Hailperin, Max
1993-01-01
This thesis provides design and analysis of techniques for global load balancing on ensemble architectures running soft-real-time object-oriented applications with statistically periodic loads. It focuses on estimating the instantaneous average load over all the processing elements. The major contribution is the use of explicit stochastic process models for both the loading and the averaging itself. These models are exploited via statistical time-series analysis and Bayesian inference to provide improved average load estimates, and thus to facilitate global load balancing. This thesis explains the distributed algorithms used and provides some optimality results. It also describes the algorithms' implementation and gives performance results from simulation. These results show that our techniques allow more accurate estimation of the global system loading, resulting in fewer object migrations than local methods. Our method is shown to provide superior performance, relative not only to static load-balancing schemes but also to many adaptive methods.
A novel strategy for load balancing of distributed medical applications.
Logeswaran, Rajasvaran; Chen, Li-Choo
2012-04-01
Current trends in medicine, specifically in the electronic handling of medical applications, ranging from digital imaging, paperless hospital administration and electronic medical records, telemedicine, to computer-aided diagnosis, create a burden on the network. Distributed Service Architectures, such as the Intelligent Network (IN), Telecommunication Information Networking Architecture (TINA) and Open Service Access (OSA), are able to meet this new challenge. Distribution enables computational tasks to be spread among multiple processors; hence, performance is an important issue. This paper proposes a novel approach to load balancing, the Random Sender Initiated Algorithm, for distribution of tasks among several nodes sharing the same computational object (CO) instances in Distributed Service Architectures. Simulations illustrate that the proposed algorithm produces better network performance than the benchmark load balancing algorithms, the Random Node Selection Algorithm and the Shortest Queue Algorithm, especially under medium and heavily loaded conditions. PMID:20703702
Improving load balance with flexibly assignable tasks
Pinar, Ali; Hendrickson, Bruce
2003-09-09
In many applications of parallel computing, distribution of the data unambiguously implies distribution of work among processors. But there are exceptions where some tasks can be assigned to one of several processors without altering the total volume of communication. In this paper, we study the problem of exploiting this flexibility in assignment of tasks to improve load balance. We first model the problem in terms of network flow and use combinatorial techniques for its solution. Our parametric search algorithms use maximum flow algorithms for probing on a candidate optimal solution value. We describe two algorithms to solve the assignment problem with log W_T and |P| probe calls, where W_T and |P|, respectively, denote the total workload and number of processors. We also define augmenting paths and cuts for this problem, and show that any algorithm based on augmenting paths can be used to find an optimal solution for the task assignment problem. We then consider a continuous version of the problem, and formulate it as a linearly constrained optimization problem, i.e., min ||Ax||_∞, s.t. Bx = d. To avoid solving an intractable ∞-norm optimization problem, we show that in this case minimizing the 2-norm is sufficient to minimize the ∞-norm, which reduces the problem to the well-studied linearly-constrained least squares problem. The continuous version of the problem has the advantage of being easily amenable to parallelization. Our experiments with molecular dynamics and overlapped domain decomposition applications proved the effectiveness of our methods with significant improvements in load balance. We also discuss how our techniques can be enhanced for heterogeneous systems.
Load Balancing in Structured P2P Networks
NASA Astrophysics Data System (ADS)
Zhu, Yingwu
In this chapter we start by addressing the importance and necessity of load balancing in structured P2P networks, due to three main reasons. First, structured P2P networks assume uniform peer capacities while peer capacities are heterogeneous in deployed P2P networks. Second, resorting to pseudo-uniformity of the hash function used to generate node IDs and data item keys leads to imbalanced overlay address space and item distribution. Lastly, placement of data items cannot be randomized in some applications (e.g., range searching). We then present an overview of load aggregation and dissemination techniques that are required by many load balancing algorithms. Two techniques are discussed including tree structure-based approach and gossip-based approach. They make different tradeoffs between estimate/aggregate accuracy and failure resilience. To address the issue of load imbalance, three main solutions are described: virtual server-based approach, power of two choices, and address-space and item balancing. While different in their designs, they all aim to improve balance on the address space and data item distribution. As a case study, the chapter discusses a virtual server-based load balancing algorithm that strives to ensure fair load distribution among nodes and minimize load balancing cost in bandwidth. Finally, the chapter concludes with future research and a summary.
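The "power of two choices" technique surveyed above is simple enough to sketch: each item samples d candidate nodes and lands on the least loaded. The function name is hypothetical and this toy ignores DHT routing entirely; it only illustrates why two choices tighten the load distribution:

```python
import random

def two_choice_assign(num_nodes, num_items, d=2, seed=0):
    """Power-of-d-choices placement: for each item, sample d candidate
    nodes uniformly at random and place the item on the least-loaded
    of them. Returns the per-node load vector."""
    rng = random.Random(seed)
    load = [0] * num_nodes
    for _ in range(num_items):
        candidates = rng.sample(range(num_nodes), d)
        target = min(candidates, key=lambda n: load[n])
        load[target] += 1
    return load
```

With d = 1 this degenerates to uniform hashing and the maximum load exceeds the mean by a Theta(log n / log log n) factor; with d = 2 the excess drops to O(log log n), which is the balancing effect the chapter exploits.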
Dynamic Load Balancing for Computational Plasticity on Parallel Computers
NASA Technical Reports Server (NTRS)
Pramono, Eddy; Simon, Horst
1994-01-01
The simulation of computational plasticity on a complex structure remains a formidable computational task, especially when a highly nonlinear, complex material model is used. It appears that the computational requirements for such a problem can only be satisfied by massively parallel architectures. In order to effectively harness the tremendous computational power provided by such architectures, it is imperative to investigate and study the algorithmic and implementation issues pertaining to dynamic load balancing for computational plasticity on highly parallel, distributed-memory, multiple-instruction, multiple-data computers. This paper measures the effectiveness of the algorithms developed in handling the dynamic load balancing.
14 CFR 23.421 - Balancing loads.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Balancing loads. 23.421 Section 23.421 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: NORMAL, UTILITY, ACROBATIC, AND COMMUTER CATEGORY AIRPLANES Structure Horizontal Stabilizing and Balancing Surfaces § 23.421...
NASA Technical Reports Server (NTRS)
Hailperin, M.
1993-01-01
This thesis provides design and analysis of techniques for global load balancing on ensemble architectures running soft-real-time object-oriented applications with statistically periodic loads. It focuses on estimating the instantaneous average load over all the processing elements. The major contribution is the use of explicit stochastic process models for both the loading and the averaging itself. These models are exploited via statistical time-series analysis and Bayesian inference to provide improved average load estimates, and thus to facilitate global load balancing. This thesis explains the distributed algorithms used and provides some optimality results. It also describes the algorithms' implementation and gives performance results from simulation. These results show that the authors' techniques allow more accurate estimation of the global system loading, resulting in fewer object migrations than local methods. The authors' method is shown to provide superior performance, relative not only to static load-balancing schemes but also to many adaptive load-balancing methods. Results from a preliminary analysis of another system and from simulation with a synthetic load provide some evidence of more general applicability.
An Evaluation of the HVAC Load Potential for Providing Load Balancing Service
Lu, Ning
2012-09-30
This paper investigates the potential of providing aggregated intra-hour load balancing services using heating, ventilating, and air-conditioning (HVAC) systems. A direct-load control algorithm is presented. A temperature-priority-list method is used to dispatch the HVAC loads optimally to maintain consumer-desired indoor temperatures and load diversity. Realistic intra-hour load balancing signals were used to evaluate the operational characteristics of the HVAC load under different outdoor temperature profiles and different indoor temperature settings. The number of HVAC units needed is also investigated. Modeling results suggest that the number of HVACs needed to provide a ±1-MW load balancing service 24 hours a day varies significantly with baseline settings, high and low temperature settings, and the outdoor temperatures. The results demonstrate that the intra-hour load balancing service provided by HVAC loads meets the performance requirements and can become a major source of revenue for load-serving entities where the smart grid infrastructure enables direct load control over the HVAC loads.
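The temperature-priority-list idea can be sketched for the cooling case: rank units by how close the indoor temperature is to the upper comfort limit and switch on the most urgent ones. This is a simplified illustration only; the paper's controller additionally tracks the balancing signal, baseline, and load diversity, and the function name and tuple layout are assumptions:

```python
def dispatch_hvac(units, n_on):
    """Temperature-priority-list dispatch (cooling case).

    units: list of (unit_id, indoor_temp, upper_limit) tuples.
    Units whose indoor temperature is nearest the upper comfort
    limit have the highest priority; the n_on most urgent units
    are switched on. Returns the set of unit ids turned on."""
    ranked = sorted(units, key=lambda u: u[2] - u[1])  # smallest margin first
    return {u[0] for u in ranked[:n_on]}
```

In a full controller, n_on would be recomputed every dispatch interval from the intra-hour balancing signal so the aggregate HVAC power tracks the requested ±1 MW band.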
A comparative analysis of static and dynamic load balancing strategies
NASA Technical Reports Server (NTRS)
Iqbal, M. Ashraf; Bokhari, Shahid H.; Saltz, Joel H.
1986-01-01
The problem of uniformly distributing the load of a parallel program over a multiprocessor system was considered. A program was analyzed whose structure permits the computation of the optimal static solution. Then four strategies for load balancing were described and their performance compared. The strategies are: (1) the optimal static assignment algorithm which is guaranteed to yield the best static solution, (2) the static binary dissection method which is very fast but suboptimal, (3) the greedy algorithm, a static fully polynomial time approximation scheme, which estimates the optimal solution to arbitrary accuracy, and (4) the predictive dynamic load balancing heuristic which uses information on the precedence relationships within the program and outperforms any of the static methods. It is also shown that the overhead incurred by the dynamic heuristic is reduced considerably if it is started off with a static assignment provided by either of the three strategies.
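Strategy (3), the greedy algorithm, is described only at a high level. A generic list-scheduling sketch of the greedy idea (not the paper's structure-aware approximation scheme, which exploits the program's precedence structure) might look like:

```python
import heapq

def greedy_assign(task_costs, num_procs):
    """Greedy list scheduling: consider tasks in decreasing cost order
    and assign each to the currently least-loaded processor.
    Returns (task -> processor map, resulting makespan)."""
    heap = [(0.0, p) for p in range(num_procs)]  # (current load, processor)
    heapq.heapify(heap)
    assignment = {}
    for task, cost in sorted(enumerate(task_costs), key=lambda t: -t[1]):
        load, p = heapq.heappop(heap)   # least-loaded processor
        assignment[task] = p
        heapq.heappush(heap, (load + cost, p))
    makespan = max(load for load, _ in heap)
    return assignment, makespan
```

This classic heuristic (LPT scheduling) is within 4/3 of the optimal makespan, which matches the abstract's description of a scheme that approximates the optimal static solution.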
Performance tradeoffs in static and dynamic load balancing strategies
NASA Technical Reports Server (NTRS)
Iqbal, M. A.; Saltz, J. H.; Bokhari, S. H.
1986-01-01
The problem of uniformly distributing the load of a parallel program over a multiprocessor system was considered. A program was analyzed whose structure permits the computation of the optimal static solution. Then four strategies for load balancing were described and their performance compared. The strategies are: (1) the optimal static assignment algorithm which is guaranteed to yield the best static solution, (2) the static binary dissection method which is very fast but sub-optimal, (3) the greedy algorithm, a static fully polynomial time approximation scheme, which estimates the optimal solution to arbitrary accuracy, and (4) the predictive dynamic load balancing heuristic which uses information on the precedence relationships within the program and outperforms any of the static methods. It is also shown that the overhead incurred by the dynamic heuristic is reduced considerably if it is started off with a static assignment provided by either of the other three strategies.
An analysis on the load balancing strategies in wavelength-routed optical networks
NASA Astrophysics Data System (ADS)
Liu, Kai; Fu, Minglei; Le, Zichun
2008-11-01
Routing and wavelength assignment (RWA) is one of the key issues in wavelength-routed optical networks. Although some RWA algorithms perform well enough to meet the requirements of certain networks, they usually neglect the performance of the network as a whole, especially its load balancing. This is likely to leave some links bearing excessive lightpaths and traffic load while other links sit idle. In this paper, the load distribution vector (LDV) is first introduced to describe the link loads of the network. Then, by minimizing the LDV of the network, we try to improve the load balancing of the whole network. Based on this, a heuristic load balancing (HLB) strategy is presented. Moreover, a novel RWA algorithm adopting the heuristic load balancing strategy is developed, along with two other RWA algorithms adopting other load balancing strategies. Finally, the three RWA algorithms with different load balancing strategies are simulated for comparison on both regular and irregular network topologies. The simulation results show that key performance parameters such as the average variance of link loads, the maximum link load, and the number of established lightpaths are improved by our novel RWA algorithm with the heuristic load balancing strategy.
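A minimal stand-in for the load-balancing route choice described above: among precomputed candidate routes, accept the one whose admission keeps the heaviest link lightest. This sketch only minimizes the resulting maximum link load, whereas the paper minimizes a full load distribution vector; the function and data layout are assumptions:

```python
def pick_balanced_route(candidate_routes, link_load):
    """Pick the candidate route (a list of link ids) whose acceptance
    results in the smallest maximum link load across the network.
    link_load maps link id -> current number of lightpaths on it."""
    def resulting_max(route):
        links = set(link_load) | set(route)
        return max(link_load.get(l, 0) + (1 if l in route else 0)
                   for l in links)
    return min(candidate_routes, key=resulting_max)
```

Under this rule, a longer route over lightly loaded links beats a shorter route through a congested link, which is exactly the trade-off a load-balancing RWA strategy makes against shortest-path RWA.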
Multithreaded Model for Dynamic Load Balancing Parallel Adaptive PDE Computations
NASA Technical Reports Server (NTRS)
Chrisochoides, Nikos
1995-01-01
We present a multithreaded model for the dynamic load-balancing of numerical, adaptive computations required for the solution of Partial Differential Equations (PDEs) on multiprocessors. Multithreading is used as a means of exploiting concurrency at the processor level in order to tolerate the synchronization costs inherent to traditional (non-threaded) parallel adaptive PDE solvers. Our preliminary analysis of parallel, adaptive PDE solvers indicates that multithreading can be used as a mechanism to mask the overheads required for the dynamic balancing of processor workloads with computations required for the actual numerical solution of the PDEs. Multithreading can also simplify the implementation of dynamic load-balancing algorithms, a task that is very difficult for traditional data-parallel adaptive PDE computations. Unfortunately, multithreading does not always reduce program complexity, often makes code reuse difficult, and increases software complexity.
Scalable load balancing for massively parallel distributed Monte Carlo particle transport
O'Brien, M. J.; Brantley, P. S.; Joy, K. I.
2013-07-01
In order to run computer simulations efficiently on massively parallel computers with hundreds of thousands or millions of processors, care must be taken that the calculation is load balanced across the processors. Examining the workload of every processor leads to an unscalable algorithm, with run time at least as large as O(N), where N is the number of processors. We present a scalable load balancing algorithm, with run time O(log(N)), that involves iterated processor-pair-wise balancing steps, ultimately leading to a globally balanced workload. We demonstrate scalability of the algorithm up to 2 million processors on the Sequoia supercomputer at Lawrence Livermore National Laboratory.
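The O(log N) structure of iterated pair-wise balancing can be illustrated with dimension exchange on a hypercube of N = 2^k processors: in round j, processor i pairs with i XOR 2^j and the pair splits its combined work evenly. This is only an illustration of the communication pattern, not the authors' Monte Carlo implementation, and it assumes integer work units:

```python
def pairwise_balance(loads):
    """Hypercube dimension exchange: after log2(N) rounds of pair-wise
    even splits, every processor holds the global average (up to
    integer rounding). loads is a list of length N = 2^k."""
    n = len(loads)
    assert n & (n - 1) == 0, "sketch assumes a power-of-two processor count"
    loads = list(loads)
    j = 1
    while j < n:                      # one round per hypercube dimension
        for i in range(n):
            partner = i ^ j
            if partner > i:           # each pair balances once per round
                total = loads[i] + loads[partner]
                loads[i] = total // 2
                loads[partner] = total - total // 2
        j <<= 1
    return loads
```

Each processor exchanges with only log2(N) partners in total, which is the key to avoiding the O(N) cost of inspecting every workload.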
Valiant load-balanced robust routing under hose model for WDM mesh networks
NASA Astrophysics Data System (ADS)
Zhang, Xiaoning; Li, Lemin; Wang, Sheng
2006-09-01
In this paper, we propose a Valiant Load-Balanced robust routing scheme for WDM mesh networks under the model of polyhedral uncertainty (i.e., the hose model), and the proposed routing scheme is implemented with a traffic grooming approach. Our objective is to maximize the hose-model throughput. A mathematical formulation of Valiant Load-Balanced robust routing is presented and three fast heuristic algorithms are also proposed. When applying the Valiant Load-Balanced robust routing scheme to WDM mesh networks, a novel traffic-grooming algorithm called MHF (minimizing hop first) is proposed. We compare the three heuristic algorithms with the VPN tree under the hose model. Finally, we demonstrate in the simulation results that MHF with the Valiant Load-Balanced robust routing scheme outperforms the traditional traffic-grooming algorithm in terms of throughput for uniform/non-uniform traffic matrices under the hose model.
Scalable load-balance measurement for SPMD codes
Gamblin, T; de Supinski, B R; Schulz, M; Fowler, R; Reed, D
2008-08-05
Good load balance is crucial on very large parallel systems, but the most sophisticated algorithms introduce dynamic imbalances through adaptation in domain decomposition or use of adaptive solvers. To observe and diagnose imbalance, developers need system-wide, temporally-ordered measurements from full-scale runs. This potentially requires data collection from multiple code regions on all processors over the entire execution. Doing this instrumentation naively can, in combination with the application itself, exceed available I/O bandwidth and storage capacity, and can induce severe behavioral perturbations. We present and evaluate a novel technique for scalable, low-error load balance measurement. This uses a parallel wavelet transform and other parallel encoding methods. We show that our technique collects and reconstructs system-wide measurements with low error. Compression time scales sublinearly with system size and data volume is several orders of magnitude smaller than the raw data. The overhead is low enough for online use in a production environment.
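The core of the measurement technique above is a wavelet transform followed by coefficient truncation. A toy single-node analogue, using the orthonormal Haar transform on a length-2^k load trace and keeping only the largest coefficients, conveys why the data volume shrinks by orders of magnitude with low error; the real system runs the transform distributed across processes, and the function name is an assumption:

```python
import numpy as np

def haar_compress(signal, keep):
    """Full Haar wavelet transform of a length-2^k signal (in place on
    a copy), then zero all but the `keep` largest-magnitude
    coefficients. The transform is orthonormal, so untruncated
    coefficients preserve the signal's 2-norm."""
    coeffs = np.asarray(signal, dtype=float).copy()
    n = len(coeffs)
    while n > 1:
        half = n // 2
        a = (coeffs[0:n:2] + coeffs[1:n:2]) / np.sqrt(2)  # pair averages
        d = (coeffs[0:n:2] - coeffs[1:n:2]) / np.sqrt(2)  # pair details
        coeffs[:half], coeffs[half:n] = a, d
        n = half
    # truncate: keep only the `keep` largest-magnitude coefficients
    idx = np.argsort(np.abs(coeffs))[:-keep]
    coeffs[idx] = 0.0
    return coeffs
```

Smooth load-balance traces concentrate their energy in a few coarse coefficients, so aggressive truncation reconstructs the system-wide picture with small error, which is the effect the paper measures at scale.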
Parallel tetrahedral mesh adaptation with dynamic load balancing
Oliker, Leonid; Biswas, Rupak; Gabow, Harold N.
2000-06-28
The ability to dynamically adapt an unstructured grid is a powerful tool for efficiently solving computational problems with evolving physical features. In this paper, we report on our experience parallelizing an edge-based adaptation scheme, called 3D-TAG, using message passing. Results show excellent speedup when a realistic helicopter rotor mesh is randomly refined. However, performance deteriorates when the mesh is refined using a solution-based error indicator since mesh adaptation for practical problems occurs in a localized region, creating a severe load imbalance. To address this problem, we have developed PLUM, a global dynamic load balancing framework for adaptive numerical computations. Even though PLUM primarily balances processor workloads for the solution phase, it reduces the load imbalance problem within mesh adaptation by repartitioning the mesh after targeting edges for refinement but before the actual subdivision. This dramatically improves the performance of parallel 3D-TAG since refinement occurs in a more load balanced fashion. We also present optimal and heuristic algorithms that, when applied to the default mapping of a parallel repartitioner, significantly reduce the data redistribution overhead. Finally, portability is examined by comparing performance on three state-of-the-art parallel machines.
Parallel Tetrahedral Mesh Adaptation with Dynamic Load Balancing
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Biswas, Rupak; Gabow, Harold N.
1999-01-01
The ability to dynamically adapt an unstructured grid is a powerful tool for efficiently solving computational problems with evolving physical features. In this paper, we report on our experience parallelizing an edge-based adaptation scheme, called 3D-TAG, using message passing. Results show excellent speedup when a realistic helicopter rotor mesh is randomly refined. However, performance deteriorates when the mesh is refined using a solution-based error indicator since mesh adaptation for practical problems occurs in a localized region, creating a severe load imbalance. To address this problem, we have developed PLUM, a global dynamic load balancing framework for adaptive numerical computations. Even though PLUM primarily balances processor workloads for the solution phase, it reduces the load imbalance problem within mesh adaptation by repartitioning the mesh after targeting edges for refinement but before the actual subdivision. This dramatically improves the performance of parallel 3D-TAG since refinement occurs in a more load balanced fashion. We also present optimal and heuristic algorithms that, when applied to the default mapping of a parallel repartitioner, significantly reduce the data redistribution overhead. Finally, portability is examined by comparing performance on three state-of-the-art parallel machines.
Dynamic load balancing for petascale quantum Monte Carlo applications: The Alias method
Sudheer, C. D.; Krishnan, S.; Srinivasan, A.; Kent, P. R. C.
2013-02-01
Diffusion Monte Carlo is the most accurate widely used Quantum Monte Carlo method for the electronic structure of materials, but it requires frequent load balancing or population redistribution steps to maintain efficiency and avoid accumulation of systematic errors on parallel machines. The load balancing step can be a significant factor affecting performance, and will become more important as the number of processing elements increases. We propose a new dynamic load balancing algorithm, the Alias Method, and evaluate it theoretically and empirically. An important feature of the new algorithm is that the load can be perfectly balanced with each process receiving at most one message. It is also optimal in the maximum size of messages received by any process. We also optimize its implementation to reduce network contention, a process facilitated by the low messaging requirement of the algorithm. Empirical results on the petaflop Cray XT Jaguar supercomputer at ORNL show up to 30% improvement in performance on 120,000 cores. The load balancing algorithm may be straightforwardly implemented in existing codes. The algorithm may also be employed by any method with many near identical computational tasks that require load balancing.
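The defining property above, that every process receives at most one message, can be illustrated with an alias-table-style pairing of underloaded and overloaded processes: each underloaded process is topped up to the average by exactly one donor, and a donor that falls below the average is itself later topped up exactly once. This sketch assumes a centrally known load vector (the real algorithm works with distributed walker counts) and the function name is hypothetical:

```python
def alias_transfers(loads):
    """Return transfers (src, dst, amount) that equalize all loads to
    the average, with every process appearing as a destination at most
    once: underloaded processes are finalized one message each, and a
    donor can only become a receiver after it drops below average."""
    avg = sum(loads) / len(loads)
    cur = list(loads)
    small = [i for i, l in enumerate(cur) if l < avg]
    large = [i for i, l in enumerate(cur) if l > avg]
    transfers = []
    while small and large:
        s = small.pop()
        l = large[-1]
        amt = avg - cur[s]          # exactly fill the deficit of s
        transfers.append((l, s, amt))
        cur[s] = avg                # s is finalized: received its one message
        cur[l] -= amt
        if cur[l] < avg:            # donor became underloaded: may receive once
            large.pop()
            small.append(l)
        elif cur[l] == avg:
            large.pop()
    return transfers
```

Because each deficit is filled by a single sender, the receive side needs no polling or message counting, which is what makes the scheme easy to bolt onto an existing code.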
Load balancing fictions, falsehoods and fallacies
HENDRICKSON,BRUCE A.
2000-05-30
Effective use of a parallel computer requires that a calculation be carefully divided among the processors. This load balancing problem appears in many guises and has been a fervent area of research for the past decade or more. Although great progress has been made, and useful software tools developed, a number of challenges remain. It is the conviction of the author that these challenges will be easier to address if programmers first come to terms with some significant shortcomings in their current perspectives. This paper tries to identify several areas in which the prevailing point of view is either mistaken or insufficient. The goal is to motivate new ideas and directions for this important field.
Performance Analysis and Portability of the PLUM Load Balancing System
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Biswas, Rupak; Gabow, Harold N.
1998-01-01
The ability to dynamically adapt an unstructured mesh is a powerful tool for solving computational problems with evolving physical features; however, an efficient parallel implementation is rather difficult. To address this problem, we have developed PLUM, an automatic portable framework for performing adaptive numerical computations in a message-passing environment. PLUM requires that all data be globally redistributed after each mesh adaption to achieve load balance. We present an algorithm for minimizing this remapping overhead by guaranteeing an optimal processor reassignment. We also show that the data redistribution cost can be significantly reduced by applying our heuristic processor reassignment algorithm to the default mapping of the parallel partitioner. Portability is examined by comparing performance on an SP2, an Origin2000, and a T3E. Results show that PLUM can be successfully ported to different platforms without any code modifications.
Evaluating Zoltan for Static Load Balancing on BlueGene Architectures
Kumfert, G
2007-11-15
The purpose of this TechBase was to evaluate the Zoltan load-balancing library from Sandia National Laboratories as a possible replacement for ParMetis, which had been the load balancer of choice for nearly a decade but does not scale to the full 64,000 processors of BlueGene/L. This evaluation was successful in producing a clear result, but the result was unfortunately negative. Although Zoltan presents a collection of load-balancing algorithms, none were able to meet or exceed the combined scalability and quality of ParMetis on representative datasets.
Dynamic load balancing for petascale quantum Monte Carlo applications: The Alias method
NASA Astrophysics Data System (ADS)
Sudheer, C. D.; Krishnan, S.; Srinivasan, A.; Kent, P. R. C.
2013-02-01
Diffusion Monte Carlo is a highly accurate Quantum Monte Carlo method for electronic structure calculations of materials, but it requires frequent load balancing or population redistribution steps to maintain efficiency on parallel machines. This step can be a significant factor affecting performance, and will become more important as the number of processing elements increases. We propose a new dynamic load balancing algorithm, the Alias Method, and evaluate it theoretically and empirically. An important feature of the new algorithm is that the load can be perfectly balanced with each process receiving at most one message. It is also optimal in the maximum size of messages received by any process. We also optimize its implementation to reduce network contention, a process facilitated by the low messaging requirement of the algorithm: a simple renumbering of the MPI ranks based on proximity and a space filling curve significantly improves the MPI Allgather performance. Empirical results on the petaflop Cray XT Jaguar supercomputer at ORNL show up to 30% improvement in performance on 120,000 cores. The load balancing algorithm may be straightforwardly implemented in existing codes. The algorithm may also be employed by any method with many near identical computational tasks that require load balancing.
Parallel Processing of Adaptive Meshes with Load Balancing
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Biegel, Bryan (Technical Monitor)
2001-01-01
Many scientific applications involve grids that lack a uniform underlying structure. These applications are often also dynamic in nature in that the grid structure significantly changes between successive phases of execution. In parallel computing environments, mesh adaptation of unstructured grids through selective refinement/coarsening has proven to be an effective approach. However, achieving load balance while minimizing interprocessor communication and redistribution costs is a difficult problem. Traditional dynamic load balancers are mostly inadequate because they lack a global view of system loads across processors. In this paper, we propose a novel and general-purpose load balancer that utilizes symmetric broadcast networks (SBN) as the underlying communication topology, and compare its performance with a successful global load balancing environment, called PLUM, specifically created to handle adaptive unstructured applications. Our experimental results on an IBM SP2 demonstrate that the SBN-based load balancer achieves lower redistribution costs than those under PLUM by overlapping processing and data migration.
Exploiting Flexibly Assignable Work to Improve Load Balance
Pinar, Ali; Hendrickson, Bruce
2002-12-09
In many applications of parallel computing, distribution of the data unambiguously implies distribution of work among processors. But there are exceptions where some tasks can be assigned to one of several processors without altering the total volume of communication. In this paper, we study the problem of exploiting this flexibility in the assignment of tasks to improve load balance. We first model the problem in terms of network flow and use combinatorial techniques for its solution. Our parametric search algorithms use maximum-flow algorithms for probing on a candidate optimal solution value. We describe two algorithms to solve the assignment problem with log W_T and |P| probe calls, where W_T and |P| respectively denote the total workload and the number of processors. We also define augmenting paths and cuts for this problem, and show that any algorithm based on augmenting paths can be used to find an optimal solution for the task assignment problem. We then consider a continuous version of the problem, and formulate it as a linearly constrained optimization problem: min ||Ax||_∞, s.t. Bx = d. To avoid solving an intractable ∞-norm optimization problem, we show that in this case minimizing the 2-norm is sufficient to minimize the ∞-norm, which reduces the problem to the well-studied linearly constrained least-squares problem. The continuous version of the problem has the advantage of being easily amenable to parallelization.
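The parametric-search strategy (bisect on a candidate bottleneck value B and call a feasibility probe for each candidate) can be sketched on a simplified model. Here flexible tasks live between adjacent processors in a chain, so a greedy left-to-right probe suffices in place of a full maximum-flow computation; the model and function names are illustrative, not the paper's formulation.

```python
def probe(fixed, flex, B):
    """Feasibility probe for a chain of processors: flex[i] is work
    that may go to processor i or i+1. Returns True if every
    processor can be kept at or below the bound B."""
    pushed = 0.0  # work pushed onto the current processor from the left
    for i, f in enumerate(fixed):
        if f + pushed > B:
            return False
        room = B - f - pushed
        w = flex[i] if i < len(flex) else 0.0
        pushed = max(0.0, w - room)  # overflow moves right
    return pushed == 0.0

def min_bottleneck(fixed, flex, iters=60):
    """Bisection on the candidate optimal value, probing feasibility."""
    lo, hi = max(fixed), sum(fixed) + sum(flex)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if probe(fixed, flex, mid):
            hi = mid
        else:
            lo = mid
    return hi
```

Because feasibility is monotone in B, the bisection converges to the minimum achievable bottleneck load.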
PLUM: Parallel Load Balancing for Unstructured Adaptive Meshes
NASA Technical Reports Server (NTRS)
Oliker, Leonid
1998-01-01
Dynamic mesh adaption on unstructured grids is a powerful tool for computing large-scale problems that require grid modifications to efficiently resolve solution features. Unfortunately, an efficient parallel implementation is difficult to achieve, primarily due to the load imbalance created by the dynamically-changing nonuniform grid. To address this problem, we have developed PLUM, an automatic portable framework for performing adaptive large-scale numerical computations in a message-passing environment. First, we present an efficient parallel implementation of a tetrahedral mesh adaption scheme. Extremely promising parallel performance is achieved for various refinement and coarsening strategies on a realistic-sized domain. Next we describe PLUM, a novel method for dynamically balancing the processor workloads in adaptive grid computations. This research includes interfacing the parallel mesh adaption procedure based on actual flow solutions to a data remapping module, and incorporating an efficient parallel mesh repartitioner. A significant runtime improvement is achieved by observing that data movement for a refinement step should be performed after the edge-marking phase but before the actual subdivision. We also present optimal and heuristic remapping cost metrics that can accurately predict the total overhead for data redistribution. Several experiments are performed to verify the effectiveness of PLUM on sequences of dynamically adapted unstructured grids. Portability is demonstrated by presenting results on the two vastly different architectures of the SP2 and the Origin2000. Additionally, we evaluate the performance of five state-of-the-art partitioning algorithms that can be used within PLUM. It is shown that for certain classes of unsteady adaption, globally repartitioning the computational mesh produces higher quality results than diffusive repartitioning schemes. We also demonstrate that a coarse starting mesh produces high quality load balancing, at
Neural Network Algorithm for Particle Loading
J. L. V. Lewandowski
2003-04-25
An artificial neural network algorithm for continuous minimization is developed and applied to the case of numerical particle loading. It is shown that higher-order moments of the probability distribution function can be efficiently renormalized using this technique. A general neural network for the renormalization of an arbitrary number of moments is given.
Dynamic Load-Balancing for Distributed Heterogeneous Computing of Parallel CFD Problems
NASA Technical Reports Server (NTRS)
Ecer, A.; Chien, Y. P.; Boenisch, T.; Akay, H. U.
2000-01-01
The developed methodology is aimed at improving the efficiency of executing block-structured algorithms on parallel, distributed, heterogeneous computers. The basic approach of these algorithms is to divide the flow domain into many subdomains called blocks, and solve the governing equations over these blocks. The dynamic load balancing problem is defined as the efficient distribution of the blocks among the available processors over a period of several hours of computations. In environments with computers of different architectures, operating systems, CPU speeds, memory sizes, loads, and network speeds, balancing the loads and managing the communication between processors becomes crucial. Load balancing software tools for mutually dependent parallel processes have been created to efficiently utilize an advanced computation environment and algorithms. These tools are dynamic in nature because of the changes in the computer environment during execution time. More recently, these tools were extended to a second operating system: NT. In this paper, the problems associated with this application will be discussed. Also, the developed algorithms were combined with the load-sharing capability of LSF to efficiently utilize workstation clusters for parallel computing. Finally, results will be presented on running a NASA-based code, ADPAC, to demonstrate the developed tools for dynamic load balancing.
Dynamic Load Balancing for Adaptive Meshes using Symmetric Broadcast Networks
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Saini, Subhash (Technical Monitor)
1998-01-01
Many scientific applications involve grids that lack a uniform underlying structure. These applications are often dynamic in the sense that the grid structure significantly changes between successive phases of execution. In parallel computing environments, mesh adaptation of grids through selective refinement/coarsening has proven to be an effective approach. However, achieving load balance while minimizing inter-processor communication and redistribution costs is a difficult problem. Traditional dynamic load balancers are mostly inadequate because they lack a global view across processors. In this paper, we compare a novel load balancer that utilizes symmetric broadcast networks (SBN) to a successful global load balancing environment (PLUM) created to handle adaptive unstructured applications. Our experimental results on the IBM SP2 demonstrate that the performance of the proposed SBN load balancer is comparable to that achieved under PLUM.
High-Performance Kinetic Plasma Simulations with GPUs and load balancing
NASA Astrophysics Data System (ADS)
Germaschewski, Kai; Ahmadi, Narges; Abbott, Stephen; Lin, Liwei; Wang, Liang; Bhattacharjee, Amitava; Fox, Will
2014-10-01
We will describe the Plasma Simulation Code (PSC), a modern particle-in-cell code with GPU support and dynamic load balancing capabilities. For 2-d problems, we achieve a speed-up of up to 6 × on the Cray XK7 ``Titan'' using its GPUs over the well-known VPIC code, which has been optimized for conventional CPUs with SIMD support. Our load-balancing algorithm employs a space-filling Hilbert-Peano curve to maintain locality and has been shown to keep the load balanced within approximately 10% in production runs which otherwise slow down up to 5 × with only static load balancing. PSC is based on the
Graph-balancing algorithms for average consensus over directed networks
NASA Astrophysics Data System (ADS)
Fan, Yuan; Han, Runzhe; Qiu, Jianbin
2016-01-01
Consensus strategies find extensive applications in the coordination of robot groups and the decision-making of agents. Since balanced graphs play an important role in the average consensus problem and many other coordination problems for directed communication networks, this work explores the conditions and algorithms for the digraph balancing problem. Based on an analysis of graph cycles, we prove that a digraph can be balanced if and only if the null space of its incidence matrix contains positive vectors. Based on this result and the corresponding analysis, two weight-balancing algorithms are proposed, along with the conditions for obtaining a unique balanced solution and a set of analytical results on weight balance problems. We then point out the relationship between the weight balance problem and the features of the corresponding underlying Markov chain. Finally, two numerical examples are presented to verify the proposed algorithms.
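The balance condition in the theorem above is easy to check directly: strictly positive edge weights balance the digraph exactly when they lie in the null space of the incidence matrix, i.e. weighted in-flow equals weighted out-flow at every node. A small self-contained check of that condition (an illustration, not the paper's balancing algorithm):

```python
def is_balancing(n, edges, weights):
    """True if the strictly positive edge weights lie in the null
    space of the digraph's node-arc incidence matrix: at every node,
    weighted out-flow minus weighted in-flow is zero."""
    if any(w <= 0 for w in weights):
        return False
    net = [0.0] * n  # out minus in, per node
    for (u, v), w in zip(edges, weights):
        net[u] += w  # edge leaves u
        net[v] -= w  # edge enters v
    return all(abs(x) < 1e-9 for x in net)
```

For a directed 3-cycle, the uniform weight vector [1, 1, 1] is a positive null vector, so the cycle is balanceable; a single isolated arc has none.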
A complete algorithm for fixture loading
Yu, K.; Goldberg, K.Y.
1998-11-01
A fixture is a device for locating and holding parts. Since the initial position and orientation of a part may be uncertain, the act of loading the part into the fixture must compensate for this uncertainty. Machinists often refer to the 3-2-1 rule: place the part onto 3-point contact with a horizontal support plane, slide the part along this plane into 2-point contact with the fixture, then translate along this edge until a 1-point contact uniquely locates the part. This rule of thumb implicitly assumes both sensing and compliance: applied forces change as contacts are detected. In this paper, the authors geometrically formalize robotic fixture loading as a sensor-based compliant assembly problem and give a complete planning algorithm. They consider the class of modular fixtures that use three locators and one clamp (Brost and Goldberg 1996), and discuss a class of robot commands that cause the part to slide and rotate in the support plane. Sensing is achieved with binary contact sensors on each locator; compliance is achieved with a passive spring-loaded mechanism at the robot end-effector. The authors extend the theory of sensor-based compliant motion planning to generalized polygonal C-spaces, and give a complete planning algorithm: it is guaranteed to find a loading plan when one exists and to return a negative report otherwise. The authors report on experiments using the resulting plans. Finally, they use this formalization to prove a sufficient condition for the 3-2-1 rule.
A High Performance Load Balance Strategy for Real-Time Multicore Systems
Cho, Keng-Mao; Tsai, Chun-Wei; Chiu, Yi-Shiuan; Yang, Chu-Sing
2014-01-01
Finding ways to distribute workloads to each processor core and efficiently reduce power consumption is of vital importance, especially for real-time systems. In this paper, a novel scheduling algorithm is proposed for real-time multicore systems to balance the computation loads and save power. The developed algorithm simultaneously considers multiple criteria, including a novel factor and task deadlines, and is called power and deadline-aware multicore scheduling (PDAMS). Experimental results show that the proposed algorithm can greatly reduce energy consumption (by up to 54.2%) and the number of missed deadlines, as compared to the other scheduling algorithms outlined in this paper. PMID:24955382
Global Load Balancing with Parallel Mesh Adaption on Distributed-Memory Systems
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Oliker, Leonid; Sohn, Andrew
1996-01-01
Dynamic mesh adaptation on unstructured grids is a powerful tool for efficiently computing unsteady problems to resolve solution features of interest. Unfortunately, this causes load inbalances among processors on a parallel machine. This paper described the parallel implementation of a tetrahedral mesh adaption scheme and a new global load balancing method. A heuristic remapping algorithm is presented that assigns partitions to processors such that the redistribution coast is minimized. Results indicate that the parallel performance of the mesh adaption code depends on the nature of the adaption region and show a 35.5X speedup on 64 processors of an SP2 when 35 percent of the mesh is randomly adapted. For large scale scientific computations, our load balancing strategy gives an almost sixfold reduction in solver execution times over non-balanced loads. Furthermore, our heuristic remappier yields processor assignments that are less than 3 percent of the optimal solutions, but requires only 1 percent of the computational time.
Implementation of GAMMON - An efficient load balancing strategy for a local computer system
NASA Technical Reports Server (NTRS)
Baumgartner, Katherine M.; Kling, Ralph M.; Wah, Benjamin W.
1989-01-01
GAMMON (Global Allocation from Maximum to Minimum in cONstant time), an efficient load-balancing algorithm, is described. GAMMON uses the available broadcast capability of multiaccess networks to implement an efficient search technique for finding hosts with maximal and minimal loads. The search technique has an average overhead which is independent of the number of participating stations. The transition from the theoretical concept to a practical, reliable, and efficient implementation is described.
Balancing Loads Among Robotic-Manipulator Arms
NASA Technical Reports Server (NTRS)
Kreutz, Kenneth K.; Lokshin, Anatole
1990-01-01
Paper presents rigorous mathematical approach to control of multiple robot arms simultaneously grasping one object. Mathematical development focuses on relationship between ability to control degrees of freedom of configuration and ability to control forces within grasped object and robot arms. Understanding of relationship leads to practical control schemes distributing load more equitably among all arms while grasping object with proper nondamaging forces.
MDSLB: A new static load balancing method for parallel molecular dynamics simulations
NASA Astrophysics Data System (ADS)
Wu, Yun-Long; Xu, Xin-Hai; Yang, Xue-Jun; Zou, Shun; Ren, Xiao-Guang
2014-02-01
Large-scale parallelization of molecular dynamics simulations is facing challenges which seriously affect the simulation efficiency, among which the load imbalance problem is the most critical. In this paper, we propose a new molecular dynamics static load balancing method (MDSLB). By analyzing the characteristics of the short-range force of molecular dynamics programs running in parallel, we divide the short-range force into three kinds of force models, and then package the computations of each force model into many tiny computational units called “cell loads”, which provide the basic data structures for our load balancing method. In MDSLB, the spatial region is separated into sub-regions called “local domains”, and the cell loads of each local domain are allocated to every processor in turn. Compared with dynamic load balancing methods, MDSLB can guarantee load balance by executing the algorithm only once at program startup, without migrating loads dynamically. We implemented MDSLB in the OpenFOAM software and tested it on the TianHe-1A supercomputer with 16 to 512 processors. Experimental results show that MDSLB can save 34%-64% of execution time in load-imbalanced cases.
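The allocation step described above — packaging work into small "cell loads" per local domain and dealing them out to processors in turn — can be sketched as a static round-robin distribution. The data layout here is a deliberate simplification; the paper's actual cell-load structures carry per-force-model detail.

```python
def allocate_cell_loads(domains, n_procs):
    """Deal the cell loads of each local domain out to processors
    in turn (static round-robin, computed once at startup).
    domains: list of lists of per-cell-load costs."""
    assignment = [[] for _ in range(n_procs)]
    for cells in domains:
        # Restart the round-robin within each local domain.
        for k, cost in enumerate(cells):
            assignment[k % n_procs].append(cost)
    return [sum(a) for a in assignment]
```

Because the allocation depends only on the static decomposition, no loads ever migrate at runtime, which is the point of the method.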
PLUM: Parallel Load Balancing for Adaptive Unstructured Meshes
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Biswas, Rupak; Saini, Subhash (Technical Monitor)
1998-01-01
Mesh adaption is a powerful tool for efficient unstructured-grid computations but causes load imbalance among processors on a parallel machine. We present a novel method called PLUM to dynamically balance the processor workloads with a global view. This paper presents the implementation and integration of all major components within our dynamic load balancing strategy for adaptive grid calculations. Mesh adaption, repartitioning, processor assignment, and remapping are critical components of the framework that must be accomplished rapidly and efficiently so as not to cause a significant overhead to the numerical simulation. A data redistribution model is also presented that predicts the remapping cost on the SP2. This model is required to determine whether the gain from a balanced workload distribution offsets the cost of data movement. Results presented in this paper demonstrate that PLUM is an effective dynamic load balancing strategy which remains viable on a large number of processors.
Migration impact on load balancing - an experience on Amoeba
Zhu, W.; Socko, P.
1996-12-31
Load balancing has been extensively studied by simulation, with positive results reported in most of the research. With the increasing availability of distributed systems, a few experiments have been carried out on different systems. These experimental studies rely either on task initiation, or on task initiation plus task migration. In this paper, we present the results of a study of load balancing using a centralized policy to manage the load on a set of processors, carried out on an Amoeba system consisting of a set of 386s linked by 10 Mbps Ethernet. On one hand, the results indicate the necessity of a load balancing facility for a distributed system. On the other hand, the results question the impact of using process migration to increase system performance under the configuration used in our experiments.
Incorporating Load Balancing Spatial Analysis Into Xml-Based Webgis
NASA Astrophysics Data System (ADS)
Huang, H.
2012-07-01
This article aims to introduce load balancing spatial analysis into XML-based WebGIS. In contrast to other approaches that implement spatial queries and analyses solely on the server or browser sides, load balancing spatial analysis carries out spatial analysis on either the server or the browser side depending on the execution costs (i.e., network transmission costs and computational costs). In this article, key elements of load balancing middlewares are investigated, and a relevant solution is proposed. The comparison with the server-side solution, the browser-side solution, and our former solution shows that the proposed solution can optimize the execution of spatial analysis, greatly ease the network transmission load between the server and the browser sides, and therefore lead to better performance. The proposed solution enables users to access high-performance spatial analysis simply via a web browser.
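The load-balancing decision described — run a spatial analysis on the server or in the browser depending on transmission plus computation costs — reduces to a simple cost comparison. A sketch with hypothetical cost estimates (all parameter names and the cost model are illustrative, not the article's middleware API):

```python
def choose_side(data_bytes, result_bytes, bandwidth,
                server_ops_per_s, browser_ops_per_s, ops):
    """Pick the cheaper execution side for one spatial analysis.
    Server side computes remotely and ships only the result;
    browser side ships the data first, then computes locally."""
    server_cost = ops / server_ops_per_s + result_bytes / bandwidth
    browser_cost = data_bytes / bandwidth + ops / browser_ops_per_s
    return "server" if server_cost <= browser_cost else "browser"
```

Large inputs with compact results favor the server; small inputs with heavy result transfer favor the browser.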
Dynamic Load Balancing for Adaptive Computations on Distributed-Memory Machines
NASA Technical Reports Server (NTRS)
1999-01-01
Dynamic load balancing is central to adaptive mesh-based computations on large-scale parallel computers. The principal investigator has investigated various issues of the dynamic load balancing problem under NASA JOVE and JAG grants. The major accomplishments of the project are two graph partitioning algorithms and a load balancing framework. The S-HARP dynamic graph partitioner is known to be the fastest among known dynamic graph partitioners to date. It can partition a graph of over 100,000 vertices in 0.25 seconds on a 64-processor Cray T3E distributed-memory multiprocessor while maintaining scalability of over 16-fold speedup. Other known and widely used dynamic graph partitioners take a second or two while giving low scalability of a few-fold speedup on 64 processors. These results have been published in journals and peer-reviewed flagship conferences.
Load Balancing Unstructured Adaptive Grids for CFD Problems
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Oliker, Leonid
1996-01-01
Mesh adaption is a powerful tool for efficient unstructured-grid computations but causes load imbalance among processors on a parallel machine. A dynamic load balancing method is presented that balances the workload across all processors with a global view. After each parallel tetrahedral mesh adaption, the method first determines if the new mesh is sufficiently unbalanced to warrant a repartitioning. If so, the adapted mesh is repartitioned, with new partitions assigned to processors so that the redistribution cost is minimized. The new partitions are accepted only if the remapping cost is compensated by the improved load balance. Results indicate that this strategy is effective for large-scale scientific computations on distributed-memory multiprocessors.
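The two decisions described above — whether the adapted mesh is unbalanced enough to warrant repartitioning, and whether the improved balance compensates for the remapping cost — can be sketched as threshold tests. The tolerance and cost model below are illustrative, not the paper's exact criteria.

```python
def should_repartition(loads, tolerance=1.1):
    """Repartition only when the maximum processor load exceeds the
    average by more than the tolerance factor."""
    avg = sum(loads) / len(loads)
    return max(loads) > tolerance * avg

def accept_remap(old_loads, new_loads, remap_cost, horizon_steps):
    """Accept the new partitioning only if the per-step gain over
    the remaining solver steps outweighs the one-time remapping
    (data movement) cost."""
    gain_per_step = max(old_loads) - max(new_loads)
    return gain_per_step * horizon_steps > remap_cost
```

The second test is what keeps a marginally better partition from triggering an expensive global data redistribution.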
BALANCING THE LOAD: A VORONOI BASED SCHEME FOR PARALLEL COMPUTATIONS
Steinberg, Elad; Yalinewich, Almog; Sari, Re'em; Duffell, Paul
2015-01-01
One of the key issues when running a simulation on multiple CPUs is maintaining a proper load balance throughout the run and minimizing communications between CPUs. We propose a novel method of utilizing a Voronoi diagram to achieve a nearly perfect load balance without the need of any global redistributions of data. As a showcase, we implement our method in RICH, a two-dimensional moving mesh hydrodynamical code, but it can be extended trivially to other codes in two or three dimensions. Our tests show that this method is indeed efficient and can be used in a large variety of existing hydrodynamical codes.
Fuzzy Pool Balance: An algorithm to achieve a two dimensional balance in distributed storage systems
NASA Astrophysics Data System (ADS)
Wu, Wenjing; Chen, Gang
2014-06-01
The limitation of scheduling modules and the gradual addition of disk pools in distributed storage systems often result in imbalances among their disk pools in terms of both disk usage and file count. This can cause various problems to the storage system such as single point of failure, low system throughput and imbalanced resource utilization and system loads. An algorithm named Fuzzy Pool Balance (FPB) is proposed here to solve this problem. The input of FPB is the current file distribution among disk pools and the output is a file migration plan indicating what files are to be migrated to which pools. FPB uses an array to classify the files by their sizes. The file classification array is dynamically calculated with a defined threshold named Tmax that defines the allowed pool disk usage deviations. File classification is the basis of file migration. FPB also defines the Immigration Pool (IP) and Emigration Pool (EP) according to the pool disk usage and File Quantity Ratio (FQR) that indicates the percentage of each category of files in each disk pool, so files with higher FQR in an EP will be migrated to IP(s) with a lower FQR of this file category. To verify this algorithm, we implemented FPB on an ATLAS Tier2 dCache production system. The results show that FPB can achieve a very good balance in both free space and file counts, and adjusting the threshold value Tmax and the correction factor to the average FQR can achieve a tradeoff between free space and file count.
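The core of the scheme — drain over-full emigration pools (EP) toward under-full immigration pools (IP) until disk usage deviates from the mean by no more than the threshold Tmax — can be sketched with a greedy migration planner. This is a simplification of FPB (it balances usage only, omitting the per-size-class FQR bookkeeping); the names and thresholds are illustrative.

```python
def migration_plan(pools, t_max=0.05):
    """pools: {pool_name: list of file sizes}. Returns a list of
    (file_size, src, dst) moves draining pools above the mean usage
    toward those below it, until the most-used pool is within
    t_max (fractional) of the mean."""
    usage = {p: sum(fs) for p, fs in pools.items()}
    mean = sum(usage.values()) / len(usage)
    files = {p: sorted(fs, reverse=True) for p, fs in pools.items()}
    moves = []
    while True:
        ep = max(usage, key=usage.get)   # emigration pool
        ip = min(usage, key=usage.get)   # immigration pool
        if usage[ep] - mean <= t_max * mean:
            break
        # Largest file that does not push the source below the mean.
        pick = next((s for s in files[ep] if s <= usage[ep] - mean), None)
        if pick is None:
            break  # no file small enough to move without overshoot
        files[ep].remove(pick)
        files[ip].append(pick)
        usage[ep] -= pick
        usage[ip] += pick
        moves.append((pick, ep, ip))
    return moves
```

The output is exactly the kind of migration plan FPB produces: which files move to which pools, with Tmax trading balance quality against migration volume.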
A novel load balancing method for hierarchical federation simulation system
NASA Astrophysics Data System (ADS)
Bin, Xiao; Xiao, Tian-yuan
2013-07-01
In contrast with a single-HLA-federation framework, a hierarchical federation framework can improve the performance of a large-scale simulation system to a certain degree by distributing load over several RTIs. However, in the hierarchical federation framework, the RTI is still the center of message exchange for the federation and remains its performance bottleneck; the data explosion in a large-scale HLA federation may overload the RTI, causing performance degradation or even fatal errors. To address this problem, this paper proposes a load balancing method for hierarchical federation simulation systems based on queuing theory, comprising three main modules: queue-length prediction, a load-control policy, and a controller. The method improves the utilization of the resources of federate nodes, and improves the performance of the HLA simulation system by balancing load across the RTIG and federates. Finally, experiment results are presented to demonstrate the efficient control of the method.
A location selection policy of live virtual machine migration for power saving and load balancing.
Zhao, Jia; Ding, Yan; Xu, Gaochao; Hu, Liang; Dong, Yushuang; Fu, Xiaodong
2013-01-01
Green cloud data center has become a research hotspot of virtualized cloud computing architecture. And load balancing has also been one of the most important goals in cloud data centers. Since live virtual machine (VM) migration technology is widely used and studied in cloud computing, we have focused on location selection (migration policy) of live VM migration for power saving and load balancing. We propose a novel approach MOGA-LS, which is a heuristic and self-adaptive multiobjective optimization algorithm based on the improved genetic algorithm (GA). This paper has presented the specific design and implementation of MOGA-LS such as the design of the genetic operators, fitness values, and elitism. We have introduced the Pareto dominance theory and the simulated annealing (SA) idea into MOGA-LS and have presented the specific process to get the final solution, and thus, the whole approach achieves a long-term efficient optimization for power saving and load balancing. The experimental results demonstrate that MOGA-LS evidently reduces the total incremental power consumption and better protects the performance of VM migration and achieves the balancing of system load compared with the existing research. It makes the result of live VM migration more effective and meaningful. PMID:24348165
On delay adjustment for dynamic load balancing in distributed virtual environments.
Deng, Yunhua; Lau, Rynson W H
2012-04-01
Distributed virtual environments (DVEs) have become very popular in recent years, due to the rapid growth of applications such as massively multiplayer online games (MMOGs). As the number of concurrent users increases, scalability becomes one of the major challenges in designing an interactive DVE system. One solution to address this scalability problem is to adopt a multi-server architecture. While some methods focus on the quality of partitioning the load among the servers, others focus on the efficiency of the partitioning process itself. However, all these methods neglect the effect of network delay among the servers on the accuracy of the load balancing solutions. As we show in this paper, the change in the load of the servers due to network delay affects the performance of the load balancing algorithm. In this work, we conduct a formal analysis of this problem and discuss two efficient delay adjustment schemes to address it. Our experimental results show that the proposed schemes can significantly improve the performance of the load balancing algorithm with negligible computation overhead. PMID:22402679
Evaluation of delay performance in valiant load-balancing network
NASA Astrophysics Data System (ADS)
Yu, Yingdi; Jin, Yaohui; Cheng, Hong; Gao, Yu; Sun, Weiqiang; Guo, Wei; Hu, Weisheng
2007-11-01
Network traffic grows in an unpredictable way, which forces network operators to over-provision their backbone networks in order to meet increasing demands. Allowing for new users, applications, and unexpected failures, utilization is typically below 30% [1]. There are two methods aimed at solving this problem. The first is to adjust link capacity with the variation of traffic; however, in optical networks this requires a rapid signaling scheme and large buffers. The second is to use the statistical multiplexing function of IP routers connected point-to-point by optical links to counteract the effects of traffic variation [2], but the routing mechanism becomes much more complex and introduces more overhead into the backbone network. To exploit the potential of the network and reduce its overhead, the use of Valiant Load-balancing for backbone networks has been proposed, in order to enhance network utilization and simplify the routing process. Raising network utilization and improving throughput inevitably influence the end-to-end delay; however, studies of delay in load-balancing networks are lacking. In the work presented in this paper, we study the delay performance in a Valiant Load-balancing network, and isolate the queuing delay for modeling and detailed analysis. We design the architecture of a switch with load-balancing capability for our simulation and experiments, and analyze the relationship between switch architecture and delay performance.
Work Stealing and Persistence-based Load Balancers for Iterative Overdecomposed Applications
Lifflander, Jonathan; Krishnamoorthy, Sriram; Kale, Laxmikant
2012-06-18
Applications often involve iterative execution of identical or slowly evolving calculations. Such applications require good initial load balance coupled with efficient periodic rebalancing. In this paper, we consider the design and evaluation of two distinct approaches to addressing this challenge: persistence-based load balancing and work stealing. The work to be performed is overdecomposed into tasks, enabling automatic rebalancing by the middleware. We present a hierarchical persistence-based rebalancing algorithm that performs localized incremental rebalancing. We also present an active-message-based retentive work stealing algorithm optimized for iterative applications on distributed memory machines. These are shown to incur low overheads and achieve over 90% efficiency on 76,800 cores.
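The persistence principle — task costs in one iteration predict the next — leads naturally to rebalancing from measured durations. A minimal sketch using greedy longest-processing-time (LPT) assignment onto the least-loaded worker; this is a centralized simplification of the paper's hierarchical, localized rebalancing scheme.

```python
import heapq

def persistence_rebalance(measured, n_workers):
    """Assign overdecomposed tasks for the next iteration using the
    durations measured in the previous one: place the longest task
    on the currently least-loaded worker (greedy LPT)."""
    heap = [(0.0, w) for w in range(n_workers)]  # (load, worker)
    heapq.heapify(heap)
    placement = {}
    for task, cost in sorted(measured.items(), key=lambda kv: -kv[1]):
        load, w = heapq.heappop(heap)
        placement[task] = w
        heapq.heappush(heap, (load + cost, w))
    return placement
```

Work stealing then serves as a runtime correction for whatever imbalance the persistence-based prediction missed.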
Computational evaluation of load carriage effects on gait balance stability.
Mummolo, Carlotta; Park, Sukyung; Mangialardi, Luigi; Kim, Joo H
2016-08-01
Evaluating the effects of load carriage on gait balance stability is important in various applications. However, their quantification has not been rigorously addressed in the current literature, partially due to the lack of relevant computational indices. The novel Dynamic Gait Measure (DGM) characterizes gait balance stability by quantifying the relative effects of inertia in terms of zero-moment point, ground projection of center of mass, and time-varying foot support region. In this study, the DGM is formulated in terms of the gait parameters that explicitly reflect the gait strategy of a given walking pattern and is used for computational evaluation of the distinct balance stability of loaded walking. The observed gait adaptations caused by load carriage (decreased single support duration, inertia effects, and step length) result in decreased DGM values (p < 0.0001), which indicate that loaded walking motions are more statically stable compared with the unloaded normal walking. Comparison of the DGM with other common gait stability indices (the maximum Floquet multiplier and the margin of stability) validates the unique characterization capability of the DGM, which is consistently informative of the presence of the added load. PMID:26691823
Development of Load Balancing Systems in a Parallel MRP System
NASA Astrophysics Data System (ADS)
Tsukishima, Takahiro; Sato, Masahiro; Onari, Hisashi
Applying parallel computing to MRP (Material Requirements Planning) is essential for achieving real-time demand forecasting across an entire supply chain spanning multiple enterprises in the near future. MRP on a loosely coupled multi-computer system is examined here. New methods of synchronization, load balancing, and data access are required to maintain high parallel efficiency as the number of processing elements (PEs) increases. In this paper, load balancing and data access methods are proposed. The prototype system sustains 96% parallel efficiency for an MRP problem with 120,000 items on a 6-PE configuration and is robust against unbalanced loads. Its processing speed increases in a nearly linear fashion.
Preference based load balancing as an outpatient appointment scheduling aid.
Premarathne, Uthpala Subodhani; Han, Fengling; Khalil, Ibrahim; Tari, Zahir
2013-01-01
Load balancing is a performance improvement aid in various distributed-system applications. In this paper we propose a preference-based load-balancing strategy as a scheduling aid for an outpatient clinic in an online medical consultation system. The performance objectives are to maximize throughput and minimize waiting time. Patients provide a standard set of preferences prior to scheduling an appointment. The preferences are rated on a scale, and each service request receives a corresponding preference score. The available doctors are likewise classified into classes based on their clinical expertise, the nature of their past diagnoses, and the types of patients they have consulted. The preference scores are then mapped onto these classes and the appointment is scheduled. The proposed scheme was modeled as a queuing system in Matlab, using SimEvents library modules to construct the model. Performance was analyzed in terms of average waiting time and utilization. The results reveal that the preference-based load-balancing scheme markedly reduces waiting time and significantly improves utilization under different load conditions. PMID:24109933
Dynamic Load Balancing on Single- and Multi-GPU Systems
Chen, Long; Villa, Oreste; Krishnamoorthy, Sriram; Gao, Guang R.
2010-04-19
The computational power provided by many-core graphics processing units (GPUs) has been exploited in many applications. However, the programming techniques supported and employed on these GPUs are not sufficient to address problems exhibiting irregular and unbalanced workloads. The problem is exacerbated when trying to effectively exploit multiple GPUs, which are commonly available in many modern systems. In this paper, we propose a task-based dynamic load-balancing solution for single- and multi-GPU systems. The solution allows load balancing at a finer granularity than what is supported in existing APIs such as NVIDIA's CUDA. We evaluate our approach using both micro-benchmarks and a molecular dynamics application that exhibits significant load imbalance. Experimental results with a single-GPU configuration show that our fine-grained task solution can utilize the hardware more efficiently than the CUDA scheduler for unbalanced workloads. On multi-GPU systems, our solution achieves near-linear speedup, load balance, and significant performance improvement over techniques based on standard CUDA APIs.
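The core idea of task-based dynamic balancing, workers pull work from a shared pool so that faster (or less-loaded) devices automatically take more tasks, can be sketched host-side with threads. This is a generic sketch of the pattern, not the paper's GPU queue implementation.

```python
import queue
import threading

def run_tasks(tasks, n_workers):
    """Workers pull tasks from a shared queue until it is empty,
    so work distributes itself dynamically instead of being
    statically pre-partitioned."""
    q = queue.Queue()
    for t in tasks:
        q.put(t)
    done = [[] for _ in range(n_workers)]  # tasks completed per worker

    def worker(wid):
        while True:
            try:
                t = q.get_nowait()
            except queue.Empty:
                return  # pool drained: worker exits
            done[wid].append(t)

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(n_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return done
```

Every task is executed exactly once regardless of how the workers race, which is the correctness property any finer-grained task scheduler must preserve.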
Towards a Load Balancing Middleware for Automotive Infotainment Systems
NASA Astrophysics Data System (ADS)
Khaluf, Yara; Rettberg, Achim
In this paper a middleware for distributed automotive systems is developed. The goal of this middleware is to support load balancing and service optimization in automotive infotainment and entertainment systems. These systems provide navigation, telecommunication, Internet, audio/video, and many other services. The developed middleware applies dynamic load-balancing mechanisms together with service-quality optimization mechanisms in order to improve system performance and, at the same time, improve the quality of services where possible.
Efficient Load Balancing and Data Remapping for Adaptive Grid Calculations
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Biswas, Rupak
1997-01-01
Mesh adaption is a powerful tool for efficient unstructured-grid computations but causes load imbalance among processors on a parallel machine. We present a novel method to dynamically balance the processor workloads with a global view. This paper presents, for the first time, the implementation and integration of all major components within our dynamic load balancing strategy for adaptive grid calculations. Mesh adaption, repartitioning, processor assignment, and remapping are critical components of the framework that must be accomplished rapidly and efficiently so as not to cause a significant overhead to the numerical simulation. Previous results indicated that mesh repartitioning and data remapping are potential bottlenecks for performing large-scale scientific calculations. We resolve these issues and demonstrate that our framework remains viable on a large number of processors.
Monitoring dynamic loads on wind tunnel force balances
NASA Technical Reports Server (NTRS)
Ferris, Alice T.; White, William C.
1989-01-01
Two devices have been developed at NASA Langley to monitor the dynamic loads incurred during wind-tunnel testing. The Balance Dynamic Display Unit (BDDU) displays and monitors the combined static and dynamic forces and moments in the orthogonal axes. The Balance Critical Point Analyzer scales and sums each normalized signal from the BDDU to obtain combined dynamic and static signals that represent the dynamic loads at predefined high-stress points. Each instrument multiplexes six analog signals so that each channel is displayed sequentially as one-sixth of the horizontal axis on a single oscilloscope trace. This display format thus permits the operator to quickly and easily monitor the combined static and dynamic levels of up to six channels at the same time.
Data Partitioning and Load Balancing in Parallel Disk Systems
NASA Technical Reports Server (NTRS)
Scheuermann, Peter; Weikum, Gerhard; Zabback, Peter
1997-01-01
Parallel disk systems provide opportunities for exploiting I/O parallelism in two possible ways, namely via inter-request and intra-request parallelism. In this paper we discuss the main issues in performance tuning of such systems, namely striping and load balancing, and show their relationship to response time and throughput. We outline the main components of an intelligent, self-reliant file system that aims to optimize striping by taking into account the requirements of the applications, and performs load balancing by judicious file allocation and dynamic redistribution of the data when access patterns change. Our system uses simple but effective heuristics that incur only little overhead. We present performance experiments based on synthetic workloads and real-life traces.
Load Balancing Strategies for Multi-Block Overset Grid Applications
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Biswas, Rupak; Lopez-Benitez, Noe; Biegel, Bryan (Technical Monitor)
2002-01-01
The multi-block overset grid method is a powerful technique for high-fidelity computational fluid dynamics (CFD) simulations about complex aerospace configurations. The solution process uses a grid system that discretizes the problem domain by using separately generated but overlapping structured grids that periodically update and exchange boundary information through interpolation. For efficient high-performance computation of large-scale realistic applications using this methodology, the individual grids must be properly partitioned among the parallel processors. Overall performance, therefore, largely depends on the quality of load balancing. In this paper, we present three different load balancing strategies for overset grids and analyze their effects on the parallel efficiency of a Navier-Stokes CFD application running on an SGI Origin2000 machine.
Economic load dispatch using improved gravitational search algorithm
NASA Astrophysics Data System (ADS)
Huang, Yu; Wang, Jia-rong; Guo, Feng
2016-03-01
This paper presents an improved gravitational search algorithm (IGSA) to solve the economic load dispatch (ELD) problem. In order to avoid the local optimum phenomenon, mutation processing is applied to the GSA. The IGSA is applied to solve an economic load dispatch problem with valve-point effects, comprising 13 generators and a load demand of 2520 MW. Calculation results show that the algorithm in this paper can deal with ELD problems with high stability.
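The valve-point effect mentioned above is conventionally modeled by adding a rectified-sinusoid term to the quadratic fuel-cost curve. A minimal sketch follows; the coefficients are hypothetical, and the Gaussian mutation stands in generically for the kind of perturbation IGSA adds, not the paper's exact operator.

```python
import math
import random

def unit_cost(p, a, b, c, e, f, p_min):
    """Fuel cost of one generator with valve-point effects:
    a + b*P + c*P^2 + |e * sin(f * (P_min - P))|."""
    return a + b * p + c * p * p + abs(e * math.sin(f * (p_min - p)))

def mutate(power, p_min, p_max, scale=5.0, rng=random):
    """Gaussian mutation of one unit's output, clipped to its limits --
    a generic escape-local-optimum step in the spirit of IGSA."""
    p = power + rng.gauss(0.0, scale)
    return min(max(p, p_min), p_max)

# Hypothetical coefficients for a single unit:
cost_at_min = unit_cost(100.0, a=500.0, b=5.3, c=0.004, e=200.0, f=0.05, p_min=100.0)
```

At `p = p_min` the sine term vanishes, so the cost reduces to the plain quadratic, a convenient sanity check on the model.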
Dual strain gage balance system for measuring light loads
NASA Technical Reports Server (NTRS)
Roberts, Paul W. (Inventor)
1991-01-01
A dual strain gage balance system for measuring normal and axial forces and pitching moment of a metric airfoil model imparted by aerodynamic loads applied to the airfoil model during wind tunnel testing includes a pair of non-metric panels being rigidly connected to and extending towards each other from opposite sides of the wind tunnel, and a pair of strain gage balances, each connected to one of the non-metric panels and to one of the opposite ends of the metric airfoil model for mounting the metric airfoil model between the pair of non-metric panels. Each strain gage balance has a first measuring section for mounting a first strain gage bridge for measuring normal force and pitching moment and a second measuring section for mounting a second strain gage bridge for measuring axial force.
Selective randomized load balancing and mesh networks with changing demands
NASA Astrophysics Data System (ADS)
Shepherd, F. B.; Winzer, P. J.
2006-05-01
We consider the problem of building cost-effective networks that are robust to dynamic changes in demand patterns. We compare several architectures using demand-oblivious routing strategies. Traditional approaches include single-hop architectures based on a (static or dynamic) circuit-switched core infrastructure and multihop (packet-switched) architectures based on point-to-point circuits in the core. To address demand uncertainty, we seek minimum cost networks that can carry the class of hose demand matrices. Apart from shortest-path routing, Valiant's randomized load balancing (RLB), and virtual private network (VPN) tree routing, we propose a third, highly attractive approach: selective randomized load balancing (SRLB). This is a blend of dual-hop hub routing and randomized load balancing that combines the advantages of both architectures in terms of network cost, delay, and delay jitter. In particular, we give empirical analyses for the cost (in terms of transport and switching equipment) for the discussed architectures, based on three representative carrier networks. Of these three networks, SRLB maintains the resilience properties of RLB while achieving significant cost reduction over all other architectures, including RLB and multihop Internet protocol/multiprotocol label switching (IP/MPLS) networks using VPN-tree routing.
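Valiant's RLB, the baseline SRLB builds on, routes every flow in two phases: first to a uniformly random intermediate node, then to its destination, so any admissible demand matrix is smeared evenly over the network. A minimal sketch of that two-phase idea (node names and flow counts are illustrative only):

```python
import random
from collections import Counter

def valiant_route(flows, nodes, rng=random):
    """Two-phase Valiant routing: each flow src -> dst becomes
    src -> random intermediate -> dst. Returns per-(hop) load counts."""
    hops = Counter()
    for src, dst in flows:
        mid = rng.choice(nodes)  # phase 1: random intermediate
        hops[(src, mid)] += 1
        hops[(mid, dst)] += 1    # phase 2: deliver to destination
    return hops

# Two illustrative flows over a four-node network:
load = valiant_route([(0, 1), (2, 3)], nodes=[0, 1, 2, 3])
```

Each flow contributes exactly two logical hops, which is the price RLB pays for demand-obliviousness; SRLB's selective hub routing aims to reduce this cost.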
An efficient algorithm using matrix methods to solve wind tunnel force-balance equations
NASA Technical Reports Server (NTRS)
Smith, D. L.
1972-01-01
An iterative procedure applying matrix methods to accomplish an efficient algorithm for automatic computer reduction of wind-tunnel force-balance data has been developed. Balance equations are expressed in a matrix form that is convenient for storing balance sensitivities and interaction coefficient values for online or offline batch data reduction. The convergence of the iterative values to a unique solution of this system of equations is investigated, and it is shown that for balances which satisfy the criteria discussed, this type of solution does occur. Methods for making sensitivity adjustments and initial load effect considerations in wind-tunnel applications are also discussed, and the logic for determining the convergence accuracy limits for the iterative solution is given. This more efficient data reduction program is compared with the technique presently in use at the NASA Langley Research Center, and computational times on the order of one-third or less are demonstrated by use of this new program.
Load Balancing at Emergency Departments using ‘Crowdinforming’
Friesen, Marcia R; Strome, Trevor; Mukhi, Shamir; McLoed, Robert
2011-01-01
Background: Emergency Department (ED) overcrowding is an important healthcare issue facing increasing public and regulatory scrutiny in Canada and around the world. Many approaches to alleviate excessive waiting times and lengths of stay have been studied. In theory, optimal ED patient flow may be assisted by balancing patient loads between EDs (in essence, spreading patients more evenly throughout the system). This investigation uses simulation to explore "crowdinforming" as the basis for a process control strategy aimed at balancing patient loads between six EDs within a mid-sized Canadian city. Methods: Anonymous patient visit data comprising 120,000 ED patient visits over six months to six ED facilities were obtained from the region's Emergency Department Information System (EDIS) to (1) determine trends in ED visits and interactions between parameters; (2) develop a process control strategy integrating crowdinforming; and (3) apply and evaluate the model in a simulated environment to explore the potential impact on patient self-redirection and load balancing between EDs. Results: The available data and the resulting model confirmed that many factors impact ED patient flow. Initial results suggest that, for this particular data set, ED arrival rates were the most useful metric of ED 'busyness' in a process control strategy, and that Emergency Department performance may benefit from load balancing efforts. Conclusions: The simulation supports the use of crowdinforming as a potential tool when used in a process control strategy to balance patient loads between EDs. The work also revealed that several parameters intuitively expected to be meaningful metrics of ED 'busyness' showed no evident value, highlighting the importance of finding parameters meaningful within one's particular data set. The information provided in the crowdinforming model is already available in a local context at some ED sites
Population-based learning of load balancing policies for a distributed computer system
NASA Technical Reports Server (NTRS)
Mehra, Pankaj; Wah, Benjamin W.
1993-01-01
Effective load-balancing policies use dynamic resource information to schedule tasks in a distributed computer system. We present a novel method for automatically learning such policies. At each site in our system, we use a comparator neural network to predict the relative speedup of an incoming task using only the resource-utilization patterns obtained prior to the task's arrival. Outputs of these comparator networks are broadcast periodically over the distributed system, and the resource schedulers at each site use these values to determine the best site for executing an incoming task. The delays incurred in propagating workload information and tasks from one site to another, as well as the dynamic and unpredictable nature of workloads in multiprogrammed multiprocessors, may cause the workload pattern at the time of execution to differ from patterns prevailing at the times of load-index computation and decision making. Our load-balancing policy accommodates this uncertainty by using certain tunable parameters. We present a population-based machine-learning algorithm that adjusts these parameters in order to achieve high average speedups with respect to local execution. Our results show that our load-balancing policy, when combined with the comparator neural network for workload characterization, is effective in exploiting idle resources in a distributed computer system.
Priority-rotating DBA with adaptive load balance for reconfigurable WDM/TDM PON
NASA Astrophysics Data System (ADS)
Xia, Weidong; Gan, Chaoqin; Xie, Weilun; Ni, Cuiping
2015-12-01
For a wavelength-division multiplexing/time-division multiplexing passive optical network (WDM/TDM PON) architecture that implements wavelength sharing and traffic redirection, a priority-rotating dynamic bandwidth allocation (DBA) algorithm is proposed in this paper. The priority of each optical network unit (ONU) is set and rotated to meet bandwidth demands and guarantee fairness among ONUs. Bandwidth allocation for priority queues is employed to avoid bandwidth monopolization and over-allocation. Bandwidth allocation for high-load situations and redirected traffic is discussed, achieving adaptive load balance across wavelengths and among ONUs. The simulation results show a good performance of the proposed algorithm in throughput rate and average packet delay.
NASA Technical Reports Server (NTRS)
Richardson, J.; Labbe, M.; Belala, Y.; Leduc, Vincent
1994-01-01
The requirement for improving aircraft utilization and responsiveness in airlift operations has been recognized for quite some time by the Canadian Forces. To date, the utilization of scarce airlift resources has been planned mainly through manpower-intensive manual methods combined with the expertise of highly qualified personnel. In this paper, we address the problem of facilitating the load-planning process for military cargo aircraft through the development of a computer-based system. We introduce TALBAS (Transport Aircraft Loading and BAlancing System), a knowledge-based system designed to assist personnel in preparing valid load plans for the C130 Hercules aircraft. The main features of this system, accessible through a user-friendly graphical user interface, consist of the automatic generation of valid cargo arrangements given a list of items to be transported, user definition of load plans, and the automatic validation of such load plans.
NASA Astrophysics Data System (ADS)
Nagiyev, A. E.; Sherstnyova, A. I.; Botygin, I. A.; Galanova, N. Y.
2016-06-01
Results of statistical modeling experiments on various load-balancing algorithms in distributed computing systems are presented. Software tools were developed that allow a virtual infrastructure of a distributed computing system to be created in accordance with the intended objective of the research, focused on multi-agent and multithreaded data processing. A scheme for controlling the processing of requests from terminal devices is proposed, providing effective dynamic horizontal scaling of computing power under peak loads.
Adaptive dynamic load-balancing with irregular domain decomposition for particle simulations
NASA Astrophysics Data System (ADS)
Begau, Christoph; Sutmann, Godehard
2015-05-01
We present a flexible and fully adaptive dynamic load-balancing scheme, which is designed for particle simulations of three-dimensional systems with short ranged interactions. The method is based on domain decomposition with non-orthogonal non-convex domains, which are constructed based on a local repartitioning of computational work between neighbouring processors. Domains are dynamically adjusted in a flexible way under the condition that the original topology is not changed, i.e. neighbour relations between domains are retained, which guarantees a fixed communication pattern for each domain during a simulation. Extensions of this scheme are discussed and illustrated with examples, which generalise the communication patterns and do not fully restrict data exchange to direct neighbours. The proposed method relies on a linked cell algorithm, which makes it compatible with existing implementations in particle codes and does not modify the underlying algorithm for calculating the forces between particles. The method has been implemented into the molecular dynamics community code IMD and performance has been measured for various molecular dynamics simulations of systems representing realistic problems from materials science. It is found that the method proves to balance the work between processors in simulations with strongly inhomogeneous and dynamically changing particle distributions, which results in a significant increase of the efficiency of the parallel code compared both to unbalanced simulations and conventional load-balancing strategies.
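The local-repartitioning idea above, neighbouring domains exchange work so the global topology and communication pattern stay fixed, can be illustrated with a one-dimensional diffusive balancer. This is a simplified analogue for a chain of domains, not the paper's non-convex 3-D decomposition or the IMD implementation.

```python
def diffuse_load(loads, alpha=0.5, steps=50):
    """Diffusive balancing on a 1-D chain of domains: each step moves a
    fraction of the load difference between neighbouring domains only,
    so every exchange is strictly local (topology-preserving)."""
    loads = list(loads)
    for _ in range(steps):
        new = loads[:]
        for i in range(len(loads) - 1):
            d = alpha * 0.5 * (loads[i] - loads[i + 1])
            new[i] -= d          # heavier neighbour sheds work
            new[i + 1] += d      # lighter neighbour absorbs it
        loads = new
    return loads

# Four domains, all work initially on the first:
balanced = diffuse_load([10.0, 0.0, 0.0, 0.0])
```

Total work is conserved while the imbalance decays geometrically, the same qualitative behaviour the paper's repartitioning scheme achieves for dynamically changing particle distributions.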
Strain gage selection in loads equations using a genetic algorithm
NASA Technical Reports Server (NTRS)
1994-01-01
Traditionally, structural loads are measured using strain gages. A loads calibration test must be done before loads can be accurately measured. In one measurement method, a series of point loads is applied to the structure, and loads equations are derived via the least-squares curve-fitting algorithm using the strain gage responses to the applied point loads. However, many research structures are highly instrumented with strain gages, and the number and selection of gages used in a loads equation can be problematic. This paper presents an improved technique using a genetic algorithm to choose the strain gages used in the loads equations. Also presented is a comparison of the genetic algorithm's performance with the current T-value technique and a variant known as the Best Step-down technique. Examples are shown using aerospace vehicle wings of high and low aspect ratio. In addition, a significant limitation in the current methods is revealed. The genetic algorithm arrived at a comparable or superior set of gages with significantly less human effort, and could be applied in instances when the current methods could not.
Estimating nutrient loadings using chemical mass balance approach.
Jain, C K; Singhal, D C; Sharma, M K
2007-11-01
The river Hindon is one of the important tributaries of river Yamuna in western Uttar Pradesh (India) and carries pollution loads from various municipal and industrial units and surrounding agricultural areas. The main sources of pollution in the river include municipal wastes from Saharanpur, Muzaffarnagar and Ghaziabad urban areas and industrial effluents of sugar, pulp and paper, distilleries and other miscellaneous industries through tributaries as well as direct inputs. In this paper, chemical mass balance approach has been used to assess the contribution from non-point sources of pollution to the river. The river system has been divided into three stretches depending on the land use pattern. The contribution of point sources in the upper and lower stretches are 95 and 81% respectively of the total flow of the river while there is no point source input in the middle stretch. Mass balance calculations indicate that contribution of nitrate and phosphate from non-point sources amounts to 15.5 and 6.9% in the upper stretch and 13.1 and 16.6% in the lower stretch respectively. Observed differences in the load along the river may be attributed to uncharacterized sources of pollution due to agricultural activities, remobilization from or entrainment of contaminated bottom sediments, ground water contribution or a combination of these sources. PMID:17616829
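The mass-balance arithmetic behind the percentages quoted above is simple: the non-point contribution is whatever remains after the characterized point-source load is subtracted from the total observed load. A minimal sketch with hypothetical load values:

```python
def nonpoint_fraction(total_load, point_load):
    """Non-point contribution as a percentage of the total load,
    by simple mass balance: non-point = total - point."""
    if total_load <= 0:
        raise ValueError("total load must be positive")
    return 100.0 * (total_load - point_load) / total_load

# Hypothetical stretch where point sources account for 84.5 units
# of a 100-unit total nutrient load:
share = nonpoint_fraction(100.0, 84.5)  # non-point share in percent
```

Any residual not explained by the balance is then attributed, as in the abstract, to uncharacterized sources such as agricultural runoff, sediment remobilization, or groundwater inputs.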
Load Balancing Scheme on the Basis of Huffman Coding for P2P Information Retrieval
NASA Astrophysics Data System (ADS)
Kurasawa, Hisashi; Takasu, Atsuhiro; Adachi, Jun
Although a distributed index on a distributed hash table (DHT) enables efficient document query processing in peer-to-peer information retrieval (P2P IR), such an index is costly to construct and tends to be managed unfairly because of the skewed term-frequency distribution. We devised a new distributed index for P2P IR, named Huffman-DHT. The new index uses an algorithm similar to Huffman coding, with a modification to the DHT structure based on the term distribution. In a Huffman-DHT, a frequent term is assigned a short ID and allocated a large region of the node ID space. Through this ID management, the Huffman-DHT balances index registration accesses among peers and reduces load concentration. Huffman-DHT is the first approach to apply concepts from coding theory and term-frequency distributions to load balancing. We evaluated this approach in experiments using a document collection and assessed its load-balancing capabilities in P2P IR. The experimental results indicate that it is most effective when the P2P system consists of about 30,000 nodes and contains many documents. Moreover, we show that a Huffman-DHT can be constructed easily by estimating the probability distribution of term occurrences from a small number of sample documents.
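The frequency-proportional ID-space allocation described above can be sketched as follows. This is a toy model of the idea (frequent terms get larger ID regions), not the actual Huffman-DHT construction or its coding-theoretic ID assignment.

```python
from bisect import bisect_right

def build_ring(term_freqs, id_space=2**16):
    """Allocate each term a contiguous slice of the ID space whose size
    is proportional to its frequency, so registration load for frequent
    terms is spread over more node IDs."""
    total = sum(term_freqs.values())
    bounds, terms, acc = [], [], 0
    for term, f in sorted(term_freqs.items()):
        acc += f
        bounds.append(int(id_space * acc / total))  # exclusive upper bound
        terms.append(term)
    return bounds, terms

def lookup(bounds, terms, node_id):
    """Map a node ID to the term region that covers it."""
    return terms[bisect_right(bounds, node_id)]

# A frequent term receives 90% of a tiny 100-slot ID space:
bounds, terms = build_ring({'the': 90, 'zebra': 10}, id_space=100)
```

Here IDs 0-89 fall in the region for 'the' and 90-99 in the region for 'zebra', mirroring how a Huffman-DHT widens the region of a high-frequency term.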
A Hybrid Ant Colony Algorithm for Loading Pattern Optimization
NASA Astrophysics Data System (ADS)
Hoareau, F.
2014-06-01
Electricité de France (EDF) operates 58 nuclear power plants (NPPs) of the Pressurized Water Reactor (PWR) type. The loading pattern (LP) optimization of these NPPs is currently done by EDF expert engineers. Within this framework, EDF R&D has developed automatic optimization tools that assist the experts; the latter can resort, for instance, to loading-pattern optimization software based on an ant colony algorithm. This paper presents an analysis of the search space of a few realistic loading-pattern optimization problems. This analysis leads us to introduce a hybrid algorithm combining an ant colony with a local search method. We then show that this new algorithm is able to generate loading patterns of good quality.
A network flow model for load balancing in circuit-switched multicomputers
NASA Technical Reports Server (NTRS)
Bokhari, Shahid H.
1993-01-01
In multicomputers that utilize circuit switching or wormhole routing, communication overhead depends largely on link contention - the variation due to distance between nodes is negligible. This has a major impact on the load balancing problem. In this case, there are some nodes with excess load (sources) and others with deficit load (sinks) and it is required to find a matching of sources to sinks that avoids contention. The problem is made complex by the hardwired routing on currently available machines: the user can control only which nodes communicate but not how the messages are routed. Network flow models of message flow in the mesh and the hypercube were developed to solve this problem. The crucial property of these models is the correspondence between minimum cost flows and correctly routed messages. To solve a given load balancing problem, a minimum cost flow algorithm is applied to the network. This permits one to determine efficiently a maximum contention free matching of sources to sinks which, in turn, tells one how much of the given imbalance can be eliminated without contention.
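The key property cited above, minimum-cost flows correspond to correctly routed messages, means the balancing step reduces to running a min-cost flow algorithm on the model network. Below is a generic textbook successive-shortest-path implementation for small graphs; it is not the paper's mesh/hypercube contention model, just the underlying flow machinery.

```python
def min_cost_flow(n, edges, src, dst, max_flow):
    """Successive-shortest-path min-cost flow on a small directed graph.
    edges: list of (u, v, capacity, cost). Returns (flow_sent, total_cost)."""
    INF = float("inf")
    graph = [[] for _ in range(n)]
    for u, v, cap, cost in edges:
        graph[u].append([v, cap, cost, len(graph[v])])      # forward edge
        graph[v].append([u, 0, -cost, len(graph[u]) - 1])   # residual edge
    flow = cost_total = 0
    while flow < max_flow:
        # Bellman-Ford: cheapest augmenting path in the residual graph.
        dist = [INF] * n
        dist[src] = 0
        parent = [None] * n   # parent[v] = (node u, index of edge u->v)
        for _ in range(n - 1):
            for u in range(n):
                if dist[u] == INF:
                    continue
                for i, (v, cap, cost, _) in enumerate(graph[u]):
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v] = dist[u] + cost
                        parent[v] = (u, i)
        if dist[dst] == INF:
            break  # no augmenting path remains
        # Bottleneck capacity along the chosen path.
        push = max_flow - flow
        v = dst
        while v != src:
            u, i = parent[v]
            push = min(push, graph[u][i][1])
            v = u
        # Apply the augmentation to forward and residual edges.
        v = dst
        while v != src:
            u, i = parent[v]
            graph[u][i][1] -= push
            graph[v][graph[u][i][3]][1] += push
            v = u
        flow += push
        cost_total += push * dist[dst]
    return flow, cost_total
```

In the load-balancing setting, overloaded nodes feed a super-source, deficit nodes drain to a super-sink, and edge capacities encode contention-free link usage; the maximum flow then gives the largest imbalance removable without contention.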
A network flow model for load balancing in circuit-switched multicomputers
NASA Technical Reports Server (NTRS)
Bokhari, Shahid H.
1990-01-01
In multicomputers that utilize circuit switching or wormhole routing, communication overhead depends largely on link contention - the variation due to distance between nodes is negligible. This has a major impact on the load balancing problem. In this case, there are some nodes with excess load (sources) and others with deficit load (sinks) and it is required to find a matching of sources to sinks that avoids contention. The problem is made complex by the hardwired routing on currently available machines: the user can control only which nodes communicate but not how the messages are routed. Network flow models of message flow in the mesh and the hypercube were developed to solve this problem. The crucial property of these models is the correspondence between minimum cost flows and correctly routed messages. To solve a given load balancing problem, a minimum cost flow algorithm is applied to the network. This permits one to determine efficiently a maximum contention free matching of sources to sinks which, in turn, tells one how much of the given imbalance can be eliminated without contention.
A Baseline Load Schedule for the Manual Calibration of a Force Balance
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Gisler, R.
2013-01-01
A baseline load schedule for the manual calibration of a force balance was developed that takes current capabilities at the NASA Ames Balance Calibration Laboratory into account. The load schedule consists of 18 load series with a total of 194 data points. It was designed to satisfy six requirements: (i) positive and negative loadings should be applied for each load component; (ii) at least three loadings should be applied between 0 % and 100 % load capacity; (iii) normal and side force loadings should be applied at the forward gage location, the aft gage location, and the balance moment center; (iv) the balance should be used in UP and DOWN orientation to get axial force loadings; (v) the constant normal and side force approaches should be used to get the rolling moment loadings; (vi) rolling moment loadings should be obtained for 0, 90, 180, and 270 degrees balance orientation. Three different approaches are also reviewed that may be used to independently estimate the natural zeros of the balance. These three approaches provide gage output differences that may be used to estimate the weight of both the metric and non-metric part of the balance. Manual calibration data of NASA's MK29A balance and machine calibration data of NASA's MC60D balance are used to illustrate and evaluate different aspects of the proposed baseline load schedule design.
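Requirements (i) and (ii) above, positive and negative loadings at several intermediate fractions of capacity, can be sketched as a simple series generator. This is an illustrative toy, not the laboratory's actual 18-series, 194-point schedule.

```python
def load_series(capacity, fractions=(0.25, 0.5, 0.75, 1.0)):
    """One illustrative load series: start at zero, step through positive
    loadings at several fractions of capacity, return to zero, then repeat
    with negative loadings (requirements (i) and (ii))."""
    up = [capacity * f for f in fractions]
    down = [-capacity * f for f in fractions]
    return [0.0] + up + [0.0] + down + [0.0]

# A series for a component with 100-unit capacity:
series = load_series(100.0)
```

Returning to zero between the positive and negative sweeps mirrors standard calibration practice of re-checking the unloaded output between loadings.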
A Baseline Load Schedule for the Manual Calibration of a Force Balance
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Gisler, R.
2013-01-01
A baseline load schedule for the manual calibration of a force balance is defined that takes current capabilities at the NASA Ames Balance Calibration Laboratory into account. The chosen load schedule consists of 18 load series with a total of 194 data points. It was designed to satisfy six requirements: (i) positive and negative loadings should be applied for each load component; (ii) at least three loadings should be applied between 0 % and 100 % load capacity; (iii) normal and side force loadings should be applied at the forward gage location, aft gage location, and the balance moment center; (iv) the balance should be used in "up" and "down" orientation to get positive and negative axial force loadings; (v) the constant normal and side force approaches should be used to get the rolling moment loadings; (vi) rolling moment loadings should be obtained for 0, 90, 180, and 270 degrees balance orientation. In addition, three different approaches are discussed in the paper that may be used to independently estimate the natural zeros, i.e., the gage outputs of the absolute load datum of the balance. These three approaches provide gage output differences that can be used to estimate the weight of both the metric and non-metric part of the balance. Data from the calibration of a six-component force balance will be used in the final manuscript of the paper to illustrate characteristics of the proposed baseline load schedule.
Carmichael, H.
1953-01-01
A torsional-type analytical balance designed to arrive at its equilibrium point more quickly than previous balances is described. In order to prevent external heat sources from creating air currents inside the balance casing that would retard the attainment of equilibrium conditions, a relatively thick casing shaped as an inverted U is placed over the load support arms and the balance beam. This casing is of a metal of good thermal conductivity characteristics, such as copper or aluminum, in order that heat applied to one portion of the balance is quickly conducted to all other sensitive areas, thus effectively preventing the formation of air currents caused by unequal heating of the balance.
Dynamic load balancing in a concurrent plasma PIC code on the JPL/Caltech Mark III hypercube
Liewer, P.C.; Leaver, E.W.; Decyk, V.K.; Dawson, J.M.
1990-12-31
Dynamic load balancing has been implemented in a concurrent one-dimensional electromagnetic plasma particle-in-cell (PIC) simulation code using a method which adds very little overhead to the parallel code. In PIC codes, the orbits of many interacting plasma electrons and ions are followed as an initial value problem as the particles move in electromagnetic fields calculated self-consistently from the particle motions. The code was implemented using the GCPIC algorithm in which the particles are divided among processors by partitioning the spatial domain of the simulation. The problem is load-balanced by partitioning the spatial domain so that each partition has approximately the same number of particles. During the simulation, the partitions are dynamically recreated as the spatial distribution of the particles changes in order to maintain processor load balance.
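The GCPIC balancing rule described above — repartition the spatial domain so each processor's slice contains about the same number of particles — can be sketched in one dimension. This is an illustration only; the function name and setup are ours, not from the code described:

```python
def balance_partitions(positions, nproc):
    """Place nproc-1 interior boundaries so each contiguous spatial
    partition holds roughly the same number of particles (GCPIC-style).
    As particles drift during the run, calling this again recreates
    the partitions and restores processor load balance."""
    srt = sorted(positions)
    n = len(srt)
    # boundary k sits at the position of the (k*n/nproc)-th particle
    return [srt[(k * n) // nproc] for k in range(1, nproc)]
```

Re-running this periodically is the "dynamic" part: the cost is one sort plus the migration of particles that changed sides.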
MCNP load balancing and fault tolerance with PVM
McKinney, G.W.
1995-07-01
Version 4A of the Monte Carlo neutron, photon, and electron transport code MCNP, developed by LANL (Los Alamos National Laboratory), supports distributed-memory multiprocessing through the software package PVM (Parallel Virtual Machine, version 3.1.4). Using PVM for interprocessor communication, MCNP can simultaneously execute a single problem on a cluster of UNIX-based workstations. This capability provided system efficiencies that exceeded 80% on dedicated workstation clusters; however, on heterogeneous or multiuser systems, performance was limited by the slowest processor (i.e., equal work was assigned to each processor). The next public release of MCNP will provide multiprocessing enhancements that include load balancing and fault tolerance, which are shown to dramatically increase multiuser system efficiency and reliability.
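The load-balancing enhancement described here amounts to self-scheduling: instead of assigning each processor an equal share up front, workers pull the next chunk of work when they finish the last one, so faster machines naturally do more. A minimal thread-based sketch of the idea (not MCNP's or PVM's actual API):

```python
import queue
import threading

def self_schedule(tasks, nworkers, run):
    """Master-worker self-scheduling: a shared queue of task chunks;
    each worker repeatedly pulls the next chunk until the queue is
    empty, so slow processors simply complete fewer chunks."""
    q = queue.Queue()
    for t in tasks:
        q.put(t)
    results, lock = [], threading.Lock()

    def worker():
        while True:
            try:
                t = q.get_nowait()
            except queue.Empty:
                return  # no work left for this worker
            r = run(t)
            with lock:
                results.append(r)

    threads = [threading.Thread(target=worker) for _ in range(nworkers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return results
```

Fault tolerance fits the same structure: a chunk that a failed worker never acknowledges is simply re-queued for another worker.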
Electricity Load Forecasting Using Support Vector Regression with Memetic Algorithms
Hu, Zhongyi; Xiong, Tao
2013-01-01
Electricity load forecasting is an important issue that is widely explored and examined in the power systems operation literature as well as in the literature on commercial transactions in electricity markets. Among the existing forecasting models, support vector regression (SVR) has gained much attention. Because the performance of SVR depends heavily on its parameters, this study proposes a firefly algorithm (FA) based memetic algorithm (FA-MA) to appropriately determine the parameters of the SVR forecasting model. In the proposed FA-MA algorithm, the FA is applied to explore the solution space, and pattern search is used to conduct individual learning and thus enhance the exploitation of the FA. Experimental results confirm that the proposed FA-MA based SVR model can not only yield more accurate forecasting results than the other four evolutionary algorithm based SVR models and three well-known forecasting models but also outperform the hybrid algorithms in the related existing literature. PMID:24459425
Some important observations on fast decoupled load flow algorithm
Nanda, J.; Kothari, D.P.; Srivastava, S.C.
1987-05-01
This letter brings out clearly for the first time the relative importance and weight of some of the assumptions made by B. Stott and O. Alsac in their fast decoupled load flow (FDLF) algorithm with respect to its convergence properties. Results have been obtained for two sample IEEE test systems. The conclusions of this work are envisaged to be of immense practical relevance while developing a fast decoupled load flow program.
A single-stage optical load-balanced switch for data centers.
Huang, Qirui; Yeo, Yong-Kee; Zhou, Luying
2012-10-22
Load balancing is an attractive technique to achieve maximum throughput and optimal resource utilization in large-scale switching systems. However, current electronic load-balanced switches suffer from severe problems in implementation cost, power consumption, and scaling. To overcome these problems, in this paper we propose a single-stage optical load-balanced switch architecture based on an arrayed waveguide grating router (AWGR) in conjunction with fast tunable lasers. By reuse of the fast tunable lasers, the switch achieves both functions of load balancing and switching through the AWGR. With this architecture, proof-of-concept experiments have been conducted to investigate the feasibility of the optical load-balanced switch and to examine its physical performance. Compared to three-stage load-balanced switches, the reported switch needs only half the optical devices such as tunable lasers and AWGRs, which can provide a cost-effective solution for future data centers. PMID:23187266
Combined Load Diagram for a Wind Tunnel Strain-Gage Balance
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2010-01-01
Combined Load Diagrams for Direct-Read, Force, and Moment Balances are discussed in great detail in the paper. The diagrams, if compared with a corresponding combined load plot of a balance calibration data set, may be used to visualize and interpret basic relationships between the applied balance calibration loads and the load components at the forward and aft gage of a strain-gage balance. Lines of constant total force and moment are identified in the diagrams. In addition, the lines of pure force and pure moment are highlighted. Finally, lines of constant moment arm are depicted. It is also demonstrated that each quadrant of a Combined Load Diagram has specific regions where the applied total calibration force is at, between, or outside of the balance gage locations. Data from the manual calibration of a force balance is used to illustrate the application of a Combined Load Diagram to a realistic data set.
The Plasma Simulation Code: A modern particle-in-cell code with patch-based load-balancing
NASA Astrophysics Data System (ADS)
Germaschewski, Kai; Fox, William; Abbott, Stephen; Ahmadi, Narges; Maynard, Kristofor; Wang, Liang; Ruhl, Hartmut; Bhattacharjee, Amitava
2016-08-01
This work describes the Plasma Simulation Code (PSC), an explicit, electromagnetic particle-in-cell code with support for different order particle shape functions. We review the basic components of the particle-in-cell method as well as the computational architecture of the PSC code that allows support for modular algorithms and data structures in the code. We then describe and analyze in detail a distinguishing feature of PSC: patch-based load balancing using space-filling curves, which is shown to lead to major efficiency gains over unbalanced methods and a previously used simpler balancing method.
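Patch-based balancing with a space-filling curve reduces to a one-dimensional problem: order the patches along the curve, then cut the sequence into contiguous chunks of roughly equal total load. A minimal sketch of the cutting step (greedy prefix-sum cuts; the names are ours, not PSC's):

```python
def partition_curve(loads, nproc):
    """Assign patches, already ordered along a space-filling curve,
    to nproc processors by cutting the sequence into contiguous
    chunks of roughly equal total load. Returns the owner rank of
    each patch. Curve locality keeps each chunk spatially compact."""
    total = sum(loads)
    target = total / nproc
    owner, rank, acc = [], 0, 0.0
    for w in loads:
        # move on to the next rank once this one has its fair share
        if acc >= target * (rank + 1) and rank < nproc - 1:
            rank += 1
        acc += w
        owner.append(rank)
    return owner
```

Because per-patch loads change as particles move, the code can periodically recompute this assignment and migrate only the patches whose owner changed.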
Dynamic load balancing of matrix-vector multiplications on roadrunner compute nodes
Sancho Pitarch, Jose Carlos
2009-01-01
Hybrid architectures that combine general purpose processors with accelerators are being adopted in several large-scale systems such as the petaflop Roadrunner supercomputer at Los Alamos. In this system, dual-core Opteron host processors are tightly coupled with PowerXCell 8i processors within each compute node. In this kind of hybrid architecture, an accelerated mode of operation is typically used to offload performance hotspots in the computation to the accelerators. In this paper we explore the suitability of a variant of this acceleration mode in which the performance hotspots are actually shared between the host and the accelerators. To achieve this we have designed a new load balancing algorithm, which is optimized for the Roadrunner compute nodes, to dynamically distribute computation and associated data between the host and the accelerators at runtime. Results are presented using this approach for sparse and dense matrix-vector multiplications that show load-balancing can improve performance by up to 24% over solely using the accelerators.
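The dynamic host/accelerator split described here can be illustrated by the rebalancing rule itself: time how long each side took on its share of rows, then resize the shares in proportion to the observed processing rates so both finish together. A hedged sketch of that rule (the actual Roadrunner algorithm is surely more elaborate):

```python
def rebalance_split(frac, t_host, t_acc):
    """Given the fraction of rows the host handled (frac) and the
    measured times of host and accelerator, return a new host
    fraction proportional to each side's observed processing rate,
    so the next iteration's pieces finish at about the same time."""
    r_host = frac / t_host          # rows per second on the host
    r_acc = (1.0 - frac) / t_acc    # rows per second on the accelerator
    return r_host / (r_host + r_acc)
```

For example, if the host and accelerator each got half the rows but the host took twice as long, the rule shrinks the host's share to one third. Repeating the measurement every few iterations tracks runtime variations such as multiuser interference.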
Algorithms for Determining Physical Responses of Structures Under Load
NASA Technical Reports Server (NTRS)
Richards, W. Lance; Ko, William L.
2012-01-01
Ultra-efficient real-time structural monitoring algorithms have been developed to provide extensive information about the physical response of structures under load. These algorithms are driven by actual strain data to accurately measure local strains at multiple locations on the surface of a structure. Through a single point load calibration test, these structural strains are then used to calculate key physical properties of the structure at each measurement location. Such properties include the structure's flexural rigidity (the product of the structure's modulus of elasticity and its moment of inertia) and the section modulus (the moment of inertia divided by the structure's half-depth). The resulting structural properties at each location can be used to determine the structure's bending moment, shear, and structural loads in real time while the structure is in service. The amount of structural information can be maximized through the use of highly multiplexed fiber Bragg grating technology using optical time domain reflectometry and optical frequency domain reflectometry, which can provide a local strain measurement every 10 mm on a single hair-sized optical fiber. Since local strain is used as input to the algorithms, this system serves multiple purposes of measuring strains and displacements, as well as determining structural bending moment, shear, and loads for assessing real-time structural health. The first step is to install a series of strain sensors on the structure's surface in such a way as to measure bending strains at desired locations. The next step is to perform a simple ground test calibration. For a beam of length l (see example), discretized into n sections and subjected to a tip load of P that places the beam in bending, the flexural rigidity of the beam can be experimentally determined at each measurement location x. The bending moment at each station can then be determined for any general set of loads applied during operation.
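The calibration step above can be made concrete for the tip-loaded beam example using the textbook bending relations: the moment at station x is M(x) = P(l - x), and the surface strain is eps = M*c/(EI), where c is the half-depth. The single calibration load therefore yields EI at each station, after which any measured strain gives the in-service moment. A sketch under those assumptions (the symbol names are ours):

```python
def flexural_rigidity(P, l, x, c, strain):
    """Ground-test calibration at station x: with tip load P the
    bending moment is M = P*(l - x), and surface strain obeys
    strain = M*c/(EI), so EI = P*(l - x)*c / strain."""
    return P * (l - x) * c / strain

def bending_moment(EI, c, strain):
    """In service, recover the local bending moment from a measured
    surface strain using the calibrated rigidity: M = EI*strain/c."""
    return EI * strain / c
```

A quick consistency check: calibrating with P = 100, l = 2, x = 0.5, c = 0.05 and a measured strain of 1e-4, then feeding the same strain back in, must return the calibration moment P*(l - x) = 150.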
Genetic Algorithm based Decentralized PI Type Controller: Load Frequency Control
NASA Astrophysics Data System (ADS)
Dwivedi, Atul; Ray, Goshaidas; Sharma, Arun Kumar
2016-05-01
This work presents a design of a decentralized PI type Linear Quadratic (LQ) controller based on a genetic algorithm (GA). The proposed design technique allows considerable flexibility in defining the control objectives; it does not require any knowledge of the system matrices and, moreover, avoids the solution of the algebraic Riccati equation. To illustrate the results of this work, a load-frequency control problem is considered. Simulation results reveal that the proposed scheme based on GA is an alternative and attractive approach to solve the load-frequency control problem from both performance and design points of view.
Tseng, Chinyang Henry
2016-01-01
In wireless networks, low-power Zigbee is an excellent network solution for wireless medical monitoring systems. Medical monitoring generally involves transmission of a large amount of data and easily causes bottleneck problems. Although Zigbee's AODV mesh routing provides extensible multi-hop data transmission to extend network coverage, it does not natively support load balancing and therefore needs such a mechanism to avoid bottlenecks. To guarantee a more reliable multi-hop data transmission for life-critical medical applications, we have developed a multipath solution, called Load-Balanced Multipath Routing (LBMR), to replace Zigbee's routing mechanism. LBMR consists of three main parts: Layer Routing Construction (LRC), a Load Estimation Algorithm (LEA), and a Route Maintenance (RM) mechanism. LRC assigns nodes into different layers based on the node's distance to the medical data gateway. Nodes can have multiple next-hops delivering medical data toward the gateway. All neighboring layer-nodes exchange flow information containing current load, which is then used by the LEA to estimate the future load of next-hops to the gateway. With LBMR, nodes can choose the neighbors with the least load as the next-hops and thus can achieve load balancing and avoid bottlenecks. Furthermore, RM can detect route failures in real-time and perform route redirection to ensure routing robustness. Since LRC and LEA prevent bottlenecks while RM ensures routing fault tolerance, LBMR provides a highly reliable routing service for medical monitoring. To evaluate these accomplishments, we compare LBMR with Zigbee's AODV and another multipath protocol, AOMDV. The simulation results demonstrate LBMR achieves better load balancing, fewer unreachable nodes, and a better packet delivery ratio than either AODV or AOMDV. PMID:27258297
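The LEA's next-hop choice can be sketched as picking the neighboring layer-node with the smallest estimated future load, where the estimate combines the neighbor's reported current load with the flows announced to it by its other upstream nodes. This is an illustrative reading of the abstract, not the paper's actual data structures:

```python
def estimate_load(neighbor):
    """Estimated future load of a candidate next-hop: its reported
    current load plus the incoming flows announced by its upstream
    nodes (hypothetical record layout, for illustration)."""
    return neighbor["load"] + sum(neighbor["announced_flows"])

def pick_next_hop(neighbors):
    """Choose the next-hop toward the gateway with the least
    estimated load, spreading traffic away from bottlenecks."""
    return min(neighbors, key=lambda name: estimate_load(neighbors[name]))
```

Route maintenance then only has to delete a failed entry from `neighbors`; the next call to `pick_next_hop` redirects traffic automatically.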
Experience with automatic, dynamic load balancing and adaptive finite element computation
Wheat, S.R.; Devine, K.D.; Maccabe, A.B.
1993-10-01
Distributed memory, Massively Parallel (MP), MIMD technology has enabled the development of applications requiring computational resources previously unobtainable. Structural mechanics and fluid dynamics applications, for example, are often solved by finite element methods (FEMs) requiring millions of degrees of freedom to accurately simulate physical phenomena. Adaptive methods, which automatically refine or coarsen meshes and vary the order of accuracy of the numerical solution, offer greater robustness and computational efficiency than traditional FEMs by reducing the amount of computation required away from physical structures such as shock waves and boundary layers. On MP computers, FEMs frequently result in distributed processor load imbalances. To overcome load imbalance, many MP FEMs use static load balancing as a preprocessor to the finite element calculation. Adaptive methods complicate the load imbalance problem since the work per element is not uniform across the solution domain and changes as the computation proceeds. Therefore, dynamic load balancing is required to maintain global load balance. We describe a dynamic, fine-grained, element-based data migration system that maintains global load balance and is effective in the presence of changing work loads. Global load balance is achieved by overlapping neighborhoods of processors, where each neighborhood performs local load balancing. The method utilizes an automatic element management system library to which a programmer integrates the application's computational description. The library's flexibility supports a large class of finite element and finite difference based applications.
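Neighborhood-based local balancing of this kind is often realized as a diffusion scheme: each processor exchanges a fraction of the load difference with each neighbor, and because neighborhoods overlap, repeated local sweeps drive the whole system toward global balance. A one-sweep sketch (generic diffusion, not the paper's element-migration library):

```python
def diffuse(loads, edges, alpha=0.5):
    """One diffusion sweep over a processor graph: for each edge
    (i, j), shift a fraction alpha of half the load difference from
    the heavier to the lighter side. Repeated sweeps converge toward
    uniform load because total load is conserved at every step."""
    delta = [0.0] * len(loads)
    for i, j in edges:
        flow = alpha * (loads[i] - loads[j]) / 2.0
        delta[i] -= flow
        delta[j] += flow
    return [l + d for l, d in zip(loads, delta)]
```

In an element-based system the computed `flow` would be rounded to a whole number of elements to migrate, chosen near the partition boundary to keep communication local.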
Comparison of Iterative and Non-Iterative Strain-Gage Balance Load Calculation Methods
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2010-01-01
The accuracy of iterative and non-iterative strain-gage balance load calculation methods was compared using data from the calibration of a force balance. Two iterative and one non-iterative method were investigated. In addition, transformations were applied to balance loads in order to process the calibration data in both direct read and force balance format. NASA's regression model optimization tool BALFIT was used to generate optimized regression models of the calibration data for each of the three load calculation methods. This approach made sure that the selected regression models met strict statistical quality requirements. The comparison of the standard deviation of the load residuals showed that the first iterative method may be applied to data in both the direct read and force balance format. The second iterative method, on the other hand, implicitly assumes that the primary gage sensitivities of all balance gages exist. Therefore, the second iterative method only works if the given balance data is processed in force balance format. The calibration data set was also processed using the non-iterative method. Standard deviations of the load residuals for the three load calculation methods were compared. Overall, the standard deviations show very good agreement. The load prediction accuracies of the three methods appear to be compatible as long as regression models used to analyze the calibration data meet strict statistical quality requirements. Recent improvements of the regression model optimization tool BALFIT are also discussed in the paper.
Influence of Primary Gage Sensitivities on the Convergence of Balance Load Iterations
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert Manfred
2012-01-01
The connection between the convergence of wind tunnel balance load iterations and the existence of the primary gage sensitivities of a balance is discussed. First, basic elements of two load iteration equations that the iterative method uses in combination with results of a calibration data analysis for the prediction of balance loads are reviewed. Then, the connection between the primary gage sensitivities, the load format, the gage output format, and the convergence characteristics of the load iteration equation choices is investigated. A new criterion is also introduced that may be used to objectively determine if the primary gage sensitivity of a balance gage exists. Then, it is shown that both load iteration equations will converge as long as a suitable regression model is used for the analysis of the balance calibration data, the combined influence of nonlinear terms of the regression model is very small, and the primary gage sensitivities of all balance gages exist. The last requirement is fulfilled, e.g., if force balance calibration data is analyzed in force balance format. Finally, it is demonstrated that only one of the two load iteration equation choices, i.e., the iteration equation used by the primary load iteration method, converges if one or more primary gage sensitivities are missing. This situation may occur, e.g., if force balance calibration data is analyzed in direct read format using the original gage outputs. Data from the calibration of a six component force balance is used to illustrate the connection between the convergence of the load iteration equation choices and the existence of the primary gage sensitivities.
Assessment of the Uniqueness of Wind Tunnel Strain-Gage Balance Load Predictions
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2016-01-01
A new test was developed to assess the uniqueness of wind tunnel strain-gage balance load predictions that are obtained from regression models of calibration data. The test helps balance users to gain confidence in load predictions of non-traditional balance designs. It also makes it possible to better evaluate load predictions of traditional balances that are not used as originally intended. The test works for both the Iterative and Non-Iterative Methods that are used in the aerospace testing community for the prediction of balance loads. It is based on the hypothesis that the total number of independently applied balance load components must always match the total number of independently measured bridge outputs or bridge output combinations. This hypothesis is supported by a control volume analysis of the inputs and outputs of a strain-gage balance. It is concluded from the control volume analysis that the loads and bridge outputs of a balance calibration data set must separately be tested for linear independence because it cannot always be guaranteed that a linearly independent load component set will result in linearly independent bridge output measurements. Simple linear math models for the loads and bridge outputs in combination with the variance inflation factor are used to test for linear independence. A highly unique and reversible mapping between the applied load component set and the measured bridge output set is guaranteed to exist if the maximum variance inflation factor of both sets is less than the literature recommended threshold of five. Data from the calibration of a six-component force balance is used to illustrate the application of the new test to real-world data.
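The linear-independence test described here rests on the variance inflation factor: regress each column of the load (or bridge-output) set on the remaining columns and compute VIF = 1/(1 - R^2). A sketch of computing the maximum VIF with ordinary least squares (generic statistics, not BALFIT's implementation):

```python
import numpy as np

def max_vif(X):
    """Maximum variance inflation factor over the columns of X.
    For column k, fit it by OLS on the other columns (plus an
    intercept); VIF_k = SS_tot / SS_res, i.e. 1/(1 - R^2).
    Values below the literature threshold of ~5 indicate the
    columns are effectively linearly independent."""
    n, p = X.shape
    vifs = []
    for k in range(p):
        y = X[:, k]
        A = np.column_stack([np.ones(n), np.delete(X, k, axis=1)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        ss_res = float(np.sum((y - A @ coef) ** 2))
        ss_tot = float(np.sum((y - y.mean()) ** 2))
        vifs.append(ss_tot / ss_res if ss_res > 1e-12 else float("inf"))
    return max(vifs)
```

Running this once on the applied-load matrix and once on the measured-output matrix, and checking both maxima against 5, is the essence of the uniqueness test.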
Fast computing global structural balance in signed networks based on memetic algorithm
NASA Astrophysics Data System (ADS)
Sun, Yixiang; Du, Haifeng; Gong, Maoguo; Ma, Lijia; Wang, Shanfeng
2014-12-01
Structural balance is a large area of study in signed networks, and it is intrinsically a global property of the whole network. Computing global structural balance in signed networks, which has attracted some attention in recent years, is to measure how unbalanced a signed network is; it is a nondeterministic polynomial-time hard problem. Many approaches have been developed to compute global balance, but the results they obtain are partial and unsatisfactory. In this study, the computation of global structural balance is solved as an optimization problem using a Memetic Algorithm. The optimization algorithm, named Meme-SB, is proposed to optimize an evaluation function, the energy function, which is used to compute a distance to exact balance. Our proposed algorithm combines a Genetic Algorithm and a greedy strategy as the local search procedure. Experiments on social and biological networks show the excellent effectiveness and efficiency of the proposed method.
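The energy function being minimized can be illustrated in its simplest form: given a partition of the nodes into two groups, count the "frustrated" edges, that is, positive edges across groups and negative edges within a group. An exactly balanced network admits a partition with energy zero, so the minimum energy measures the distance to exact balance. A sketch (two-group case only; the paper's formulation may differ):

```python
def energy(edges, side):
    """Energy of a two-group partition of a signed network.
    edges: iterable of (i, j, sign) with sign +1 or -1.
    side:  mapping node -> group label (0 or 1).
    A frustrated edge is a positive edge across groups or a
    negative edge within a group; energy = frustrated-edge count."""
    return sum(1 for i, j, s in edges
               if (s > 0) == (side[i] != side[j]))
```

A memetic optimizer like the one described would evolve candidate `side` assignments with genetic operators and then greedily flip individual nodes whenever a flip lowers this count.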
Assessment of New Load Schedules for the Machine Calibration of a Force Balance
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Gisler, R.; Kew, R.
2015-01-01
New load schedules for the machine calibration of a six-component force balance are currently being developed and evaluated at the NASA Ames Balance Calibration Laboratory. One of the proposed load schedules is discussed in the paper. It has a total of 2082 points that are distributed across 16 load series. Several criteria were applied to define the load schedule. It was decided, for example, to specify the calibration load set in force balance format as this approach greatly simplifies the definition of the lower and upper bounds of the load schedule. In addition, all loads are assumed to be applied in a calibration machine by using the one-factor-at-a-time approach. At first, all single-component loads are applied in six load series. Then, three two-component load series are applied. They consist of the load pairs (N1, N2), (S1, S2), and (RM, AF). Afterwards, four three-component load series are applied. They consist of the combinations (N1, N2, AF), (S1, S2, AF), (N1, N2, RM), and (S1, S2, RM). In the next step, one four-component load series is applied. It is the load combination (N1, N2, S1, S2). Finally, two five-component load series are applied. They are the load combination (N1, N2, S1, S2, AF) and (N1, N2, S1, S2, RM). The maximum difference between loads of two subsequent data points of the load schedule is limited to 33 % of capacity. This constraint helps avoid unwanted load "jumps" in the load schedule that can have a negative impact on the performance of a calibration machine. Only loadings of the single- and two-component load series are loaded to 100 % of capacity. This approach was selected because it keeps the total number of calibration points to a reasonable limit while still allowing for the application of some of the more complex load combinations. Data from two of NASA's force balances is used to illustrate important characteristics of the proposed 2082-point calibration load schedule.
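The 33%-of-capacity step constraint described above is easy to state as a check over consecutive points of a schedule. A small sketch (the tuple-of-component-loads data layout is a hypothetical choice of ours):

```python
def max_step_ok(points, capacities, limit=0.33):
    """Check that no load component changes by more than `limit`
    times its capacity between consecutive points of a schedule,
    i.e., that the schedule contains no load 'jumps' that could
    degrade calibration-machine performance."""
    for prev, cur in zip(points, points[1:]):
        for p, c, cap in zip(prev, cur, capacities):
            if abs(c - p) > limit * cap + 1e-12:
                return False
    return True
```

A schedule generator can call this as a final validation pass, inserting intermediate points wherever a transition between load series would otherwise exceed the limit.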
NASA Technical Reports Server (NTRS)
Sohn, Andrew; Biswas, Rupak; Simon, Horst D.
1996-01-01
The computational requirements for an adaptive solution of unsteady problems change as the simulation progresses. This causes workload imbalance among processors on a parallel machine which, in turn, requires significant data movement at runtime. We present a new dynamic load-balancing framework, called JOVE, that balances the workload across all processors with a global view. Whenever the computational mesh is adapted, JOVE is activated to eliminate the load imbalance. JOVE has been implemented on an IBM SP2 distributed-memory machine in MPI for portability. Experimental results for two model meshes demonstrate that mesh adaption with load balancing gives more than a sixfold improvement over one without load balancing. We also show that JOVE gives a 24-fold speedup on 64 processors compared to sequential execution.
LAKE MICHIGAN MASS BALANCE: ATRAZINE MODELING AND LOADS
The Lake Michigan Mass Balance Study measured PCBs, mercury, trans-nonachlor, and atrazine in rivers, the atmosphere, sediments, lake water, and the food chain. A mathematical model will predict what effect reducing pollution will have on the lake, and its large fish (lake trout ...
A Load Balanced Domain Decomposition Method Using Wavelet Analysis
Jameson, L; Johnson, J; Hesthaven, J
2001-05-31
Wavelet Analysis provides an orthogonal basis set which is localized in both the physical space and the Fourier transform space. We present here a domain decomposition method that uses wavelet analysis to maintain roughly uniform error throughout the computation domain while keeping the computational work balanced in a parallel computing environment.
Balancing the Load: How to Engage Counselors in School Improvement
ERIC Educational Resources Information Center
Mallory, Barbara J.; Jackson, Mary H.
2007-01-01
Principals cannot lead the school improvement process alone. They must enlist the help of others in the school community. School counselors, whose role is often viewed as peripheral and isolated from teaching and learning, can help principals, teachers, students, and parents balance the duties and responsibilities involved in continuous student…
Dynamic load balancing data centric storage for wireless sensor networks.
Song, Seokil; Bok, Kyoungsoo; Kwak, Yun Sik; Goo, Bongeun; Kwak, Youngsik; Ko, Daesik
2010-01-01
In this paper, a new data centric storage that dynamically adapts to workload changes is proposed. The proposed data centric storage distributes the load of hot spot areas to neighboring sensor nodes by using a multilevel grid technique. The proposed method is also able to use existing routing protocols such as GPSR (Greedy Perimeter Stateless Routing) with small changes. Through simulation, the proposed method is shown to enhance the lifetime of sensor networks over one of the state-of-the-art data centric storages. We implement the proposed method on an operating system for sensor networks and evaluate its performance using a simulation tool. PMID:22163472
Using Multithreading for the Automatic Load Balancing of 2D Adaptive Finite Element Meshes
NASA Technical Reports Server (NTRS)
Heber, Gerd; Biswas, Rupak; Thulasiraman, Parimala; Gao, Guang R.; Bailey, David H. (Technical Monitor)
1998-01-01
In this paper, we present a multi-threaded approach for the automatic load balancing of adaptive finite element (FE) meshes. The platform of our choice is the EARTH multi-threaded system which offers sufficient capabilities to tackle this problem. We implement the adaption phase of FE applications on triangular meshes, and exploit the EARTH token mechanism to automatically balance the resulting irregular and highly nonuniform workload. We discuss the results of our experiments on EARTH-SP2, an implementation of EARTH on the IBM SP2, with different load balancing strategies that are built into the runtime system.
Using Multi-threading for the Automatic Load Balancing of 2D Adaptive Finite Element Meshes
NASA Technical Reports Server (NTRS)
Heber, Gerd; Biswas, Rupak; Thulasiraman, Parimala; Gao, Guang R.; Saini, Subhash (Technical Monitor)
1998-01-01
In this paper, we present a multi-threaded approach for the automatic load balancing of adaptive finite element (FE) meshes. The platform of our choice is the EARTH multi-threaded system which offers sufficient capabilities to tackle this problem. We implement the adaption phase of FE applications on triangular meshes and exploit the EARTH token mechanism to automatically balance the resulting irregular and highly nonuniform workload. We discuss the results of our experiments on EARTH-SP2, an implementation of EARTH on the IBM SP2, with different load balancing strategies that are built into the runtime system.
Ebtehaj, Isa; Bonakdari, Hossein
2014-01-01
The existence of sediments in wastewater greatly affects the performance of the sewer and wastewater transmission systems. Increased sedimentation in wastewater collection systems causes problems such as reduced transmission capacity and early combined sewer overflow. The article reviews the performance of the genetic algorithm (GA) and imperialist competitive algorithm (ICA) in minimizing the target function (mean square error of observed and predicted Froude number). To study the impact of bed load transport parameters, six different models based on four non-dimensional groups have been presented. Moreover, the roulette wheel selection method is used to select the parents. For the selected model, the ICA (root mean square error (RMSE) = 0.007, mean absolute percentage error (MAPE) = 3.5%) shows better results than the GA (RMSE = 0.007, MAPE = 5.6%). In all six models, the ICA returns better results than the GA. Also, the results of these two algorithms were compared with a multi-layer perceptron and existing equations. PMID:25429460
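Roulette wheel selection, mentioned above as the parent-selection method, draws individuals with probability proportional to their fitness share. A standard sketch (a generic GA utility, not the authors' code):

```python
import random

def roulette_select(population, fitness):
    """Roulette-wheel parent selection: spin a wheel whose slice
    sizes are proportional to fitness; fitter individuals are
    chosen more often, but no individual is excluded outright
    (unless its fitness is zero)."""
    total = sum(fitness)
    r = random.uniform(0.0, total)   # where the 'ball' lands
    acc = 0.0
    for ind, f in zip(population, fitness):
        acc += f
        if acc >= r:
            return ind
    return population[-1]            # guard against rounding
```

Since the algorithms here minimize an error, the fitness fed to the wheel would typically be a decreasing transform of the error, e.g. 1/(1 + MSE).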
Effects of partitioning and scheduling sparse matrix factorization on communication and load balance
NASA Technical Reports Server (NTRS)
Venugopal, Sesh; Naik, Vijay K.
1991-01-01
A block based, automatic partitioning and scheduling methodology is presented for sparse matrix factorization on distributed memory systems. Using experimental results, this technique is analyzed for communication and load imbalance overhead. To study the performance effects, these overheads were compared with those obtained from a straightforward 'wrap mapped' column assignment scheme. All experimental results were obtained using test sparse matrices from the Harwell-Boeing data set. The results show that there is a communication and load balance tradeoff. The block based method results in lower communication cost whereas the wrap mapped scheme gives better load balance.
NASA Astrophysics Data System (ADS)
Furuichi, M.; Nishiura, D.
2015-12-01
Fully Lagrangian methods such as Smoothed Particle Hydrodynamics (SPH) and the Discrete Element Method (DEM) have been widely used to solve continuum and particle motions in the field of computational geodynamics. These mesh-free methods are well suited to problems with complex geometries and boundaries. In addition, their Lagrangian nature allows non-diffusive advection, useful for tracking history-dependent properties (e.g. rheology) of the material. These potential advantages over mesh-based methods offer effective numerical applications to geophysical flow and tectonic processes, for example, tsunamis with free surfaces and floating bodies, magma intrusion with fracture of rock, and shear-zone pattern generation in granular deformation. In order to investigate such geodynamical problems with particle-based methods, millions to billions of particles are required for a realistic simulation. Parallel computing is therefore important for handling such huge computational costs. An efficient parallel implementation of SPH and DEM methods is, however, known to be difficult, especially for distributed-memory architectures. Lagrangian methods inherently show a workload imbalance problem when parallelized with domains fixed in space, because particles move around and workloads change during the simulation. Dynamic load balance is therefore a key technique for performing large-scale SPH and DEM simulations. In this work, we present a parallel implementation technique for the SPH and DEM methods utilizing dynamic load-balancing algorithms, toward high-resolution simulations over large domains on massively parallel supercomputer systems. Our method treats the imbalance in the execution time of each MPI process as the nonlinear term of the parallel domain decomposition and minimizes it with a Newton-like iteration method. In order to perform flexible domain decomposition in space, the slice-grid algorithm is used. Numerical tests show that our
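Assuming a one-dimensional slice-grid decomposition with positive measured times and piecewise-uniform work inside each slice, one boundary-adjustment step of the kind described can be sketched as below. This proportional update is an illustrative stand-in for the paper's Newton-like iteration, not its actual scheme:

```python
def rebalance_slices(boundaries, measured_times):
    """One load-balancing step for a 1-D slice-grid decomposition.

    boundaries:     sorted coordinates [x0, ..., xP] delimiting P slices.
    measured_times: execution time of each slice in the last step (all > 0).
    Assumes work density is uniform inside each slice and moves the
    interior boundaries so every slice would receive an equal share.
    """
    nslices = len(measured_times)
    total = sum(measured_times)
    target = total / nslices
    # cumulative work at each old boundary
    cum = [0.0]
    for t in measured_times:
        cum.append(cum[-1] + t)
    new = [boundaries[0]]
    slice_idx = 0
    for k in range(1, nslices):
        goal = k * target
        while cum[slice_idx + 1] < goal:
            slice_idx += 1
        # linear interpolation inside the slice containing `goal`
        frac = (goal - cum[slice_idx]) / (cum[slice_idx + 1] - cum[slice_idx])
        left, right = boundaries[slice_idx], boundaries[slice_idx + 1]
        new.append(left + frac * (right - left))
    new.append(boundaries[-1])
    return new
```

For two slices on [0, 2] with measured times 3 and 1, the interior boundary moves from 1.0 toward the overloaded slice, to 2/3, so each new slice carries half the measured work.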
Thulasidasan, Sunil; Kasiviswanathan, Shiva; Eidenbenz, Stephan; Romero, Philip
2010-01-01
We re-examine the problem of load balancing in conservatively synchronized parallel, discrete-event simulations executed on high-performance computing clusters, focusing on simulations where computational and messaging load tend to be spatially clustered. Such domains are frequently characterized by the presence of geographic 'hot-spots' - regions that generate significantly more simulation events than others. Examples of such domains include simulations of urban regions, transportation networks, and networks where interaction between entities is often constrained by physical proximity. Noting that in conservatively synchronized parallel simulations, the speed of execution of the simulation is determined by the slowest (i.e., most heavily loaded) simulation process, we study different partitioning strategies for achieving equitable processor-load distribution in domains with spatially clustered load. In particular, we study the effectiveness of partitioning via spatial scattering to achieve optimal load balance. In this partitioning technique, nearby entities are explicitly assigned to different processors, thereby scattering the load across the cluster. This is motivated by two observations, namely, (i) since load is spatially clustered, spatial scattering should, intuitively, spread the load across the compute cluster, and (ii) in parallel simulations, equitable distribution of CPU load is a greater determinant of execution speed than message passing overhead. Through large-scale simulation experiments - both of abstracted and real simulation models - we observe that scatter partitioning, even with its greatly increased messaging overhead, significantly outperforms more conventional spatial partitioning techniques that seek to reduce messaging overhead. Further, even if hot-spots change over the course of the simulation, if the underlying feature of spatial clustering is retained, load continues to be balanced with spatial scattering leading us to the observation that
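The scatter-partitioning idea, explicitly assigning nearby entities to different processors, can be sketched as a round-robin assignment over spatially sorted entities. The field names and sort key below are illustrative assumptions, not the paper's implementation:

```python
def scatter_partition(entities, nprocs):
    """Assign spatially sorted entities to processors round-robin, so that
    neighbouring entities land on different processors and clustered load
    ("hot-spots") is spread across the whole cluster."""
    # sort by position so consecutive entities are spatial neighbours
    ordered = sorted(entities, key=lambda e: (e["x"], e["y"]))
    assignment = {}
    for i, ent in enumerate(ordered):
        assignment[ent["id"]] = i % nprocs
    return assignment
```

A spatial hot-spot then contributes events to every processor rather than overloading the one that owns its region, at the price of extra inter-processor messaging.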
Dynamic Load Balancing for Grid Partitioning on a SP-2 Multiprocessor: A Framework
NASA Technical Reports Server (NTRS)
Sohn, Andrew; Simon, Horst; Lasinski, T. A. (Technical Monitor)
1994-01-01
Computational requirements of full scale computational fluid dynamics change as computation progresses on a parallel machine. The change in computational intensity causes workload imbalance of processors, which in turn requires a large amount of data movement at runtime. If parallel CFD is to be successful on a parallel or massively parallel machine, balancing of the runtime load is indispensable. Here a framework is presented for dynamic load balancing for CFD applications, called Jove. One processor is designated as a decision maker, Jove, while others are assigned to computational fluid dynamics. Processors running CFD send flags to Jove at a predetermined number of iterations to initiate load balancing. Jove starts working on load balancing while other processors continue working with the current data and load distribution. Jove goes through several steps to decide whether the new data should be taken: preliminary evaluation, partitioning, processor reassignment, cost evaluation, and decision. Jove running on a single IBM SP2 node has been completely implemented. Preliminary experimental results show that the Jove approach to dynamic load balancing can be effective for full scale grid partitioning on the target machine IBM SP2.
Coupling Algorithms for Calculating Sensitivities of Population Balances
Man, P. L. W.; Kraft, M.; Norris, J. R.
2008-09-01
We introduce a new class of stochastic algorithms for calculating parametric derivatives of the solution of the space-homogeneous Smoluchowski coagulation equation. Currently, it is very difficult to produce low variance estimates of these derivatives in reasonable amounts of computational time through the use of stochastic methods. These new algorithms consider a central difference estimator of the parametric derivative, which is calculated by evaluating the coagulation equation at two different parameter values simultaneously, and achieve variance reduction by maximising the covariance between these. The two coupling strategies ('Single' and 'Double') have been compared to the case where there is no coupling ('Independent'). Both coupling algorithms converge, and the Double coupling is the most 'efficient' algorithm. For the numerical example chosen, we obtain a factor of about 100 in efficiency in the best case (small system evolution time and small parameter perturbation).
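A toy version of a coupled central-difference estimator illustrates the variance-reduction principle: the two parameter values share the same random draws, which maximizes their covariance. The quadratic payoff below is an assumed stand-in for an actual Smoluchowski solve, not the paper's algorithm:

```python
import random

def coupled_central_difference(p, delta, nsamples, seed=0):
    """Central-difference estimate of d/dp E[f(p, Z)] where both parameter
    values are evaluated on the SAME random draw ("coupling"), so the two
    evaluations are highly correlated and their difference has low variance.
    Toy payoff: f(p, z) = (p + z)**2, whose derivative in expectation is 2p."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(nsamples):
        z = rng.gauss(0.0, 1.0)   # one draw reused at p-delta and p+delta
        f_plus = (p + delta + z) ** 2
        f_minus = (p - delta + z) ** 2
        total += (f_plus - f_minus) / (2.0 * delta)
    return total / nsamples
```

With independent draws for the two evaluations, the difference of two noisy quantities would dominate the estimate; with coupling, the noise largely cancels in the subtraction.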
Sivakumar, B.; Bhalaji, N.; Sivakumar, D.
2014-01-01
In mobile ad hoc networks connectivity is always an issue of concern. Due to dynamism in the behavior of mobile nodes, efficiency shall be achieved only with the assumption of good network infrastructure. Presence of critical links results in deterioration which should be detected in advance to retain the prevailing communication setup. This paper discusses a short survey on the specialized algorithms and protocols related to energy efficient load balancing for critical link detection in the recent literature. This paper also suggests a machine learning based hybrid power-aware approach for handling critical nodes via load balancing. PMID:24790546
The work/exchange model: A generalized approach to dynamic load balancing
Wikstrom, M.C.
1991-12-20
A crucial concern in software development is reducing program execution time. Parallel processing is often used to meet this goal. However, parallel processing efforts can lead to many pitfalls and problems. One such problem is distributing the workload among processors in such a way that minimum execution time is obtained. The common approach is to use a load balancer to distribute equal or nearly equal quantities of workload to each processor. Unfortunately, this approach relies on a naive definition of load imbalance and often fails to achieve the desired goal. A more sophisticated definition should account for the effects of additional factors, including communication delay costs, network contention, and architectural issues. Consideration of these additional factors led us to the realization that optimal load distribution does not always result from equal load distribution. In this dissertation, we tackle the difficult problem of defining load imbalance. This is accomplished through the development of a parallel program model called the Generalized Work/Exchange Model. Associated with the model are equations, for a restricted set of deterministically balanced programs, that characterize idle time, elapsed time, and potential speedup. With the aid of the model, several common myths about load imbalance are exposed. A useful application called a load balancer enhancer is also presented, which is applicable to the more general, quasi-static, load-unbalanced program.
Accelerating Multiple Compound Comparison Using LINGO-Based Load-Balancing Strategies on Multi-GPUs
Lin, Chun-Yuan; Wang, Chung-Hung; Hung, Che-Lun; Lin, Yu-Shiang
2015-01-01
Compound comparison is an important task in computational chemistry. From the comparison results, potential inhibitors can be found and then used in pharmaceutical experiments. The time complexity of a pairwise compound comparison is O(n^2), where n is the maximal length of the compounds. In general, the length of compounds is in the tens to hundreds, and the computation time is small. However, more and more compounds have now been synthesized and extracted, even more than tens of millions. Therefore, comparison is still time-consuming for a large number of compounds (the multiple compound comparison problem, abbreviated MCC). The intrinsic time complexity of the MCC problem is O(k^2 n^2) with k compounds of maximal length n. In this paper, we propose a GPU-based algorithm for the MCC problem, called CUDA-MCC, on single and multiple GPUs. Four LINGO-based load-balancing strategies are considered in CUDA-MCC in order to accelerate the computation across thread blocks on GPUs. CUDA-MCC was implemented in C+OpenMP+CUDA. In our experiments, CUDA-MCC ran 45 times and 391 times faster than its CPU version on a single NVIDIA Tesla K20m GPU card and a dual NVIDIA Tesla K20m GPU card, respectively. PMID:26491652
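The pairwise comparison underlying MCC can be illustrated with a LINGO-style similarity: the multisets of length-4 SMILES substrings of two compounds compared with a Tanimoto-like ratio. This is a plain-Python sketch of the general technique, not the CUDA-MCC code:

```python
from collections import Counter

def lingo_profile(smiles, q=4):
    """Multiset of length-q substrings ("LINGOs") of a SMILES string."""
    return Counter(smiles[i:i + q] for i in range(len(smiles) - q + 1))

def lingo_similarity(a, b):
    """Tanimoto-style similarity between two LINGO profiles:
    sum of min counts over sum of max counts, in [0, 1]."""
    pa, pb = lingo_profile(a), lingo_profile(b)
    keys = set(pa) | set(pb)
    inter = sum(min(pa[k], pb[k]) for k in keys)
    union = sum(max(pa[k], pb[k]) for k in keys)
    return inter / union if union else 0.0
```

Because the cost of one comparison grows with the compound lengths, balancing k^2 such tasks across GPU thread blocks by predicted cost rather than by count is what the paper's load-balancing strategies address.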
STAR load balancing and tiered-storage infrastructure strategy for ultimate db access
NASA Astrophysics Data System (ADS)
Arkhipkin, D.; Lauret, J.; Betts, W.; Didenko, L.; Van Buren, G.
2011-12-01
In recent years, the STAR experiment's database demands have grown in accord not only with simple facility growth, but also with a growing physics program. In addition to the accumulated metadata from a decade of operations, refinements to detector calibrations force user analysis to access database information post data production. Users may access any year's data at any point in time, causing a near random access of the metadata queried, contrary to time-organized production cycles. Moreover, complex online event selection algorithms created a query scarcity ("sparsity") scenario for offline production further impacting performance. Fundamental changes in our hardware approach were hence necessary to improve query speed. Initial strategic improvements were focused on developing fault-tolerant, load-balanced access to a multi-slave infrastructure. Beyond that, we explored, tested and quantified the benefits of introducing a Tiered storage architecture composed of conventional drives, solid-state disks, and memory-resident databases as well as leveraging the use of smaller database services fitting in memory. The results of our extensive testing in real life usage are presented.
NASA Technical Reports Server (NTRS)
Krasteva, Denitza T.
1998-01-01
Multidisciplinary design optimization (MDO) for large-scale engineering problems poses many challenges (e.g., the design of an efficient concurrent paradigm for global optimization based on disciplinary analyses, expensive computations over vast data sets, etc.) This work focuses on the application of distributed schemes for massively parallel architectures to MDO problems, as a tool for reducing computation time and solving larger problems. The specific problem considered here is configuration optimization of a high speed civil transport (HSCT), and the efficient parallelization of the embedded paradigm for reasonable design space identification. Two distributed dynamic load balancing techniques (random polling and global round robin with message combining) and two necessary termination detection schemes (global task count and token passing) were implemented and evaluated in terms of effectiveness and scalability to large problem sizes and a thousand processors. The effect of certain parameters on execution time was also inspected. Empirical results demonstrated stable performance and effectiveness for all schemes, and the parametric study showed that the selected algorithmic parameters have a negligible effect on performance.
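The random-polling scheme evaluated above can be illustrated with a sequential simulation in which an idle worker polls a randomly chosen peer and steals half of its tasks; here termination is detected with a simple global task count, a simplification of the distributed token-passing detection studied in the work:

```python
import random

def random_polling_run(queues, seed=0):
    """Sequential simulation of random-polling dynamic load balancing.
    queues: one task list per worker. Returns (tasks executed, steals)."""
    rng = random.Random(seed)
    nworkers = len(queues)
    done = 0
    steals = 0
    remaining = sum(len(q) for q in queues)   # global task count
    while remaining:
        for w in range(nworkers):
            if queues[w]:
                queues[w].pop()               # "execute" one task
                done += 1
                remaining -= 1
            else:
                victim = rng.randrange(nworkers)   # random polling
                if victim != w and len(queues[victim]) > 1:
                    half = len(queues[victim]) // 2
                    queues[w].extend(queues[victim][-half:])
                    del queues[victim][-half:]
                    steals += 1
    return done, steals
```

Starting from one fully loaded worker and three idle ones, the idle workers acquire work by stealing, and the run ends exactly when the global task count hits zero.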
Load-balancing techniques for a parallel electromagnetic particle-in-cell code
Plimpton, Steven J.; Seidel, David B.; Pasik, Michael F.; Coats, Rebecca S.
2000-01-01
QUICKSILVER is a 3-d electromagnetic particle-in-cell simulation code developed and used at Sandia to model relativistic charged particle transport. It models the time response of electromagnetic fields and low-density plasmas in a self-consistent manner: the fields push the plasma particles and the plasma current modifies the fields. Through an LDRD project, a new parallel version of QUICKSILVER was created to enable large-scale plasma simulations to be run on massively-parallel distributed-memory supercomputers with thousands of processors, such as the Intel Tflops and DEC CPlant machines at Sandia. The new parallel code implements nearly all the features of the original serial QUICKSILVER and can be run on any platform which supports the message-passing interface (MPI) standard as well as on single-processor workstations. This report describes basic strategies useful for parallelizing and load-balancing particle-in-cell codes, outlines the parallel algorithms used in this implementation, and provides a summary of the modifications made to QUICKSILVER. It also highlights a series of benchmark simulations which have been run with the new code that illustrate its performance and parallel efficiency. These calculations have up to a billion grid cells and particles and were run on thousands of processors. This report also serves as a user manual for people wishing to run parallel QUICKSILVER.
The Balanced Cross-Layer Design Routing Algorithm in Wireless Sensor Networks Using Fuzzy Logic.
Li, Ning; Martínez, José-Fernán; Hernández Díaz, Vicente
2015-01-01
Recently, the cross-layer design for the wireless sensor network communication protocol has become more and more important and popular. Considering the disadvantages of the traditional cross-layer routing algorithms, in this paper we propose a new fuzzy logic-based routing algorithm, named the Balanced Cross-layer Fuzzy Logic (BCFL) routing algorithm. In BCFL, we use the cross-layer parameters' dispersion as the fuzzy logic inference system inputs. Moreover, we give each cross-layer parameter a dynamic weight according to the value of the dispersion. To obtain a balanced solution, a parameter whose dispersion is large will have a small weight, and vice versa. In order to compare it with the traditional cross-layer routing algorithms, BCFL is evaluated through extensive simulations. The simulation results show that the new routing algorithm can handle the multiple constraints without increasing the complexity of the algorithm and can achieve the most balanced performance in selecting the next-hop relay node. Moreover, the Balanced Cross-layer Fuzzy Logic routing algorithm can adapt to dynamic changes in the network conditions and topology effectively. PMID:26266412
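The dispersion-to-weight mapping described (large dispersion gives small weight, and vice versa) can be sketched as below. Using the standard deviation as "dispersion" and a normalized inverse as the weight are assumptions for illustration; the paper defines these inside its fuzzy inference system:

```python
def dispersion_weights(params):
    """Dynamic weight per cross-layer parameter, inversely related to the
    dispersion of that parameter's values across candidate relay nodes.
    params: {parameter name: list of observed values}."""
    def dispersion(values):
        mean = sum(values) / len(values)
        return (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    disp = {name: dispersion(vals) for name, vals in params.items()}
    inv = {name: 1.0 / (d + 1e-9) for name, d in disp.items()}  # avoid /0
    total = sum(inv.values())
    return {name: v / total for name, v in inv.items()}
```

A parameter that barely varies among candidates (stable residual energy, say) thus dominates the weighted decision, while a wildly varying one contributes little.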
An evaluation of inside surface heat balance models for cooling load calculations
Liesen, R.J.; Pedersen, C.O.
1997-12-31
The heat balance method is a fundamental procedure that can be used for a specified control volume to describe building physics. With a better understanding of building physics and the cost-effectiveness of computers, these types of procedures are accessible to all practicing engineers. The heat balance method describes the processes using the three fundamental modes of heat transfer: conduction, convection, and radiation. The control volumes naturally divide the building processes into an outside balance, an inside balance, an air balance, and conduction through the building elements. This allows the building heat balance to be solved in a number of fundamental ways. This paper looks at the general formulation of the inside surface heat balance from the conduction through the building elements to the radiant exchange and convection to the air in the zone. Development of many radiant exchange models is shown; these models range from the exact solutions using uniform radiosity networks and exact view factors to mean radiant temperature (MRT) and area-weighted view factors. These radiant exchange models are directly compared to each other for a simple zone with varying aspect ratios. The radiant exchange models are then compared to determine their effect on the cooling load. Finally, other parameters that affect the inside surface heat balance are investigated to determine their sensitivity to the cooling load.
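The simplest of the radiant-exchange models compared above, the mean radiant temperature (MRT) model, reduces each surface's exchange to one with a fictitious surface at the area-emissivity-weighted mean temperature of the zone. A minimal sketch, with field names as assumptions:

```python
def mean_radiant_temperature(surfaces):
    """Area-emissivity-weighted mean radiant temperature of a zone.
    surfaces: list of {"area": m^2, "emissivity": 0..1, "temp": deg C}."""
    num = sum(s["area"] * s["emissivity"] * s["temp"] for s in surfaces)
    den = sum(s["area"] * s["emissivity"] for s in surfaces)
    return num / den
```

The exact radiosity-network models replace this single weighted mean with pairwise view factors between every surface, which is what the paper's comparison quantifies against the cooling load.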
A massively parallel adaptive finite element method with dynamic load balancing
Devine, K.D.; Flaherty, J.E.; Wheat, S.R.; Maccabe, A.B.
1993-05-01
We construct massively parallel, adaptive finite element methods for the solution of hyperbolic conservation laws in one and two dimensions. Spatial discretization is performed by a discontinuous Galerkin finite element method using a basis of piecewise Legendre polynomials. Temporal discretization utilizes a Runge-Kutta method. Dissipative fluxes and projection limiting prevent oscillations near solution discontinuities. The resulting method is of high order and may be parallelized efficiently on MIMD computers. We demonstrate parallel efficiency through computations on a 1024-processor nCUBE/2 hypercube. We also present results using adaptive p-refinement to reduce the computational cost of the method. We describe tiling, a dynamic, element-based data migration system. Tiling dynamically maintains global load balance in the adaptive method by overlapping neighborhoods of processors, where each neighborhood performs local load balancing. We demonstrate the effectiveness of the dynamic load balancing with adaptive p-refinement examples.
A model for resource-aware load balancing on heterogeneous clusters.
Devine, Karen Dragon; Flaherty, Joseph E.; Teresco, James Douglas; Gervasio, Luis G.; Faik, Jamal
2005-05-01
We address the problem of partitioning and dynamic load balancing on clusters with heterogeneous hardware resources. We propose DRUM, a model that encapsulates hardware resources and their interconnection topology. DRUM provides monitoring facilities for dynamic evaluation of communication, memory, and processing capabilities. Heterogeneity is quantified by merging the information from the monitors to produce a scalar number called 'power.' This power allows DRUM to be used easily by existing load-balancing procedures such as those in the Zoltan Toolkit while placing minimal burden on application programmers. We demonstrate the use of DRUM to guide load balancing in the adaptive solution of a Laplace equation on a heterogeneous cluster. We observed a significant reduction in execution time compared to traditional methods.
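DRUM's scalar "power" can be illustrated as a weighted combination of monitored compute and communication capacities, with work shares handed out in proportion. The linear form and the example weights below are illustrative assumptions, not DRUM's actual formula:

```python
def drum_powers(nodes, w_cpu=0.5, w_comm=0.5):
    """Collapse each node's monitored compute and communication capacity
    into one scalar 'power', then return each node's fractional work share.
    nodes: list of {"name": str, "cpu": capacity, "comm": capacity}."""
    powers = {n["name"]: w_cpu * n["cpu"] + w_comm * n["comm"] for n in nodes}
    total = sum(powers.values())
    return {name: p / total for name, p in powers.items()}
```

A partitioner that accepts per-part target sizes (such as those in the Zoltan Toolkit) can then consume these fractions directly, which is why a single scalar per node places so little burden on the application.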
Game and Balance Multicast Architecture Algorithms for Sensor Grid
Fan, Qingfeng; Wu, Qiongli; Magoulés, Frèdèric; Xiong, Naixue; Vasilakos, Athanasios V.; He, Yanxiang
2009-01-01
We propose a scheme to attain shorter multicast delay and higher efficiency in the data transfer of a sensor grid. Our scheme, within one cluster, seeks the central node and calculates the space and data weight vectors. Then we try to find a new vector composed as a linear combination of the two old ones. We use the equal correlation coefficient between the new and old vectors to find the point of game and balance of the space and data factors, build a binary simple equation, seek the linear parameters, and generate a least-weight path tree. We handled the issue in a quantitative way instead of a qualitative way. Based on this idea, we considered the scheme from both the space and data factors, then built the mathematical model, set up the game and balance relationship, and finally resolved the linear indexes, according to which we improved the transmission efficiency of the sensor grid. Extended simulation results indicate that our scheme attains lower average multicast delay and fewer links used compared with other well-known existing schemes. PMID:22399992
A Framework for Load Balancing of Tensor Contraction Expressions via Dynamic Task Partitioning
Lai, Pai-Wei; Stock, Kevin; Rajbhandari, Samyam; Krishnamoorthy, Sriram; Sadayappan, Ponnuswamy
2013-11-17
In this paper, we introduce Dynamic Load-balanced Tensor Contractions (DLTC), a domain-specific library for efficient task-parallel execution of tensor contraction expressions, a class of computation encountered in quantum chemistry and physics. Our framework decomposes each contraction into smaller units of tasks, represented by an abstraction referred to as iterators. We exploit an extra level of parallelism by having tasks across independent contractions executed concurrently through a dynamic load-balancing runtime. We demonstrate the improved performance, scalability, and flexibility for the computation of tensor contraction expressions on parallel computers using examples from coupled cluster methods.
Prediction Interval Development for Wind-Tunnel Balance Check-Loading
NASA Technical Reports Server (NTRS)
Landman, Drew; Toro, Kenneth G.; Commo, Sean A.; Lynn, Keith C.
2014-01-01
Results from the Facility Analysis Verification and Operational Reliability project revealed a critical gap in capability in ground-based aeronautics research applications. Without a standardized process for check-loading the wind-tunnel balance or the model system, the quality of the aerodynamic force data collected varied significantly between facilities. A prediction interval is required in order to confirm a check-loading; it provides expected upper and lower bounds on the balance load prediction at a given confidence level. A method has been developed that accounts for sources of variability due to calibration and check-load application. The prediction interval calculation method and a case study demonstrating its use are provided. Validation of the method is demonstrated for the case study based on the probability of capture of confirmation points.
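For a calibration model fitted by ordinary least squares, the textbook prediction-interval half-width at a confirmation point is t * s * sqrt(1 + x0'(X'X)^(-1) x0). The sketch below computes that generic quantity; the paper's method additionally folds in check-load application variance components, which are not reproduced here:

```python
import math

def prediction_interval_halfwidth(residuals, dof, t_crit, leverage):
    """Two-sided prediction-interval half-width for a new confirmation
    point of a fitted calibration regression.

    residuals: calibration fit residuals
    dof:       degrees of freedom of the fit
    t_crit:    Student-t critical value at the chosen confidence level
    leverage:  x0' (X'X)^-1 x0 evaluated at the confirmation point
    """
    s2 = sum(r * r for r in residuals) / dof   # residual variance estimate
    return t_crit * math.sqrt(s2 * (1.0 + leverage))
```

A check-loading is then confirmed when the measured balance response falls inside prediction +/- half-width; the fraction of confirmation points captured should match the stated confidence level.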
Madenjian, Charles P.; David, Solomon R.; Pothoven, Steven A.
2012-01-01
We evaluated the performance of the Wisconsin bioenergetics model for lake trout Salvelinus namaycush that were fed ad libitum in laboratory tanks under regimes of low activity and high activity. In addition, we compared model performance under two different model algorithms: (1) balancing the lake trout energy budget on day t based on lake trout energy density on day t and (2) balancing the lake trout energy budget on day t based on lake trout energy density on day t + 1. Results indicated that the model significantly underestimated consumption for both inactive and active lake trout when algorithm 1 was used and that the degree of underestimation was similar for the two activity levels. In contrast, model performance substantially improved when using algorithm 2, as no detectable bias was found in model predictions of consumption for inactive fish and only a slight degree of overestimation was detected for active fish. The energy budget was accurately balanced by using algorithm 2 but not by using algorithm 1. Based on the results of this study, we recommend the use of algorithm 2 to estimate food consumption by fish in the field. Our study results highlight the importance of accurately accounting for changes in fish energy density when balancing the energy budget; furthermore, these results have implications for the science of evaluating fish bioenergetics model performance and for more accurate estimation of food consumption by fish in the field when fish energy density undergoes relatively rapid changes.
NASA Technical Reports Server (NTRS)
Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.
1989-01-01
Computer vision systems employ a sequence of vision algorithms in which the output of an algorithm is the input of the next algorithm in the sequence. Algorithms that constitute such systems exhibit vastly different computational characteristics, and therefore, require different data decomposition techniques and efficient load balancing techniques for parallel implementation. However, since the input data for a task is produced as the output data of the previous task, this information can be exploited to perform knowledge based data decomposition and load balancing. Presented here are algorithms for a motion estimation system. The motion estimation is based on the point correspondence between the involved images which are a sequence of stereo image pairs. Researchers propose algorithms to obtain point correspondences by matching feature points among stereo image pairs at any two consecutive time instants. Furthermore, the proposed algorithms employ non-iterative procedures, which results in saving considerable amounts of computation time. The system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from consecutive time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters.
Genetic Algorithm Used for Load Shedding Based on Sensitivity to Enhance Voltage Stability
NASA Astrophysics Data System (ADS)
Titare, L. S.; Singh, P.; Arya, L. D.
2014-12-01
This paper presents an algorithm to calculate optimum load shedding with voltage stability consideration, based on the sensitivity of a proximity indicator, using a genetic algorithm (GA). A Schur's-inequality-based proximity indicator of the load flow Jacobian has been selected, which indicates the system state. The load flow Jacobian of the system is obtained using the continuation power flow method. If reactive power and active power rescheduling are exhausted, load shedding is the last line of defense to maintain the operational security of the system. Load buses for load shedding have been selected on the basis of the sensitivity of the proximity indicator; the load bus having the largest sensitivity is selected for load shedding. The proposed algorithm predicts the load bus ranking and the optimum load to be shed at each load bus. The algorithm accounts for inequality constraints not only under present operating conditions but also for the predicted next-interval load (with load shedding). The developed algorithm has been implemented on the IEEE 6-bus system. Results have been compared with those obtained using Teaching-Learning-Based Optimization (TLBO), particle swarm optimization (PSO), and its variants.
Design and implementation of web server soft load balancing in small and medium-sized enterprise
NASA Astrophysics Data System (ADS)
Yan, Liu
2011-12-01
With the expansion of business scale, small and medium-sized enterprises have begun to use information platforms to improve their management and competitiveness; the web server thus becomes a core factor constraining an enterprise's informatization effort. This paper puts forward a web server soft load-balancing design scheme suitable for small and medium-sized enterprises, and proves it effective through experiment.
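A minimal sketch of the soft load-balancing idea for a small web tier is weighted round-robin dispatch; the server names and weights below are hypothetical, and the paper's actual scheme may differ.

```python
from itertools import cycle

# Hypothetical backend pool for a small-enterprise web tier; weights model
# heterogeneous server capacity (illustrative, not from the paper).
SERVERS = {"web1": 3, "web2": 2, "web3": 1}

def weighted_round_robin(servers):
    """Yield server names in proportion to their weights, forever."""
    expanded = [name for name, w in servers.items() for _ in range(w)]
    return cycle(expanded)

rr = weighted_round_robin(SERVERS)
first_cycle = [next(rr) for _ in range(6)]   # one full weight cycle
```

Each full cycle sends three requests to web1, two to web2, and one to web3, so capacity differences are respected without any per-request load measurement.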
Portable Parallel Programming for the Dynamic Load Balancing of Unstructured Grid Applications
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Das, Sajal K.; Harvey, Daniel; Oliker, Leonid
1999-01-01
The ability to dynamically adapt an unstructured grid (or mesh) is a powerful tool for solving computational problems with evolving physical features; however, an efficient parallel implementation is rather difficult, particularly from the viewpoint of portability on various multiprocessor platforms. We address this problem by developing PLUM, an automatic and architecture-independent framework for adaptive numerical computations in a message-passing environment. Portability is demonstrated by comparing performance on an SP2, an Origin2000, and a T3E, without any code modifications. We also present a general-purpose load balancer that utilizes symmetric broadcast networks (SBN) as the underlying communication pattern, with the goal of providing a global view of system loads across processors. Experiments on an SP2 and an Origin2000 demonstrate the portability of our approach, which achieves superb load balance at the cost of minimal extra overhead.
Parallelization and load balancing of a comprehensive atmospheric chemistry transport model
NASA Astrophysics Data System (ADS)
Elbern, Hendrik
Chemistry transport models are generally claimed to be well suited for massively parallel processing on distributed memory architectures since the arithmetic-to-communication ratio is usually high. However, this observation proves insufficient to ensure efficient parallel performance as the complexity of the model increases. Modeling the local state of the atmosphere follows very different branches of the modules' code, and greater differences in the computational work load, and consequently in the runtime of individual processors, occur within a time step to a much larger extent than reported for meteorological models. Variable emissions, changes in actinic fluxes, and all processes associated with cloud modeling are highly variable in time and space and are identified as inducing large load imbalances that severely affect the parallel efficiency. This is all the more so when the model domain encompasses heterogeneous meteorological or regional regimes, which impinge dissimilarly on simulations of atmospheric chemistry processes. These conditions hold for the EURAD model applied in this study, whose integration domain covers the European continental scale. Based on a master-worker configuration with a horizontal grid partitioning approach, a method is proposed in which the integration domains of the individual processors are locally adjusted to accommodate load imbalances. This ensures a minimal communication volume and data exchange only with the nearest neighbors. The interior boundary adjustments of the processors are combined with the routine boundary exchange that is required each time step anyway. Two dynamic load balancing schemes were implemented and compared against a conventional equal-area partition and a static load balancing scheme. The methods are devised for massively parallel distributed memory computers of both Single and Multiple Instruction stream, Multiple Data stream (SIMD, MIMD) types. A midsummer episode of highly elevated ozone concentrations
Gammon - A load balancing strategy for local computer systems with multiaccess networks
NASA Technical Reports Server (NTRS)
Baumgartner, Katherine M.; Wah, Benjamin W.
1989-01-01
Consideration is given to an efficient load-balancing strategy, Gammon (global allocation from maximum to minimum in constant time), for distributed computing systems connected by multiaccess local area networks. The broadcast capability of these networks is utilized to implement an identification procedure at the applications level for the maximally and the minimally loaded processors. The search technique has an average overhead which is independent of the number of participating stations. An implementation of Gammon on a network of Sun workstations is described. Its performance is found to be better than that of other known methods.
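The max/min identification that Gammon performs over the multiaccess network can be caricatured in a few lines: assuming every station's load is known after a broadcast round, one task migrates from the most to the least loaded station. The windowed broadcast protocol itself is omitted, and the station names and loads are hypothetical.

```python
# Sketch of Gammon's repeated max/min pairing: each balancing step moves
# one task from the busiest to the idlest station, but only if it helps.
def balance_step(loads):
    """One load-balancing step over a dict of station -> task count."""
    src = max(loads, key=loads.get)       # maximally loaded station
    dst = min(loads, key=loads.get)       # minimally loaded station
    if loads[src] - loads[dst] > 1:       # migrate only when it reduces imbalance
        loads[src] -= 1
        loads[dst] += 1
    return loads

stations = {"sun1": 9, "sun2": 2, "sun3": 5}
for _ in range(4):
    balance_step(stations)
```

After a few steps the spread between the busiest and idlest station is at most one task, which is the fixed point of this migration rule.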
NASA Astrophysics Data System (ADS)
Engelder, Terry; Fischer, Mark P.
1996-05-01
Using the Griffith energy-balance concept to model joint propagation in the brittle crust, two laboratory loading configurations serve as appropriate analogs for in situ conditions: the dead-weight load and the fixed-grips load. The distinction between these loading configurations is based largely on whether or not a loaded boundary moves as a joint grows. During displacement of a loaded boundary, the energy necessary for joint propagation comes from work by the dead weight (i.e., a remote stress). When the loaded boundary remains stationary, as if held by rigid grips, the energy for joint propagation develops upon release of elastic strain energy within the rock mass. These two generic loading configurations serve as models for four common natural loading configurations: a joint-normal load; a thermoelastic load; a fluid load; and an axial load. Each loading configuration triggers a different joint-driving mechanism, each of which is the release of energy through elastic strain and/or work. The four mechanisms for energy release are joint-normal stretching, elastic contraction, poroelastic contraction under either a constant fluid drive or fluid decompression, and axial shortening, respectively. Geological circumstances favoring each of the joint-driving mechanisms are as follows. The release of work under joint-normal stretching occurs whenever layer-parallel extension keeps pace with slow or subcritical joint propagation. Under fixed grips, a substantial crack-normal tensile stress can accumulate by thermoelastic contraction until joint propagation is driven by the release of elastic strain energy. Within the Earth the rate of joint propagation dictates which of these two driving mechanisms operates, with faster propagation driven by release of strain energy. Like a dead-weight load acting to separate the joint walls, pore fluid exerts a traction on the interior of some joints. Joint propagation under fluid loading may be driven by a release of elastic strain
Development of a two wheeled self balancing robot with speech recognition and navigation algorithm
NASA Astrophysics Data System (ADS)
Rahman, Md. Muhaimin; Ashik-E-Rasul, Haq, Nowab. Md. Aminul; Hassan, Mehedi; Hasib, Irfan Mohammad Al; Hassan, K. M. Rafidh
2016-07-01
This paper discusses the modeling, construction, and navigation-algorithm development of a two-wheeled self-balancing mobile robot operating in an enclosure. We discuss the design of the two main controller algorithms, namely PID algorithms, on the robot model. Simulation is performed in the SIMULINK environment. The controllers are developed primarily for self-balancing of the robot and also for its positioning. For navigation in an enclosure, a template matching algorithm is proposed for precise measurement of the robot position. The navigation system needs to be calibrated before the navigation process starts. Almost all of the earlier template matching algorithms in the open literature can only trace the robot, but the proposed algorithm can also locate the positions of other objects in the enclosure, such as furniture and tables. This enables the robot to know the exact location of every stationary object in the enclosure. Moreover, additional features, such as speech recognition and object detection, are added. For object detection, the single-board computer Raspberry Pi is used. The system is programmed to analyze images captured via the camera, which are then processed through background subtraction, followed by active noise reduction.
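A discrete PID loop of the kind described, driving a deliberately crude first-order tilt model, can be sketched as follows; the gains and the plant are illustrative assumptions, not the paper's SIMULINK model.

```python
# Minimal discrete PID controller of the kind used for self-balancing;
# gains and the toy plant below are illustrative assumptions.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a toy tilt model toward 0 degrees from an initial 10-degree lean.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
tilt = 10.0
for _ in range(3000):            # 30 simulated seconds
    u = pid.update(0.0, tilt)
    tilt += u * pid.dt           # crude plant: control directly damps tilt
```

With these gains the loop is stable and the tilt decays to (near) zero; on the real robot the same structure would feed motor torque rather than damp the angle directly.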
NASA Astrophysics Data System (ADS)
Wang, J.; Samms, T.; Meier, C.; Simmons, L.; Miller, D.; Bathke, D.
2005-12-01
Spatial evapotranspiration (ET) is usually estimated by the Surface Energy Balance Algorithm for Land (SEBAL). The average accuracy of the algorithm is 85% on a daily basis and 95% on a seasonal basis. However, the accuracy of the algorithm varies from 67% to 95% for instantaneous ET estimates and, as reported in 18 studies, 70% to 98% for 1 to 10-day ET estimates. There is a need to understand the sensitivity of the ET calculation with respect to the algorithm's variables and equations. With an increased understanding, information can be developed to improve the algorithm and to better identify the key variables and equations. A Modified Surface Energy Balance Algorithm for Land (MSEBAL) was developed and validated with data from a pecan orchard and an alfalfa field. The MSEBAL uses ground reflectance and temperature data from ASTER sensors along with humidity, wind speed, and solar radiation data from a local weather station. MSEBAL outputs hourly and daily ET with 90 m by 90 m resolution. A sensitivity analysis was conducted on the MSEBAL ET calculation. In order to observe the sensitivity of the calculation to a particular variable, the value of that variable was changed while holding the other variables fixed. The key variables and equations to which the ET calculation is most sensitive were determined in this study. http://weather.nmsu.edu/pecans/SEBALFolder/San%20Francisco%20AGU%20meeting/ASensitivityAnalysisonMSE
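The one-at-a-time perturbation procedure described, perturb a single input while holding the others fixed and record the relative change in the output, can be sketched with a toy energy-balance residual (LE = Rn - G - H); the model and base values are stand-ins, not MSEBAL itself.

```python
# One-at-a-time sensitivity sketch in the spirit of the MSEBAL analysis;
# the toy ET model and base values below are assumptions, not MSEBAL.
def toy_et(net_radiation, soil_flux, sensible_flux):
    """Latent heat flux (proxy for ET) as the energy-balance residual."""
    return net_radiation - soil_flux - sensible_flux

BASE = {"net_radiation": 500.0, "soil_flux": 50.0, "sensible_flux": 150.0}

def sensitivity(var, delta=0.10):
    """Relative ET change for a +10% change in one input variable."""
    perturbed = dict(BASE)
    perturbed[var] *= 1.0 + delta
    base_et = toy_et(**BASE)
    return (toy_et(**perturbed) - base_et) / base_et

# Rank the inputs by the magnitude of their effect on the ET output.
ranking = sorted(BASE, key=lambda v: -abs(sensitivity(v)))
```

In this toy model net radiation dominates, which is the kind of ranking the study's sensitivity analysis is designed to produce for the real algorithm.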
Method of up-front load balancing for local memory parallel processors
NASA Technical Reports Server (NTRS)
Baffes, Paul Thomas (Inventor)
1990-01-01
In a parallel processing computer system with multiple processing units and shared memory, a method is disclosed for uniformly balancing the aggregate computational load in, and utilizing minimal memory by, a network having identical computations to be executed at each connection therein. Read-only and read-write memory are subdivided into a plurality of process sets, which function like artificial processing units. Said plurality of process sets is iteratively merged and reduced to the number of processing units without exceeding the balanced load. Said merger is based upon the value of a partition threshold, which is a measure of the memory utilization. The turnaround time and memory savings of the instant method are functions of the number of processing units available and the number of partitions into which the memory is subdivided. Typical results of the preferred embodiment yielded memory savings of sixty to seventy-five percent.
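The iterative merging of process sets down to the number of processing units can be caricatured with a greedy heap-based merge of the lightest sets; the partition-threshold logic of the patent is simplified away, and the workloads are hypothetical.

```python
import heapq

# Sketch of the up-front idea: subdivide work into many small process
# sets, then greedily merge the lightest sets until one group remains
# per processor (the patent's threshold-driven merge is simplified).
def merge_process_sets(loads, num_processors):
    """Merge process-set loads into num_processors groups; return group weights."""
    heap = [(w, [w]) for w in loads]
    heapq.heapify(heap)
    while len(heap) > num_processors:
        w1, s1 = heapq.heappop(heap)              # two lightest sets...
        w2, s2 = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, s1 + s2))  # ...are merged into one
    return sorted(w for w, _ in heap)

groups = merge_process_sets([5, 3, 8, 2, 7, 4, 6, 1], num_processors=3)
```

Because all merging happens up front, no tasks migrate at runtime; the price is that the greedy merge only approximates a perfectly balanced partition.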
Valiant Load-Balancing: Building Networks That Can Support All Traffic Matrices
NASA Astrophysics Data System (ADS)
Zhang-Shen, Rui
This paper is a brief survey of how Valiant load-balancing (VLB) can be used to build networks that can efficiently and reliably support all traffic matrices. We discuss how to extend VLB to networks with heterogeneous capacities, how to protect against failures in a VLB network, and how to interconnect two VLB networks. For the reader's reference, a list of work that uses VLB in various aspects of networking is also included.
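The core two-phase VLB idea, route each flow through a uniformly random intermediate node so that even an adversarial traffic matrix spreads evenly over the links, can be sketched as follows (the full-mesh topology and simple hop counting are illustrative simplifications).

```python
import random

# Two-phase Valiant load-balancing sketch over a full mesh of N nodes.
def vlb_route(src, dst, num_nodes, rng):
    """Phase 1: src -> random intermediate. Phase 2: intermediate -> dst."""
    mid = rng.randrange(num_nodes)
    return [src, mid, dst]

rng = random.Random(0)
N = 8
link_load = {}
# Adversarial traffic matrix: every flow targets node 0.
for _ in range(10000):
    src = rng.randrange(1, N)
    path = vlb_route(src, 0, N, rng)
    for a, b in zip(path, path[1:]):
        if a != b:
            link_load[(a, b)] = link_load.get((a, b), 0) + 1
```

Even though all traffic converges on node 0, the random intermediate hop spreads the second-phase load almost uniformly across the links into it, which is exactly the guarantee that lets VLB support any traffic matrix.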
Zemková, E; Štefániková, G; Muyor, J M
2016-08-01
This study investigates the test-retest reliability and diagnostic accuracy of the load release balance test under four varied conditions. Young, early and late middle-aged physically active and sedentary subjects performed the test over 2 testing sessions spaced 1 week apart while standing on either (1) a stable or (2) an unstable surface with (3) eyes open (EO) and (4) eyes closed (EC), respectively. Results showed that the test-retest reliability of the parameters of the load release balance test was good to excellent, with high values of ICC (0.78-0.92) and low SEM (7.1%-10.7%). The peak and the time to peak posterior center of pressure (CoP) displacement were significantly lower in physically active as compared to sedentary young adults (21.6% and 21.0%) and early middle-aged adults (22.0% and 20.9%) while standing on a foam surface with EO, and in late middle-aged adults on both unstable (25.6% and 24.5%) and stable support surfaces with EO (20.4% and 20.0%). The area under the ROC curve >0.80 for these variables indicates good discriminatory accuracy. Thus, these variables of the load release balance test measured under unstable conditions can differentiate between groups of physically active and sedentary adults as early as from 19 years of age. PMID:27203382
A Beacon Transmission Power Control Algorithm Based on Wireless Channel Load Forecasting in VANETs
Mo, Yuanfu; Yu, Dexin; Song, Jun; Zheng, Kun; Guo, Yajuan
2015-01-01
In a vehicular ad hoc network (VANET), the periodic exchange of single-hop status information broadcasts (beacon frames) produces channel loading, which causes channel congestion and induces information conflict problems. To guarantee fairness in beacon transmissions from each node and maximum network connectivity, adjustment of the beacon transmission power is an effective method for reducing and preventing channel congestion. In this study, the primary factors that influence wireless channel loading are selected to construct the KF-BCLF, a channel load forecasting algorithm that is based on a recursive Kalman filter and employs a multiple regression equation. By pre-adjusting the transmission power based on the forecasted channel load, the channel load is kept within a predefined range and channel congestion is thereby prevented. Based on this method, the CLF-BTPC, a transmission power control algorithm, is proposed. To verify the KF-BCLF algorithm, a traffic survey that collected floating car data along a major traffic road in Changchun City was employed. By comparing the forecasts with the measured channel loads, the proposed KF-BCLF algorithm was proven to be effective. In addition, the CLF-BTPC algorithm is verified by simulating a section of eight-lane highway and a signal-controlled urban intersection. The results of the two verification processes indicate that this distributed CLF-BTPC algorithm can effectively control channel load, prevent channel congestion, and enhance the stability and robustness of wireless beacon transmission in a vehicular network. PMID:26571042
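The forecasting step can be caricatured with a one-dimensional Kalman filter tracking the channel busy ratio under a random-walk model, followed by a pre-adjustment decision; the noise levels, load series, target range, and power step are assumptions, not the KF-BCLF's multiple-regression formulation.

```python
# One-dimensional Kalman-filter sketch in the spirit of KF-BCLF; the
# random-walk model and the noise variances are assumptions.
def kalman_forecast(measurements, q=0.01, r=0.5):
    """Track a noisy channel-load series; return the one-step forecast."""
    x, p = measurements[0], 1.0        # state estimate and its variance
    for z in measurements[1:]:
        p += q                         # predict (random-walk model)
        k = p / (p + r)                # Kalman gain
        x += k * (z - x)               # update with the new measurement
        p *= (1 - k)
    return x                           # forecast = last filtered estimate

loads = [0.42, 0.45, 0.44, 0.47, 0.50, 0.49, 0.52]   # channel busy ratios
forecast = kalman_forecast(loads)

# Pre-adjust beacon power only if the forecast exceeds the target load.
TARGET = 0.50
power_step_db = -1.0 if forecast > TARGET else 0.0
```

Acting on the filtered forecast rather than the raw last sample is what lets the power control pre-empt congestion instead of reacting to it.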
Solar Load Voltage Tracking for Water Pumping: An Algorithm
NASA Astrophysics Data System (ADS)
Kappali, M.; Udayakumar, R. Y.
2014-07-01
Maximum power must be harnessed from a solar photovoltaic (PV) panel to minimize the effective cost of solar energy. This is accomplished by maximum power point tracking (MPPT). There are different methods to realise MPPT. This paper proposes a simple algorithm to implement the load-voltage-based MPPT method (MPPT_LV) in a closed-loop environment for a centrifugal pump driven by a brushed PMDC motor. Simulation testing of the algorithm is done and the results are found to be encouraging and supportive of the proposed MPPT_LV method.
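A common way to realize such tracking is a perturb-and-observe loop, sketched below against a toy quadratic PV power curve; this stands in for, and is not taken from, the paper's MPPT_LV algorithm.

```python
# Perturb-and-observe sketch of load-voltage-based MPPT; the quadratic
# PV power curve below is a stand-in for a real panel model.
def pv_power(v):
    """Toy PV curve with maximum power (60 W) at v = 17.0 V."""
    return max(0.0, 60.0 - 0.8 * (v - 17.0) ** 2)

def track_mpp(v=12.0, step=0.2, iterations=200):
    p_prev = pv_power(v)
    direction = 1.0
    for _ in range(iterations):
        v += direction * step          # perturb the operating voltage
        p = pv_power(v)
        if p < p_prev:                 # power fell: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

v_mpp = track_mpp()
```

The tracker climbs the power curve and then oscillates within one step of the true maximum power point, which is the characteristic steady-state behavior of perturb-and-observe.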
PLUM: Parallel Load Balancing for Unstructured Adaptive Meshes. Degree awarded by Colorado Univ.
NASA Technical Reports Server (NTRS)
Oliker, Leonid
1998-01-01
Dynamic mesh adaption on unstructured grids is a powerful tool for computing large-scale problems that require grid modifications to efficiently resolve solution features. By locally refining and coarsening the mesh to capture physical phenomena of interest, such procedures make standard computational methods more cost effective. Unfortunately, an efficient parallel implementation of these adaptive methods is rather difficult to achieve, primarily due to the load imbalance created by the dynamically-changing nonuniform grid. This requires significant communication at runtime, leading to idle processors and adversely affecting the total execution time. Nonetheless, it is generally thought that unstructured adaptive-grid techniques will constitute a significant fraction of future high-performance supercomputing. Various dynamic load balancing methods have been reported to date; however, most of them either lack a global view of loads across processors or do not apply their techniques to realistic large-scale applications.
Load identification approach based on basis pursuit denoising algorithm
NASA Astrophysics Data System (ADS)
Ginsberg, D.; Ruby, M.; Fritzen, C. P.
2015-07-01
The information about external loads is of great interest in many fields of structural analysis, such as structural health monitoring (SHM) systems or the assessment of damage after extreme events. However, in most cases it is not possible to measure the external forces directly, so they need to be reconstructed. Load reconstruction refers to the problem of estimating the input to a dynamic system when the system output and the impulse response functions are known. Generally, this leads to a so-called ill-posed inverse problem, which involves solving an underdetermined linear system of equations. For most practical applications it can be assumed that the applied loads are not arbitrarily distributed in time and space; at least some specific characteristics of the external excitation are known a priori. In this contribution, this knowledge is used to develop a more suitable force reconstruction method, which identifies the time history and the force location simultaneously while employing significantly fewer sensors than other reconstruction approaches. The properties of the external force are used to transform the ill-posed problem into a sparse recovery task. The sparse solution is acquired by solving a minimization problem known as basis pursuit denoising (BPDN). The possibility of reconstructing loads from noisy structural measurement signals is demonstrated for two frequently occurring loading conditions, harmonic excitation and impact events, separately and combined. First a simulation study of a simple plate structure is carried out, and thereafter an experimental investigation of a real beam is performed.
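The BPDN step can be sketched with plain iterative shrinkage-thresholding (ISTA) on a tiny synthetic problem: a sparse "force" vector is recovered from fewer measurements than unknowns. The matrix, problem sizes, regularization, and step size are illustrative assumptions, not the paper's setup.

```python
import random

# ISTA sketch of the sparse-recovery (BPDN) step: recover a sparse force
# vector x from measurements b = A x, where A stands in for the impulse
# response matrix (sizes and parameters are illustrative assumptions).
def soft_threshold(v, t):
    return [max(abs(x) - t, 0.0) * (1.0 if x > 0 else -1.0) for x in v]

def ista(A, b, lam=0.02, step=0.01, iterations=6000):
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iterations):
        resid = [sum(A[i][j] * x[j] for j in range(n)) - b[i]
                 for i in range(len(A))]
        grad = [sum(A[i][j] * resid[i] for i in range(len(A)))
                for j in range(n)]
        x = soft_threshold([x[j] - step * grad[j] for j in range(n)],
                           step * lam)
    return x

rng = random.Random(3)
m, n = 7, 8                            # fewer sensors than unknowns
A = [[rng.gauss(0, 1) for _ in range(n)] for _ in range(m)]
x_true = [0.0] * n
x_true[2], x_true[5] = 1.5, -1.0       # two active "loads"
b = [sum(A[i][j] * x_true[j] for j in range(n)) for i in range(m)]
x_hat = ista(A, b)
```

The l1 penalty is what makes the underdetermined system solvable: among the infinitely many vectors consistent with the measurements, it selects the sparse one, mirroring the paper's assumption that only a few load locations are active.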
A massively parallel adaptive finite element method with dynamic load balancing
Devine, K.D.; Flaherty, J.E.; Wheat, S.R.; Maccabe, A.B.
1993-12-31
The authors construct massively parallel adaptive finite element methods for the solution of hyperbolic conservation laws. Spatial discretization is performed by a discontinuous Galerkin finite element method using a basis of piecewise Legendre polynomials. Temporal discretization utilizes a Runge-Kutta method. Dissipative fluxes and projection limiting prevent oscillations near solution discontinuities. The resulting method is of high order and may be parallelized efficiently on MIMD computers. They demonstrate parallel efficiency through computations on a 1024-processor nCUBE/2 hypercube. They present results using adaptive p-refinement to reduce the computational cost of the method, and tiling, a dynamic, element-based data migration system that maintains global load balance of the adaptive method by overlapping neighborhoods of processors that each perform local balancing.
Latency Hiding in Dynamic Partitioning and Load Balancing of Grid Computing Applications
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak
2001-01-01
The Information Power Grid (IPG) concept developed by NASA is aimed at providing a metacomputing platform for large-scale distributed computations, by hiding the intricacies of a highly heterogeneous environment while maintaining adequate security. In this paper, we propose a latency-tolerant partitioning scheme that dynamically balances processor workloads on the IPG, and minimizes data movement and runtime communication. By simulating an unsteady adaptive mesh application on a wide area network, we study the performance of our load balancer under the Globus environment. The number of IPG nodes, the number of processors per node, and the interconnect speeds are parameterized to derive conditions under which the IPG would be suitable for parallel distributed processing of such applications. Experimental results demonstrate that effective solutions are achieved when the IPG nodes are connected by a high-speed asynchronous interconnection network.
A Novel Control algorithm based DSTATCOM for Load Compensation
NASA Astrophysics Data System (ADS)
R, Sreejith; Pindoriya, Naran M.; Srinivasan, Babji
2015-11-01
The Distribution Static Compensator (DSTATCOM) has been used as a custom power device for voltage regulation and load compensation in the distribution system. Controlling the switching angle has been the biggest challenge in DSTATCOM operation. To date, the Proportional Integral (PI) controller is widely used in practice for load compensation due to its simplicity. However, the PI controller fails to perform satisfactorily under parameter variations, nonlinearities, etc., making it very challenging to arrive at the best or optimal tuning values for different operating conditions. Fuzzy logic and neural network based controllers require extensive training and perform well only under limited perturbations. Model predictive control (MPC) is a powerful control strategy, used in the petrochemical industry, whose application has spread to different fields. MPC can handle various constraints, incorporate system nonlinearities, and utilize multivariate/univariate model information to provide an optimal control strategy. Though it finds extensive application in chemical engineering, its utility in power systems has been limited by the high computational effort, which is incompatible with the high sampling frequencies in these systems. In this paper, we propose a DSTATCOM based on Finite Control Set Model Predictive Control (FCS-MPC) with Instantaneous Symmetrical Component Theory (ISCT)-based reference current extraction for load compensation and Unity Power Factor (UPF) operation in current control mode. The proposed controller's performance is evaluated for a 3-phase, 3-wire, 415 V, 50 Hz distribution system in MATLAB Simulink, which demonstrates its applicability in real-life situations.
Agent based modeling of "crowdinforming" as a means of load balancing at emergency departments.
Neighbour, Ryan; Oppenheimer, Luis; Mukhi, Shamir N; Friesen, Marcia R; McLeod, Robert D
2010-01-01
This work extends ongoing development of a framework for modeling the spread of contact-transmission infectious diseases. The framework is built upon Agent Based Modeling (ABM), with emphasis on urban-scale modeling integrated with institutional models of hospital emergency departments. The method presented here includes ABM modeling of an outbreak of influenza-like illness (ILI) with concomitant surges at hospital emergency departments, and illustrates the preliminary modeling of 'crowdinforming' as an intervention. 'Crowdinforming', a component of 'crowdsourcing', is characterized as the dissemination of collected and processed information back to the 'crowd' via public access. The objective of the simulation is to allow for effective policy evaluation to better inform the public of expected wait times as part of their decision-making process in attending an emergency department or clinic. In effect, this is a means of providing additional decision support garnered from a simulation, prior to real-world implementation. The conjecture is that more optimal service delivery can be achieved under balanced patient loads, compared to situations where some emergency departments are overextended while others are underutilized. Load balancing optimization is a common notion in many operations, and the simulation illustrates that 'crowdinforming' is a potential tool when used as a process control parameter to balance the load at emergency departments, as well as an effective means to direct patients during an ILI outbreak with temporary clinics deployed. The information provided in the 'crowdinforming' model is readily available in a local context, although it requires thoughtful consideration in its interpretation. The extension to a wider dissemination of information via a web service is readily achievable and presents no technical obstacles, although political obstacles may be present. The 'crowdinforming' simulation is not limited to arrivals of patients at
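The feedback loop can be caricatured in a few lines: published wait times steer informed arrivals toward the least-loaded emergency department, keeping the queues level; the arrival counts and department names are hypothetical.

```python
import random

# Minimal sketch of the 'crowdinforming' feedback loop: published wait
# times steer informed patients toward the least-loaded emergency
# department. Arrival counts and department names are illustrative.
def choose_ed(wait_times, informed, rng):
    """Informed patients pick the shortest queue; others pick at random."""
    if informed:
        return min(wait_times, key=wait_times.get)
    return rng.choice(list(wait_times))

rng = random.Random(42)
waits = {"ED-A": 0, "ED-B": 0, "ED-C": 0}
for _ in range(300):                    # 300 arrivals during a surge
    ed = choose_ed(waits, informed=True, rng=rng)
    waits[ed] += 1                      # each patient adds one wait unit

spread = max(waits.values()) - min(waits.values())
```

With every arrival informed, the load self-balances (spread stays within one patient); setting `informed=False` reproduces the uninformed baseline where imbalance grows by chance.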
A new evolutionary algorithm with structure mutation for the maximum balanced biclique problem.
Yuan, Bo; Li, Bin; Chen, Huanhuan; Yao, Xin
2015-05-01
The maximum balanced biclique problem (MBBP), an NP-hard combinatorial optimization problem, has been attracting more attention in recent years. Existing node-deletion-based algorithms usually fail to find high-quality solutions due to their easy stagnation in local optima, especially when the scale of the problem grows large. In this paper, a new algorithm for the MBBP, evolutionary algorithm with structure mutation (EA/SM), is proposed. In the EA/SM framework, local search complemented with a repair-assisted restart process is adopted. A new mutation operator, SM, is proposed to enhance the exploration during the local search process. The SM can change the structure of solutions dynamically while keeping their size (fitness) and the feasibility unchanged. It implements a kind of large mutation in the structure space of MBBP to help the algorithm escape from local optima. An MBBP-specific local search operator is designed to improve the quality of solutions efficiently; besides, a new repair-assisted restart process is introduced, in which the Marchiori's heuristic repair is modified to repair every new solution reinitialized by an estimation of distribution algorithm (EDA)-like process. The proposed algorithm is evaluated on a large set of benchmark graphs with various scales and densities. Experimental results show that: 1) EA/SM produces significantly better results than the state-of-the-art heuristic algorithms; 2) it also outperforms a repair-based EDA and a repair-based genetic algorithm on all benchmark graphs; and 3) the advantages of EA/SM are mainly due to the introduction of the new SM operator and the new repair-assisted restart process. PMID:25137737
Dynamic Harmony Search with Polynomial Mutation Algorithm for Valve-Point Economic Load Dispatch
Karthikeyan, M.; Sree Ranga Raja, T.
2015-01-01
The economic load dispatch (ELD) problem is an important issue in the operation and control of modern power systems. The ELD problem is complex and nonlinear, with equality and inequality constraints, which makes it hard to solve efficiently. This paper presents a new modification of the harmony search (HS) algorithm, named the dynamic harmony search with polynomial mutation (DHSPM) algorithm, to solve the ELD problem. In the DHSPM algorithm the key parameters of the HS algorithm, the harmony memory considering rate (HMCR) and the pitch adjusting rate (PAR), are changed dynamically, so there is no need to predefine these parameters. Additionally, polynomial mutation is inserted into the updating step of the HS algorithm to favor exploration and exploitation of the search space. The DHSPM algorithm is tested on three power system cases consisting of 3, 13, and 40 thermal units. The computational results show that the DHSPM algorithm is more effective in finding better solutions than other computational-intelligence-based methods. PMID:26491710
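The dynamic-parameter idea can be sketched as follows: HMCR and PAR are varied over the run instead of being fixed, and the pitch adjustment is simplified to a Gaussian perturbation standing in for polynomial mutation; the sphere function stands in for the ELD cost.

```python
import random

# Harmony-search sketch with dynamically varying HMCR/PAR in the spirit
# of DHSPM; the mutation and test function are simplifying assumptions.
def harmony_search(dim=5, hms=10, iterations=3000, seed=7):
    rng = random.Random(seed)
    lo, hi = -10.0, 10.0
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    cost = lambda x: sum(v * v for v in x)       # stand-in for ELD cost
    for it in range(iterations):
        hmcr = 0.7 + 0.25 * it / iterations      # dynamic parameters:
        par = 0.5 - 0.4 * it / iterations        # no fixed tuning needed
        new = []
        for j in range(dim):
            if rng.random() < hmcr:
                v = rng.choice(memory)[j]        # draw from harmony memory
                if rng.random() < par:
                    v += rng.gauss(0.0, 0.1)     # pitch adjustment (mutation)
            else:
                v = rng.uniform(lo, hi)          # random consideration
            new.append(min(hi, max(lo, v)))
        worst = max(range(hms), key=lambda i: cost(memory[i]))
        if cost(new) < cost(memory[worst]):      # replace worst harmony
            memory[worst] = new
    return min(memory, key=cost)

best = harmony_search()
```

Ramping HMCR up and PAR down shifts the search from exploration to exploitation over the run, which is the effect the dynamic parameters are meant to achieve.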
Smart sensing to drive real-time loads scheduling algorithm in a domotic architecture
NASA Astrophysics Data System (ADS)
Santamaria, Amilcare Francesco; Raimondo, Pierfrancesco; De Rango, Floriano; Vaccaro, Andrea
2014-05-01
Nowadays, power consumption is a very important concern, both for its associated costs and for environmental sustainability. Automatic load control based on power consumption and usage cycles represents an effective solution for cost restraint. The purpose of these systems is to modulate the electricity demand, avoiding unorganized operation of the loads, by managing them with intelligent real-time scheduling algorithms. The goal is to coordinate a set of electrical loads to optimize energy costs and consumption according to the stipulated contract terms. The proposed algorithm uses two new main notions: priority-driven loads and smart-scheduling loads. Priority-driven loads can be turned off (put on standby) according to a priority policy established by the user if consumption exceeds a defined threshold; smart-scheduling loads, on the contrary, are scheduled so that their life cycle (LC) is never interrupted, safeguarding the devices' functions and allowing the user to operate the devices freely without the risk of exceeding the power threshold. Using these two notions and taking user requirements into account, the algorithm manages load activation and deactivation, allowing loads to complete their operation cycles without exceeding the consumption threshold, in an off-peak time range according to the electricity tariff. This logic is inspired by industrial lean manufacturing, whose focus is to minimize any kind of power waste by optimizing the available resources.
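The priority-driven shedding rule can be sketched directly: when total demand exceeds the contractual threshold, interruptible loads are stood by in priority order, while smart-scheduling loads (here, a washer mid-cycle) are never interrupted. Device names, powers, and priorities are illustrative assumptions.

```python
# Sketch of the two load classes: priority-driven loads are stood by in
# priority order when demand exceeds the contractual threshold, while
# smart-scheduling loads are never interrupted mid-cycle.
THRESHOLD_W = 3000

# (name, power in W, priority: higher = shed first, interruptible?)
loads = [
    ("washer", 2000, 0, False),   # smart-scheduling: cycle must finish
    ("heater", 1500, 3, True),
    ("aircon", 1200, 2, True),
    ("fridge",  150, 1, True),
]

def shed_loads(loads, threshold):
    """Return device names to stand by (highest priority first) and final demand."""
    total = sum(p for _, p, _, _ in loads)
    standby = []
    for name, power, _, interruptible in sorted(loads, key=lambda l: -l[2]):
        if total <= threshold:
            break
        if interruptible:
            standby.append(name)
            total -= power
    return standby, total

standby, total = shed_loads(loads, THRESHOLD_W)
```

Shedding stops as soon as demand drops under the threshold, so the non-interruptible washer completes its cycle while only as many priority-driven loads as necessary go to standby.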
An analytical algorithm for 3D magnetic field mapping of a watt balance magnet
NASA Astrophysics Data System (ADS)
Fu, Zhuang; Zhang, Zhonghua; Li, Zhengkun; Zhao, Wei; Han, Bing; Lu, Yunfeng; Li, Shisong
2016-04-01
A yoke-based permanent magnet, which has been employed in many watt balances at national metrology institutes, is supposed to generate a strong and uniform magnetic field in an air gap in the radial direction. However, in reality the fringe effect due to the finite height of the air gap introduces an undesired vertical magnetic component into the air gap, which should either be measured or modeled towards some optimizations of the watt balance. A recent publication, i.e. Li et al (2015 Metrologia 52 445), presented a full field mapping method, which in theory supplies useful information for profile characterization and misalignment analysis. This article is supplementary material to Li et al (2015 Metrologia 52 445), and develops a different analytical algorithm to represent the 3D magnetic field of a watt balance magnet based on only one measurement of the radial magnetic flux density along the vertical direction, B_r(z). The new algorithm is based on the electromagnetic nature of the magnet and has a much better accuracy.
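For context, the reason a single measured profile B_r(z) can constrain the full gap field is that the air gap is current-free, so the field components are coupled by the standard magnetostatic relations below; this is textbook background, not the paper's specific analytical expansion.

```latex
% In the current-free, azimuthally symmetric air gap the field is both
% curl-free and divergence-free, coupling B_r and B_z:
\nabla \times \mathbf{B} = 0
  \;\Rightarrow\;
  \frac{\partial B_r}{\partial z} = \frac{\partial B_z}{\partial r},
\qquad
\nabla \cdot \mathbf{B} = 0
  \;\Rightarrow\;
  \frac{1}{r}\frac{\partial \left( r B_r \right)}{\partial r}
  + \frac{\partial B_z}{\partial z} = 0 .
```

Given these two constraints and an assumed analytic form for the field, a one-dimensional measurement of B_r(z) at a known radius is enough to fix the remaining components, which is the kind of structure the paper's algorithm exploits.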
Enhanced exchange algorithm without detailed balance condition for replica exchange method
NASA Astrophysics Data System (ADS)
Kondo, Hiroko X.; Taiji, Makoto
2013-06-01
The replica exchange method (REM) is a powerful tool for the conformational sampling of biomolecules. In this study, we propose an enhanced exchange algorithm for REM that does not satisfy the detailed balance condition (DBC) but does satisfy the balance condition over all considered exchanges between two replicas. Breaking the DBC can minimize the rejection rate and make the exchange process rejection-free as the number of replicas increases. To enhance the efficiency of REM, all possible pairs, not only nearest neighbors, were considered in the exchange process. Test simulations of the alanine dipeptide confirmed the correctness of our method. The average traveling distance of each replica in the temperature distribution also increased in proportion to the increase in the exchange rate. Furthermore, we applied our algorithm to the conformational sampling of the 10-residue miniprotein chignolin with an implicit solvent model. The results showed faster convergence in the calculation of its free energy landscape, compared to that achieved using the normal exchange method of adjacent pairs. This algorithm can also be applied to the conventional nearest-neighbor method and is expected to reduce the required number of replicas.
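For context, the conventional REM swap criterion that the enhanced algorithm modifies is the Metropolis acceptance probability below. This is a minimal sketch of the standard DBC-satisfying rule, with beta denoting inverse temperature; the paper's rejection-free, non-DBC exchange step itself is not shown:

```python
import math

def swap_probability(beta_i, beta_j, energy_i, energy_j):
    """Metropolis acceptance probability for exchanging the configurations of
    replicas i and j in conventional replica exchange (satisfies DBC)."""
    delta = (beta_i - beta_j) * (energy_j - energy_i)
    return min(1.0, math.exp(-delta))
```

Swaps between replicas at the same temperature are always accepted; for a cold replica holding a high-energy configuration, the exchange is accepted with exponentially damped probability.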
NASA Astrophysics Data System (ADS)
Ghani Abro, Abdul; Mohamad-Saleh, Junita
2014-10-01
The prime motive of economic load dispatch (ELD) is to optimize the production cost of electrical power generation through appropriate division of load demand among online generating units. Bio-inspired optimization algorithms have outperformed classical techniques for optimizing the production cost. The probability-selection artificial bee colony (PS-ABC) algorithm is a recently proposed variant of the ABC optimization algorithm. PS-ABC generates optimal solutions using three different mutation equations simultaneously, and has shown improved performance over the ABC algorithm. Nevertheless, all the mutation equations of PS-ABC are excessively self-reinforced and, hence, PS-ABC is prone to premature convergence. Therefore, this research work replaces the mutation equations and improves the scout-bee stage of PS-ABC to enhance the algorithm's performance. The proposed algorithm has been compared with many ABC variants and numerous other optimization algorithms on benchmark functions and ELD test cases. The adapted ELD test cases comprise transmission losses, multiple-fuel effects, valve-point effects, and toxic gas emission constraints. The results reveal that, among the compared algorithms, the proposed algorithm has the best capability to yield the optimal solution for the problem.
A load balancing bufferless deflection router for network-on-chip
NASA Astrophysics Data System (ADS)
Xiaofeng, Zhou; Zhangming, Zhu; Duan, Zhou
2016-07-01
The bufferless router has emerged as an interesting option for cost-efficient network-on-chip (NoC) design. However, the bufferless router only works well under low network load, because deflections occur more easily as the injection rate increases. In this paper, we propose a load balancing bufferless deflection router (LBBDR) for NoC that relieves the effect of deflection in bufferless NoCs. The proposed LBBDR employs a balance toggle identifier in the source router to control the initial routing direction (X or Y) for a flit in the network; based on this mechanism, the flit is subsequently routed according to XY or YX routing. When two or more flits contend for the same desired output port, a priority policy called nearer-first is used to resolve output-port allocation contention. Simulation results show that the proposed LBBDR improves on previously reported bufferless routing in flit deflection rate, average packet latency, and throughput by up to 13%, 10%, and 6%, respectively. Its layout area and power consumption are 12% and 7% lower, respectively, than those of the reported schemes. Project supported by the National Natural Science Foundation of China (Nos. 61474087, 61322405, 61376039).
Senay, Gabriel B.
2008-01-01
The main objective of this study is to present an improved modeling technique, called Vegetation ET (VegET), that integrates commonly used water balance algorithms with a remotely sensed Land Surface Phenology (LSP) parameter to conduct operational vegetation water balance modeling of rainfed systems at the LSP's spatial scale using readily available global data sets. The VegET model was evaluated using flux tower data and a two-year simulation for the conterminous US. The VegET model is capable of estimating the actual evapotranspiration (ETa) of rainfed crops and other vegetation types at the spatial resolution of the LSP on a daily basis, replacing the need to estimate crop- and region-specific crop coefficients.
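The kind of bucket-type daily water balance that VegET builds on can be illustrated with a one-layer step: actual ET is potential ET scaled by an LSP-derived coefficient and limited by available soil water. The variable names and the linear soil-moisture stress function below are illustrative assumptions, not the VegET formulation itself:

```python
def daily_water_balance(soil_water, rain, pet, kcp, whc):
    """Advance a one-bucket soil-water store by one day.
    soil_water: current soil water (mm); rain: daily rainfall (mm);
    pet: potential ET (mm/day); kcp: LSP-derived crop-like coefficient;
    whc: water holding capacity (mm). Returns (new_soil_water, eta)."""
    stress = min(1.0, soil_water / whc)           # simple soil-moisture limitation
    eta = min(pet * kcp * stress, soil_water + rain)
    new_sw = min(whc, soil_water + rain - eta)    # excess beyond WHC is lost as runoff
    return new_sw, eta
```

Running this step daily over gridded rainfall and PET fields, with kcp taken from remotely sensed phenology, mimics the operational mode described in the abstract.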
van Loosdregt, Inge A E W; Argento, Giulia; Driessen-Mol, Anita; Oomens, Cees W J; Baaijens, Frank P T
2014-06-27
Preclinical studies of tissue-engineered heart valves (TEHVs) identified retraction of the heart valve leaflets as a major mechanism of functional failure. This retraction is caused by both passive and active cell stress and by passive matrix stress. Cell-mediated retraction induces leaflet shortening that may be counteracted by the hemodynamic loading of the leaflets during diastole. To gain insight into this stress balance, the amount and duration of stress generation in engineered heart valve tissue and the stress imposed by physiological hemodynamic loading were quantified via an experimental and a computational approach, respectively. Stress generation by cells was measured using a previously described in vitro model system mimicking the culture process of TEHVs. The stress imposed on a valve leaflet by the blood pressure during diastole was determined using finite element modeling. Results show that, for both pulmonary and systemic pressure, the stress imposed on the TEHV leaflets is comparable to the stress generated in the leaflets. As the stresses are of similar magnitude, it is likely that the imposed stress cannot counteract the generated stress, particularly considering that hemodynamic loading is imposed only during diastole. This study provides a rational explanation for the retraction found in preclinical studies of TEHVs and represents an important step towards understanding the retraction process in TEHVs through a combined experimental and computational approach. PMID:24268314
Berg, Jonathan Charles; Halse, Chris; Crowther, Ashley; Barlas, Thanasis; Wilson, David Gerald; Berg, Dale E.; Resor, Brian Ray
2010-06-01
Prior work on active aerodynamic load control (AALC) of wind turbine blades has demonstrated that appropriate use of this technology has the potential to yield significant reductions in blade loads, leading to a decrease in the cost of wind energy. While the general concept of AALC is usually discussed in the context of multiple sensors and active control devices (such as flaps) distributed over the length of the blade, most work to date has been limited to a single control device per blade with very basic proportional-derivative controllers, due to limitations in the aeroservoelastic codes used to perform turbine simulations. This work utilizes a new aeroservoelastic code developed at Delft University of Technology to model the NREL/Upwind 5 MW wind turbine and investigate the relative advantage of multiple-device AALC. System identification techniques are used to identify the frequencies and shapes of turbine vibration modes, which are then used with modern control techniques to develop both Single-Input Single-Output (SISO) and Multiple-Input Multiple-Output (MIMO) LQR flap controllers. Comparison of simulation results with these controllers shows that the MIMO controller does yield some improvement over the SISO controller in fatigue load reduction, though additional improvement is possible with further refinement. In addition, a preliminary investigation shows that AALC has the potential to reduce off-axis gearbox loads, leading to reduced gearbox bearing fatigue damage and improved lifetimes.
A New Load Residual Threshold Definition for the Evaluation of Wind Tunnel Strain-Gage Balance Data
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Volden, T.
2016-01-01
A new definition of a threshold for the detection of load residual outliers in wind tunnel strain-gage balance data was developed. The new threshold is defined as the product of the inverse of the absolute value of the primary gage sensitivity and an empirical limit on the electrical outputs of a strain gage. The empirical limit on the outputs is 2.5 microV/V for balance calibration or check load residuals. A reduced limit of 0.5 microV/V is recommended for the evaluation of differences between repeat load points because, by design, the calculation of these differences removes errors in the residuals that are associated with the regression analysis of the data itself. The definition of the new threshold and different methods for the determination of the primary gage sensitivity are discussed. In addition, calibration data of a six-component force balance and a five-component semi-span balance are used to illustrate the application of the proposed new threshold definition to different types of strain-gage balances. During the discussion of the force balance example, it is also explained how the estimated maximum expected output of a balance gage can be used to better understand the results of applying the new threshold definition.
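The threshold definition quoted above reduces to a one-line computation. A minimal sketch using the output limits stated in the abstract (2.5 and 0.5 microV/V); the function name and argument convention are assumptions for illustration:

```python
def residual_threshold(primary_gage_sensitivity, repeat_points=False):
    """Load residual outlier threshold = (1 / |primary gage sensitivity|) * output limit.
    Sensitivity is in (microV/V) per unit load, so the result is in load units.
    Limits from the abstract: 2.5 microV/V for calibration/check load residuals,
    0.5 microV/V for differences between repeat load points."""
    limit = 0.5 if repeat_points else 2.5
    return limit / abs(primary_gage_sensitivity)
```

For example, a gage with a primary sensitivity of 1.25 microV/V per unit load yields a calibration-residual threshold of 2.0 load units.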
PARALLEL IMPLEMENTATION OF THE TOPAZ OPACITY CODE: ISSUES IN LOAD-BALANCING
Sonnad, V; Iglesias, C A
2008-05-12
The TOPAZ opacity code explicitly includes configuration term structure in the calculation of bound-bound radiative transitions. This approach involves myriad spectral lines and requires the large computational capabilities of parallel processing computers. It is important, however, to make use of these resources efficiently. For example, an increase in the number of processors should yield a comparable reduction in computational time. This proportional 'speedup' indicates that very large problems can be addressed with massively parallel computers. Opacity codes can readily take advantage of parallel architecture since many intermediate calculations are independent. On the other hand, since the different tasks entail significantly disparate computational effort, load-balancing issues emerge so that parallel efficiency does not occur naturally. Several schemes to distribute the labor among processors are discussed.
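One common static scheme for distributing independent tasks of disparate cost among processors is longest-processing-time-first greedy assignment. This is a generic illustration of such a scheme, not necessarily the one adopted in TOPAZ:

```python
import heapq

def lpt_assign(task_costs, n_procs):
    """Greedy longest-processing-time-first: give each task, heaviest first,
    to the currently least-loaded processor. Returns per-processor total loads."""
    heap = [(0.0, p) for p in range(n_procs)]   # (current load, processor id)
    heapq.heapify(heap)
    for cost in sorted(task_costs, reverse=True):
        load, p = heapq.heappop(heap)           # least-loaded processor
        heapq.heappush(heap, (load + cost, p))
    return [load for load, _ in sorted(heap, key=lambda entry: entry[1])]
```

Sorting tasks by decreasing cost keeps the largest items from arriving last and spoiling an otherwise balanced distribution, which is exactly the imbalance risk the abstract describes for transitions of widely varying computational effort.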
Meta-heuristic algorithm to solve two-sided assembly line balancing problems
NASA Astrophysics Data System (ADS)
Wirawan, A. D.; Maruf, A.
2016-02-01
A two-sided assembly line is a set of sequential workstations where task operations can be performed on two sides of the line. This type of line is commonly used for the assembly of large-sized products: cars, buses, and trucks. This paper proposes a decoding algorithm with Teaching-Learning Based Optimization (TLBO), a recently developed nature-inspired search method, to solve the two-sided assembly line balancing problem (TALBP). The algorithm aims to minimize the number of mated workstations for a given cycle time without violating the synchronization constraints. The correlation between the input parameters and the emergence point of the objective function value is tested using scenarios generated by design of experiments. A two-sided assembly line operated by a multinational manufacturing company in Indonesia is considered as the object of this paper. The results of the proposed algorithm show a reduction in workstations and indicate a negative correlation between the emergence point of the objective function value and the size of the population used.
Li, Bai; Gong, Li-gang; Yang, Wen-lun
2014-01-01
Unmanned combat aerial vehicles (UCAVs) have been of great interest to military organizations throughout the world due to their outstanding capabilities to operate in dangerous or hazardous environments. UCAV path planning aims to obtain an optimal flight route with the threats and constraints in the combat field well considered. In this work, a novel artificial bee colony (ABC) algorithm improved by a balance-evolution strategy (BES) is applied in this optimization scheme. In this new algorithm, convergence information during the iteration is fully utilized to manipulate the exploration/exploitation accuracy and to pursue a balance between local exploitation and global exploration capabilities. Simulation results confirm that BE-ABC algorithm is more competent for the UCAV path planning scheme than the conventional ABC algorithm and two other state-of-the-art modified ABC algorithms. PMID:24790555
Drill string splined resilient tubular telescopic joint for balanced load drilling of deep holes
Garrett, W.R.
1981-08-04
A drill string splined resilient tubular telescopic joint for balanced load deep well drilling comprises a double acting damper having a very low spring rate upon both extension and contraction from the zero deflection condition. Preferably the spring means itself is a double acting compression spring means wherein the same spring means is compressed whether the joint is extended or contracted. The damper has a like low spring rate over a considerable range of deflection, both upon extension and contraction of the joint, but a gradually then rapidly increased spring rate upon approaching the travel limits in each direction. Stacks of spring rings are employed for the spring means, the rings being either shaped elastomer-metal sandwiches or, preferably, roller belleville springs. The spline and spring means are disposed in an annular chamber formed by mandrel and barrel members constituting the telescopic joint. The spring rings make only such line contact with one of the telescoping members as is required for guidance therefrom, and no contact with the other member. The chamber containing the spring means, and also containing the spline means, is filled with lubricant, the chamber being sealed with a pressure seal at its lower end and an inverted floating seal at its upper end. Magnetic and electrical means are provided to check for the presence and condition of the lubricant. To increase load capacity the spring means is made of a number of components acting in parallel.
NASA Astrophysics Data System (ADS)
Alfredsen, K. T.; Killingtveit, A.
2011-12-01
About 99% of the total energy production in Norway comes from hydropower, and the total production of about 120 TWh makes Norway Europe's largest hydropower producer. Most hydropower systems in Norway are high-head plants with mountain storage reservoirs and tunnels transporting water from the reservoirs to the power plants. In total, Norwegian reservoirs contribute around 50% of the total energy storage capacity in Europe. Current strategies to reduce greenhouse gas emissions from energy production involve an increased focus on renewable energy sources, e.g. the European Union's 20-20-20 goal, under which renewable sources should supply 20% of total energy production by 2020. To meet this goal, new renewable energy installations must be developed on a large scale in the coming years, with wind power the main focus for new developments. Hydropower can contribute directly to increased renewable energy through new development or extensions to existing systems, but perhaps even more important is the potential to use hydropower systems with storage for load balancing in a system with an increased share of non-storable renewable energy. Even if new storage technologies are under development, hydro storage is the only technology available on a large scale and the most economically feasible alternative. In this respect, the Norwegian system has high potential, both through direct use of existing reservoirs and through increased development of pumped-storage plants that use surplus wind energy to pump water and then generate during periods of low wind input. Through cables to Europe, Norwegian hydropower could also provide balancing power for the North European market. Increased peaking and more variable operation of the current hydropower system will present a number of technical and environmental challenges that need to be identified and mitigated. More variable production will lead to fluctuating flow in receiving rivers and reservoirs, and it will also
Physics Based Model for Cryogenic Chilldown and Loading. Part I: Algorithm
NASA Technical Reports Server (NTRS)
Luchinsky, Dmitry G.; Smelyanskiy, Vadim N.; Brown, Barbara
2014-01-01
We report progress in the development of a physics-based model for cryogenic chilldown and loading. Chilldown and loading are modeled as a fully separated, non-equilibrium two-phase flow of cryogenic fluid thermally coupled to the pipe walls. The solution closely follows the nearly-implicit and semi-implicit algorithms developed by Idaho National Laboratory for autonomous control of thermal-hydraulic systems. Special attention is paid to the treatment of instabilities. The model is applied to the analysis of chilldown in the rapid loading system developed at NASA Kennedy Space Center. A nontrivial characteristic feature of the analyzed chilldown regime is its active control by dump valves. The numerical predictions are in reasonable agreement with the experimental time traces. The obtained results pave the way to the development of autonomous loading operations on the ground and in space.
Multicycle Optimization of Advanced Gas-Cooled Reactor Loading Patterns Using Genetic Algorithms
Ziver, A. Kemal; Carter, Jonathan N.; Pain, Christopher C.; Oliveira, Cassiano R.E. de; Goddard, Antony J. H.; Overton, Richard S.
2003-02-15
A genetic algorithm (GA)-based optimizer (GAOPT) has been developed for in-core fuel management of advanced gas-cooled reactors (AGRs) at HINKLEY B and HARTLEPOOL, which employ on-load and off-load refueling, respectively. The optimizer has been linked to the reactor analysis code PANTHER for the automated evaluation of loading patterns in a two-dimensional geometry, which is collapsed from the three-dimensional reactor model. GAOPT uses a directed stochastic (Monte Carlo) algorithm to generate initial population members, within predetermined constraints, for use in GAs, which apply the standard genetic operators: selection by tournament, crossover, and mutation. The GAOPT is able to generate and optimize loading patterns for successive reactor cycles (multicycle) within acceptable CPU times even on single-processor systems. The algorithm allows radial shuffling of fuel assemblies in a multicycle refueling optimization, which is constructed to aid long-term core management planning decisions. This paper presents the application of the GA-based optimization to two AGR stations, which apply different in-core management operational rules. Results obtained from the testing of GAOPT are discussed.
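The standard genetic operators mentioned above (selection by tournament, crossover, mutation) can be sketched for a permutation-coded loading pattern, where a chromosome is an ordering of fuel assemblies. This is a generic illustration; it omits the PANTHER evaluation, the constraint handling, and the multicycle structure of GAOPT:

```python
import random

def tournament(pop, fitness, k=3):
    """Select the fittest (lowest-cost) of k randomly sampled individuals."""
    return min(random.sample(pop, k), key=fitness)

def order_crossover(p1, p2):
    """OX crossover: copy a random slice from p1, fill the remaining positions
    in the order they appear in p2, so the child is still a permutation."""
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    rest = [g for g in p2 if g not in child[a:b]]
    child[:a] = rest[:a]
    child[b:] = rest[a:]
    return child

def swap_mutation(perm, rate=0.2):
    """With probability `rate`, swap two randomly chosen positions."""
    perm = perm[:]
    if random.random() < rate:
        i, j = random.sample(range(len(perm)), 2)
        perm[i], perm[j] = perm[j], perm[i]
    return perm
```

Order crossover and swap mutation are chosen here because, unlike bit-string operators, they preserve the permutation property that a radial shuffling of fuel assemblies requires.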
Soft tissue balancing in varus total knee arthroplasty: an algorithmic approach.
Verdonk, Peter C M; Pernin, Jerome; Pinaroli, Alban; Ait Si Selmi, Tarik; Neyret, Philippe
2009-06-01
We present an algorithmic release approach to the varus knee, including a novel pie crust release technique of the superficial MCL, in 359 total knee arthroplasty patients, and report the clinical and radiological outcome. Medio-lateral stability was evaluated as normal in 97% of group 0 (deep MCL), 95% of group 1 (pie crust superficial MCL), and 83% of group 2 (distal superficial MCL). The mean preoperative hip-knee angle was 174.0, 172.1, and 169.5 degrees and was corrected postoperatively to 179.1, 179.2, and 177.6 degrees for groups 0, 1, and 2, respectively. A satisfactory correction in the coronal plane, falling within the 180 degrees +/- 3 degrees interval, was achieved in 82.9% of all-comers. An algorithmic release approach can be beneficial for soft tissue balancing. In all patients, the deep medial collateral ligament should be released and osteophytes removed. The novel pie crust technique of the superficial MCL is safe, efficient, and reliable, provided a medial release of 6-8 mm or less is required. Release of the superficial MCL on the distal tibia is advocated in severe varus knees. Preoperative coronal alignment is an important predictor for the release technique, but should be combined with other parameters such as reducibility of the deformity and the obtained gap asymmetry. PMID:19290507
Simultaneous optimization of the cavity heat load and trip rates in linacs using a genetic algorithm
NASA Astrophysics Data System (ADS)
Terzić, Balša; Hofler, Alicia S.; Reeves, Cody J.; Khan, Sabbir A.; Krafft, Geoffrey A.; Benesch, Jay; Freyberger, Arne; Ranjan, Desh
2014-10-01
In this paper, a genetic algorithm-based optimization is used to simultaneously minimize two competing objectives guiding the operation of the linacs at Jefferson Lab's Continuous Electron Beam Accelerator Facility: cavity heat load and radio frequency cavity trip rates. The results represent a significant improvement over the standard linac energy management tool and thereby could lead to a more efficient Continuous Electron Beam Accelerator Facility configuration. This study also serves as a proof of principle of how a genetic algorithm can be used for optimizing other linac-based machines.
Fawley, William M.
2002-03-25
We discuss the underlying reasoning behind and the details of the numerical algorithm used in the GINGER free-electron laser (FEL) simulation code to load the initial shot noise microbunching on the electron beam. In particular, we point out that there are some additional subtleties which must be followed for multi-dimensional codes that are not necessary in one-dimensional formulations. Moreover, requiring that the higher harmonics of the microbunching also be properly initialized with the correct statistics leads to additional complexities. We present some numerical results, including the predicted incoherent spontaneous emission, as tests of the shot noise algorithm's correctness.
NASA Astrophysics Data System (ADS)
Harteveld, Casper
On many occasions we are asked to achieve a "balance" in our lives: when it comes, for example, to work and food. Balancing is crucial in game design as well, as many have pointed out. In games with a meaningful purpose, however, balancing is remarkably different. It involves the balancing of three different worlds: the worlds of Reality, Meaning, and Play. From the experience of designing Levee Patroller, I observed that different types of tensions can arise that require balancing. It is possible to conceive of within-world dilemmas, between-worlds dilemmas, and trilemmas. The first, the within-world dilemmas, take place within only one of the worlds. We can think, for example, of a user interface problem, which relates only to the world of Play. The second, the between-worlds dilemmas, involve a tension in which two worlds are predominantly involved. Choosing between a cartoon or a realistic style concerns, for instance, a tension between Reality and Play. Finally, trilemmas are those in which all three worlds play an important role. For each of these types of tensions, I will give in this level a concrete example from the development of Levee Patroller. Although these examples come from just one game, I think they can be exemplary for other game development projects, as they may represent stereotypical tensions. Therefore, to achieve harmony in any of these forthcoming games, it is worthwhile to study the struggles we had to deal with.
NASA Astrophysics Data System (ADS)
Kizilkaya, Elif A.; Gupta, Surendra M.
2005-11-01
In this paper, we compare the impact of different disassembly line balancing (DLB) algorithms on the performance of our recently introduced Dynamic Kanban System for Disassembly Line (DKSDL) to accommodate the vagaries of uncertainties associated with disassembly and remanufacturing processing. We consider a case study to illustrate the impact of various DLB algorithms on the DKSDL. The approach to the solution, scenario settings, results and the discussions of the results are included.
Access Load Balancing with Analogy to Thermal Diffusion for Dynamic P2P File-Sharing Environments
NASA Astrophysics Data System (ADS)
Takaoka, Masanori; Uchida, Masato; Ohnishi, Kei; Oie, Yuji
In this paper, we propose a file replication method to achieve load balancing in terms of write access to storage devices ("write storage access load balancing" for short) in unstructured peer-to-peer (P2P) file-sharing networks in which the popularity trend of queried files varies dynamically. The proposed method uses the write storage access ratio as a load balance index in order to stabilize dynamic P2P file-sharing environments adaptively. In the proposed method, each peer autonomously controls its file replication ratio, defined as the probability of creating a replica of a file, in order to make write storage access loads uniform in a manner similar to thermal diffusion phenomena. Theoretical analysis shows that the behavior of the proposed method indeed has an analogy to a thermal diffusion equation. In addition, simulation results reveal that the proposed method is able to achieve write storage access load balancing in dynamic P2P file-sharing environments.
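The thermal-diffusion analogy can be illustrated with a discrete diffusion step on a ring of peers. This toy sketch conserves total load and relaxes toward a uniform distribution (the discrete heat equation), but omits the replication-ratio control and access statistics of the proposed method:

```python
def diffuse(loads, alpha=0.25):
    """One diffusion step on a ring of peers: each peer moves its load toward
    the mean of its two neighbors. Total load is conserved, and the uniform
    distribution is a fixed point, as for the heat equation."""
    n = len(loads)
    return [loads[i] + alpha * (loads[(i - 1) % n] + loads[(i + 1) % n] - 2 * loads[i])
            for i in range(n)]
```

Iterating this step smooths any initial imbalance, which is the behavior the paper's analysis maps onto write storage access loads.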
Drill string splined resilient tubular telescopic joint for balanced load drilling of deep holes
Garrett, W.R.
1984-03-06
A drill string splined resilient tubular telescopic joint for balanced load deep well drilling comprises a double acting damper having a very low spring rate upon both extension and contraction from the zero deflection condition. Stacks of spring rings are employed for the spring means, the rings being either shaped elastomer-metal sandwiches or, preferably, roller Belleville springs. The spline and spring means are disposed in an annular chamber formed by mandrel and barrel members constituting the telescopic joint. The chamber containing the spring means, and also containing the spline means, is filled with lubricant, the chamber being sealed with a pressure seal at its lower end and an inverted floating seal at its upper end. A prototype of this tool includes a bellows seal instead of the floating seal at the upper end, and a bellows in the side of the lubricant chamber provides volume compensation. A second lubricant chamber is provided below the pressure seal, the lower end of the second chamber being closed by a bellows seal and a further bellows in the side of the second chamber providing volume compensation. Modifications provide hydraulic jars.
NASA Astrophysics Data System (ADS)
Pitakaso, Rapeepan; Sethanan, Kanchana
2016-02-01
This article proposes a differential evolution algorithm (DE) and a modified differential evolution algorithm (DE-C) to solve the simple assembly line balancing problem type 1 (SALBP-1) and its extension in which the maximum number of machine types per workstation is considered (SALBP-1M). The proposed algorithms are tested and compared with existing effective heuristics using various sets of test instances from the literature. The computational results show that the proposed heuristics are among the best of the compared approaches.
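For reference, one generation of the classic DE/rand/1/bin scheme that DE and DE-C build on is sketched below; the SALBP-1 encoding, decoding, and repair steps are not shown, and the parameter values are conventional defaults rather than the paper's settings:

```python
import random

def de_step(pop, fitness, f=0.8, cr=0.9):
    """One generation of DE/rand/1/bin: for each target vector, build a trial
    from three distinct random members, binomially crossed with the target,
    and keep whichever of trial/target has the lower fitness."""
    dim = len(pop[0])
    new_pop = []
    for i, x in enumerate(pop):
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        jrand = random.randrange(dim)            # ensure at least one mutated gene
        trial = [a[d] + f * (b[d] - c[d]) if (random.random() < cr or d == jrand)
                 else x[d]
                 for d in range(dim)]
        new_pop.append(trial if fitness(trial) <= fitness(x) else x)
    return new_pop
```

Because selection is greedy per individual, the best fitness in the population never worsens from one generation to the next.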
Cain, Stephen M; McGinnis, Ryan S; Davidson, Steven P; Vitali, Rachel V; Perkins, Noel C; McLean, Scott G
2016-01-01
We utilize an array of wireless inertial measurement units (IMUs) to measure the movements of subjects (n=30) traversing an outdoor balance beam (zigzag and sloping) as quickly as possible both with and without load (20.5 kg). Our objectives are: (1) to use IMU array data to calculate metrics that quantify performance (speed and stability) and (2) to investigate the effects of load on performance. We hypothesize that added load significantly decreases subject speed yet results in increased stability of subject movements. We propose and evaluate five performance metrics: (1) time to cross beam (less time=more speed), (2) percentage of total time spent in double support (more double support time=more stable), (3) stride duration (longer stride duration=more stable), (4) ratio of sacrum M-L to A-P acceleration (lower ratio=less lateral balance corrections=more stable), and (5) M-L torso range of motion (smaller range of motion=less balance corrections=more stable). We find that the total time to cross the beam increases with load (t=4.85, p<0.001). Stability metrics also change significantly with load, all indicating increased stability. In particular, double support time increases (t=6.04, p<0.001), stride duration increases (t=3.436, p=0.002), the ratio of sacrum acceleration RMS decreases (t=-5.56, p<0.001), and the M-L torso lean range of motion decreases (t=-2.82, p=0.009). Overall, the IMU array successfully measures subject movement and gait parameters that reveal the trade-off between speed and stability in this highly dynamic balance task. PMID:26669954
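Metric 4 above, the ratio of sacrum M-L to A-P acceleration RMS, is straightforward to compute from IMU samples. A minimal sketch with assumed input arrays of acceleration samples:

```python
import math

def rms(samples):
    """Root-mean-square of a sequence of acceleration samples."""
    return math.sqrt(sum(x * x for x in samples) / len(samples))

def ml_ap_ratio(ml_acc, ap_acc):
    """Sacrum M-L to A-P acceleration RMS ratio; per the abstract, lower
    values indicate fewer lateral balance corrections (more stable)."""
    return rms(ml_acc) / rms(ap_acc)
```

In practice the samples would first be gravity-compensated and expressed in a body-fixed frame; that preprocessing is omitted here.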
Parallel load balancing strategy for Volume-of-Fluid methods on 3-D unstructured meshes
NASA Astrophysics Data System (ADS)
Jofre, Lluís; Borrell, Ricard; Lehmkuhl, Oriol; Oliva, Assensi
2015-02-01
Volume-of-Fluid (VOF) is one of the methods of choice for reproducing interface motion in the simulation of multi-fluid flows. One of its main strengths is its accuracy in capturing sharp interface geometries, although this requires a number of geometric calculations. Under these circumstances, achieving parallel performance on current supercomputers is a must. The main obstacle to parallelization is that the computing costs are concentrated in only the discrete elements that lie on the interface between fluids. Consequently, if the interface is not homogeneously distributed throughout the domain, standard domain decomposition (DD) strategies lead to imbalanced workload distributions. In this paper, we present a new parallelization strategy for general unstructured VOF solvers, based on a dynamic load balancing process complementary to the underlying DD. Its parallel efficiency has been analyzed and compared to that of the DD using up to 1024 CPU cores on an Intel Sandy Bridge-based supercomputer. The results obtained on the solution of several artificially generated test cases show a speedup of up to ∼12× with respect to the standard DD, depending on the interface size, the initial distribution, and the number of parallel processes engaged. Moreover, the new parallelization strategy is general purpose; therefore, it could be used to parallelize any VOF solver without requiring changes to the coupled flow solver. Finally, note that although designed for the VOF method, our approach could easily be adapted to other interface-capturing methods, such as the Level-Set, which may present similar workload imbalances.
NASA Astrophysics Data System (ADS)
Esin, S. B.; Trifonov, N. N.; Sukhorukov, Yu. G.; Yurchenko, A. Yu.; Grigor'eva, E. B.; Snegin, I. P.; Zhivykh, D. A.; Medvedkin, A. V.; Ryabich, V. A.
2015-09-01
More than 30 power units of thermal power stations based on the nondeaerating heat balance diagram operate successfully in the former Soviet Union. Most of them are power units with a power of 300 MW, equipped with HTGZ and LMZ turbines. They operate according to a variable electric load curve characterized by deep reductions during night minimums. Additional extension of the power unit adjustment range makes it possible to follow the dispatch load curve and obtain profit for the electric power plant. The objective of this research is to carry out computational and experimental studies of the operating regimes of the regeneration system of steam-turbine plants within the extended adjustment range and under conditions where the constraints on the regeneration system and its equipment are removed. Constraints in the heat balance diagram that reduce power unit efficiency when extending the adjustment range have been considered. Test results are presented for the nondeaerating heat balance diagram with the HTGZ turbine. Turbine-driven and electric feed pump operation was studied at power unit loads of 120-300 MW. The reliability of feed pump operation is confirmed by a stable vibratory condition, the absence of cavitation noise and of vibration at the frequency that characterizes the cavitation condition, and maintenance of normal oil temperature after the bearings. The cavitation performance of the pumps in the studied range of their operation has been determined. Technical solutions are proposed for providing profitable and stable operation of regeneration systems when extending the adjustment range of the power unit load, including a nondeaerating diagram of high-pressure preheater (HPP) condensate discharge to the mixer. A regeneration system with a deaeratorless thermal circuit for removing the HPP heating steam condensate to the mixer has been developed and studied on the operating power unit.
Simulation of Propellant Loading System Senior Design Implement in Computer Algorithm
NASA Technical Reports Server (NTRS)
Bandyopadhyay, Alak
2010-01-01
Propellant loading from the storage tank to the external tank is one of the very important and time-consuming pre-launch ground operations for the launch vehicle. The propellant loading system is a complex integrated system involving many physical components, such as the storage tank filled with cryogenic fluid at a very low temperature, the long pipeline connecting the storage tank with the external tank, the external tank along with the flare stack, and vent systems for releasing excess fuel. Some of the parameters most useful for design purposes are predictions of the pre-chill time, loading time, amount of fuel lost, maximum pressure rise, etc. The physics involved in the mathematical modeling is quite complex: the process is unsteady, there is a phase change as some of the fuel passes from the liquid to the gas state, and there is conjugate heat transfer within the pipe walls as well as between the solid and fluid regions. The simulation is also tedious and time consuming. Overall, this is a complex system, and the objective of the work is the students' involvement in the parametric study and optimization of numerical modeling towards the design of such a system. The students first have to become familiar with the physical process, the related mathematics, and the numerical algorithm. The work involves exploring (i) improved algorithms to make the transient simulation computationally efficient (reduced CPU time) and (ii) a parametric study to evaluate design parameters by changing the operational conditions.
Finding the ILL Load Balance: Quality and Quantity in the 1990's.
ERIC Educational Resources Information Center
Harer, John B.; Robbins, Rachel
1997-01-01
The desire for load leveling in interlibrary loans arises from a concern for fairness most often expressed by net lending institutions, libraries that lend more than they borrow. This article examines interlibrary loans; strategies for load leveling; and "best lending partner," a total quality management model for load leveling. (PEN)
A remote sensing surface energy balance algorithm for land (SEBAL). Part 2: Validation
NASA Astrophysics Data System (ADS)
Bastiaanssen, W. G. M.; Pelgrum, H.; Wang, J.; Ma, Y.; Moreno, J. F.; Roerink, G. J.; van der Wal, T.
1998-12-01
The surface fluxes obtained with the Surface Energy Balance Algorithm for Land (SEBAL), using remote sensing information and limited input data from the field, were validated with data available from the large-scale field experiments EFEDA (Spain), HAPEX-Sahel (Niger) and HEIFE (China). In 85% of the cases where field scale surface flux ratios were compared with SEBAL-based surface flux ratios, the differences were within the range of instrumental inaccuracies. Without any calibration procedure, the root mean square error of the evaporative fraction Λ (latent heat flux/net available radiation) for footprints of a few hundred metres varied from Λ RMSE=0.10 to 0.20. Aggregation of several footprints to a length scale of a few kilometres reduced the overall error to five percent. Fluxes measured by aircraft during EFEDA were used to study the correctness of remotely sensed watershed fluxes (1 000 000 ha): the overall difference in evaporative fraction was negligible. For the Sahelian landscape in Niger, observed differences were larger (15%), which could be attributed to the rapid moisture depletion of the coarse textured soils between the moment of image acquisition (18 September 1992) and the moment of in situ flux analysis (17 September 1992). For HEIFE, the average difference between SEBAL-estimated and ground-verified surface fluxes was 23 W m -2, which, considering that surface fluxes were not used for calibration, is encouraging. SEBAL estimates of evaporation from the sub-sea-level Qattara Depression in Egypt (2 000 000 ha) were consistent with the numerically predicted discharge from the groundwater system. In Egypt's Nile Delta, the evaporation from a distributed field scale water balance model for a 700 000 ha irrigated agricultural region led to a difference of 5% from daily evaporative fluxes obtained from SEBAL. It is concluded that, for all study areas in arid zones, the errors average out if a larger number of pixels is considered. Part 1 of this paper
Soewono, C. N.; Takaki, N.
2012-07-01
In this work, a genetic algorithm was proposed to solve the fuel loading pattern optimization problem in a thorium-fueled heavy water reactor. The objectives of the optimization were to maximize the conversion ratio and minimize the power peaking factor. These objectives were simultaneously optimized using a non-dominated Pareto-based population ranking method. Members of the non-dominated population were assigned selection probabilities based on their rankings, in a manner similar to Baker's single-criterion ranking selection procedure. A selected non-dominated member was bred through simple mutation or one-point crossover to produce a new member. The genetic algorithm program was developed in FORTRAN 90, while the neutronic calculation and analysis were done with the COREBN code, a core burn-up calculation module of SRAC. (authors)
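The non-dominated ranking step described in this abstract can be illustrated with a minimal sketch (not the authors' FORTRAN 90 implementation; the objective values below are made up):

```python
def dominates(a, b):
    """a dominates b if it is no worse in both objectives and strictly
    better in at least one. Objectives here: maximize conversion ratio
    (index 0) and minimize power peaking factor (index 1)."""
    no_worse = a[0] >= b[0] and a[1] <= b[1]
    better = a[0] > b[0] or a[1] < b[1]
    return no_worse and better

def pareto_front(pop):
    """Return the members of pop not dominated by any other member."""
    return [p for p in pop if not any(dominates(q, p) for q in pop if q != p)]

# (conversion_ratio, power_peaking_factor) for candidate loading patterns
pop = [(0.95, 1.8), (0.90, 1.5), (0.93, 1.4), (0.88, 1.9)]
front = pareto_front(pop)
```

In a Pareto-based GA, members of this front would then receive the highest selection probabilities.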
NASA Astrophysics Data System (ADS)
Tang, Qiuhua; Li, Zixiang; Zhang, Liping; Floudas, C. A.; Cao, Xiaojun
2015-09-01
Due to the NP-hardness of the two-sided assembly line balancing (TALB) problem, the multiple constraints that exist in real applications are less studied, especially when one task is involved in several constraints. In this paper, an effective hybrid algorithm is proposed to address the TALB problem with multiple constraints (TALB-MC). Considering the discrete attribute of TALB-MC and the continuous attribute of the standard teaching-learning-based optimization (TLBO) algorithm, the random-keys method is employed for task permutation representation in order to bridge the gap between them. Subsequently, a special mechanism for handling multiple constraints is developed. In this mechanism, the direction constraint of each task is ensured by a direction check and adjustment. The zoning constraints and the synchronism constraints are satisfied by teasing out the hidden correlations among constraints. The positional constraint is allowed to be violated to some extent during decoding and is penalized in the cost function. Finally, with the TLBO seeking the global optimum, variable neighborhood search (VNS) is hybridized to extend the local search space. The experimental results show that the proposed hybrid algorithm outperforms the late-acceptance hill-climbing algorithm (LAHC) for TALB-MC in most cases, especially for large-size problems with multiple constraints, and demonstrates a good balance between exploration and exploitation. This research proposes an effective and efficient algorithm for solving the TALB-MC problem by hybridizing TLBO and VNS.
DESIGN NOTE: A low interaction two-axis wind tunnel force balance designed for large off-axis loads
NASA Astrophysics Data System (ADS)
Ostafichuk, Peter M.; Green, Sheldon I.
2002-10-01
A novel two-axis wind tunnel force balance using air bushings for off-axis load compensation has been developed. The design offers a compact, robust, and versatile option for precisely measuring horizontal force components irrespective of vertical and moment loads. Two independent stages of cylindrical bushings support large moments and vertical force; there is low interaction due to the minimal friction along the horizontal measurement axes. The current design measures drag and side forces up to 70 N and can safely operate in the presence of vertical loads as large as 2200 N and moment loads up to 425, 750, and 425 N m in roll, pitch, and yaw, respectively. Eleven drag axis calibration trials were conducted with a variety of applied vertical forces and pitching moments. The individual linear calibration slopes for the trials agreed to within 0.18% and the largest residual from all calibrations was 0.38% of full scale. As the residuals were found to obey a normal distribution, with 99% certainty the expected drag resolution of the device is better than 0.30% of full scale, independent of off-axis loads.
NASA Astrophysics Data System (ADS)
Williams, Mark R.; King, Kevin W.; Macrae, Merrin L.; Ford, William; Van Esbroeck, Chris; Brunke, Richard I.; English, Michael C.; Schiff, Sherry L.
2015-11-01
Accurate estimates of annual nutrient loads are required to evaluate trends in water quality following changes in land use or management and to calibrate and validate water quality models. While much emphasis has been placed on understanding the uncertainty of nutrient load estimates in large, naturally drained watersheds, few studies have focused on tile-drained fields and small tile-drained headwater watersheds. The objective of this study was to quantify uncertainty in annual dissolved reactive phosphorus (DRP) and nitrate-nitrogen (NO3-N) load estimates from four tile-drained fields and two small tile-drained headwater watersheds in Ohio, USA and Ontario, Canada. High temporal resolution datasets of discharge (10-30 min) and nutrient concentration (2 h to 1 d) were collected over a 1-2 year period at each site and used to calculate a reference nutrient load. Monte Carlo simulations were used to subsample the measured data to assess the effects of sample frequency, calculation algorithm, and compositing strategy on the uncertainty of load estimates. Results showed that uncertainty in annual DRP and NO3-N load estimates was influenced by both the sampling interval and the load estimation algorithm. Uncertainty in annual nutrient load estimates increased with increasing sampling interval for all of the load estimation algorithms tested. Continuous discharge measurements and linear interpolation of nutrient concentrations yielded the least amount of uncertainty, but still tended to underestimate the reference load. Compositing strategies generally improved the precision of load estimates compared to discrete grab samples; however, they often reduced the accuracy. Based on the results of this study, we recommended that nutrient concentration be measured every 13-26 h for DRP and every 2.7-17.5 d for NO3-N in tile-drained fields and small tile-drained headwater watersheds to accurately (±10%) estimate annual loads.
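As a rough illustration of one family of load estimation algorithms compared in the study above (continuous discharge combined with linear interpolation of sparse concentration samples), the following sketch uses made-up numbers and is not the authors' code:

```python
def interpolate(t, ts, cs):
    """Linearly interpolate concentration at time t from sorted sample
    times ts and concentrations cs; hold the end values constant."""
    if t <= ts[0]:
        return cs[0]
    if t >= ts[-1]:
        return cs[-1]
    for i in range(1, len(ts)):
        if t <= ts[i]:
            w = (t - ts[i - 1]) / (ts[i] - ts[i - 1])
            return cs[i - 1] + w * (cs[i] - cs[i - 1])

def total_load(q_times, q_vals, c_times, c_vals, dt):
    """Load = sum over the discharge record of Q * interpolated C * dt."""
    return sum(q * interpolate(t, c_times, c_vals) * dt
               for t, q in zip(q_times, q_vals))

# Illustrative record: discharge every hour, concentration sampled
# only at the start and end of the window
q_times = [0, 1, 2, 3]
q_vals = [1.0, 2.0, 2.0, 1.0]
c_times = [0, 3]
c_vals = [0.1, 0.4]
load = total_load(q_times, q_vals, c_times, c_vals, dt=1.0)
```

In the Monte Carlo experiments described above, the sampling interval of `c_times` would be varied and the resulting estimates compared against a high-resolution reference load.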
Estimating sediment loads in an intra-Apennine catchment: balance between modeling and monitoring
NASA Astrophysics Data System (ADS)
Pelacani, Samanta; Cassi, Paola; Borselli, Lorenzo
2010-05-01
In this study, we compare the results of a soil erosion model applied at the watershed scale to the suspended sediment measured in a stream network affected by a motorway construction. A sediment delivery model is applied at the watershed scale; the evaluation of sediment delivery is related to a connectivity index that describes the internal linkages between runoff and sediment sources in the upper parts of the catchment and the receiving sinks. An analysis of fine suspended sediment transport and storage was conducted for a stream inlet of the Bilancino reservoir, a principal water supply of the city of Florence. The suspended sediment was collected from a section of river defined as a closed system, using time-integrating suspended sediment samplers. The sediment deposited within the sampling traps was recovered after storm events and provided information on the overall contribution of the potential sediment sources. Hillslope gross erosion was assessed with a USLE-type approach. A soil survey at 1:25,000 scale and a soil database were created to calculate, for each soil unit, the erodibility coefficient K using a new algorithm (Salvador Sanchis et al. 2007). The erosivity coefficient R was obtained by applying geostatistical methods that take elevation and valley morphology into account. Furthermore, we evaluate a sediment delivery ratio (SDR) for the entire watershed, which is used to correct the output of the USLE-type model. The innovative aspect of the approach is an SDR factor that is variable in space and time because it is related to a flux connectivity index IC (Borselli et al. 2008) based on the distribution of land use and topographic features. The aim of this study is to understand how well the model simulates the real processes at work in the watershed and subsequently to calibrate the model against the results obtained from monitoring of suspended sediment in the streams. From first results, it appears that human activities such as highway construction have resulted in
NASA Astrophysics Data System (ADS)
Muraleedharan, Rajani
2011-06-01
The future of metering networks requires adaptation to different sensor technologies while reducing energy consumption. In this paper, a routing protocol with the ability to adapt and communicate reliably over varied IEEE standards is proposed. Due to a sensor's resource constraints, such as memory, energy, and processing power, an algorithm that balances resources without compromising performance is preferred. The proposed A-PEARL protocol is tested under harsh simulated scenarios, such as sensor failure and fading conditions. The inherent features of the A-PEARL protocol, such as data aggregation, fusion, and channel hopping, enable minimal resource consumption and secure communication.
NASA Astrophysics Data System (ADS)
Tufford, D. L.; Samadi, S.; Carbone, G. J.
2013-12-01
Recent studies have highlighted the potential challenges in US southeastern watersheds from climate variability. There may be shifts in the water balance due to the complexity of the flow generation processes that determine how water is partitioned in these landscapes. The main objective of this study was to capture the feedback relationships among the water balance components using the Soil & Water Assessment Tool (SWAT) watershed-scale streamflow model linked with the Sequential Uncertainty Fitting (SUFI-2) and Particle Swarm Optimization (PSO) parameter uncertainty algorithms in the Waccamaw River watershed, a low-gradient forested watershed on the Coastal Plain of the southeastern United States. Streamflow water balance uncertainty analysis suggested close correspondence of the model with the physical behavior and system dynamics during different hydroclimatological periods in the 2003-2007 calibration interval. The SUFI-2 water balance analysis revealed that surface runoff, groundwater, and lateral flow contributed 22.2%, 3.9%, and 0.4% of the total water yield during the simulation period, while the PSO analysis indicated contributions of 16.7%, 13.2%, and 0.3%, respectively. Both uncertainty methods found that 71.1% of the total rainfall was lost to evapotranspiration during the simulation interval. The total water yields from both algorithms were overpredicted by up to 14.0% of the annual rainfall inputs during the dry period (2007), which was related to the extra contribution of shallow aquifer flow to the river system. Both algorithms also indicated that surface flow and groundwater runoff dominated the water balance during October and December, respectively, over the prediction interval. Moreover, evaluation of parameter uncertainty and error indicated that the distribution of prediction uncertainty was least in the wet year (2006) and greatest towards the end of the dry period, particularly within alluvial riparian floodplains. Water balance estimation with uncertainty quantification can
NASA Technical Reports Server (NTRS)
Woods, Claudia M.; Brewe, David E.
1988-01-01
A numerical solution to a theoretical model of vapor cavitation in a dynamically loaded journal bearing is developed utilizing a multigrid iteration technique. The method is compared with a noniterative approach in terms of computational time and accuracy. The computational model is based on the Elrod algorithm, a control volume approach to the Reynolds equation which mimics the Jakobsson-Floberg and Olsson cavitation theory. Besides accounting for a moving cavitation boundary and conservation of mass at the boundary, it also conserves mass within the cavitated region via a smeared mass or striated flow extending to both surfaces in the film gap. The mixed nature of the equations (parabolic in the full film zone and hyperbolic in the cavitated zone) coupled with the dynamic aspects of the problem create interesting difficulties for the present solution approach. Emphasis is placed on the methods found to eliminate solution instabilities. Excellent results are obtained for both accuracy and reduction of computational time.
NASA Astrophysics Data System (ADS)
Koloch, Grzegorz; Kaminski, Bogumil
2010-10-01
In this paper, we examine a modification of the classical Vehicle Routing Problem (VRP) in which the shapes of the transported cargo are accounted for. This problem, known as the three-dimensional VRP with loading constraints (3D-VRP), is appropriate when transported commodities are not perfectly divisible but have fixed and heterogeneous dimensions. Restrictions on allowable cargo positionings are also considered. These restrictions are derived from business practice, and they extend the baseline 3D-VRP formulation considered by Koloch and Kaminski (2010). In particular, we investigate how the additional restrictions influence the relative performance of two proposed optimization algorithms: a nested one and a joint one. The performance of both methods is compared on artificial problems and on a large-scale real-life case study.
NASA Technical Reports Server (NTRS)
Woods, C. M.; Brewe, D. E.
1989-01-01
A numerical solution to a theoretical model of vapor cavitation in a dynamically loaded journal bearing is developed utilizing a multigrid iteration technique. The method is compared with a noniterative approach in terms of computational time and accuracy. The computational model is based on the Elrod algorithm, a control volume approach to the Reynolds equation which mimics the Jakobsson-Floberg and Olsson cavitation theory. Besides accounting for a moving cavitation boundary and conservation of mass at the boundary, it also conserves mass within the cavitated region via a smeared mass or striated flow extending to both surfaces in the film gap. The mixed nature of the equations (parabolic in the full film zone and hyperbolic in the cavitated zone) coupled with the dynamic aspects of the problem create interesting difficulties for the present solution approach. Emphasis is placed on the methods found to eliminate solution instabilities. Excellent results are obtained for both accuracy and reduction of computational time.
Multi-Objective Optimization of Heat Load and Run Time for CEBAF Linacs Using Genetic Algorithms
NASA Astrophysics Data System (ADS)
Reeves, Cody; Terzic, Balsa; Hofler, Alicia
2014-09-01
The Continuous Electron Beam Accelerator Facility (CEBAF) consists of two linear accelerators (Linacs) connected by arcs. Within each Linac, there are 200 niobium cavities that use superconducting radio frequency (SRF) to accelerate electrons. The gradients for the cavities are selected to optimize two competing objectives: heat load (the energy required to cool the cavities) and trip rate (how often the beam turns off within an hour). This results in a multidimensional, multi-objective, nonlinear system of equations that is not readily solved by analytical methods. This study improved a genetic algorithm (GA), which applies the concept of natural selection. The primary focus was making the GA more efficient, allowing more cost-effective solutions in the same amount of computation time. Two methods used were constraining the maximum value of the objectives and utilizing previously simulated solutions as the initial generation. A third method of interest involved refining the GA by combining the two objectives into a single weighted-sum objective, which collapses the set of optimal solutions into a single point. By combining these methods, the GA can be made 128 times as effective, reducing computation time from 30 min to 12 s. This is crucial because when a cavity must be turned off, a new solution needs to be computed quickly. This work is of particular interest since it provides an efficient algorithm that can be easily adapted to any Linac facility.
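The weighted-sum refinement described above can be sketched as follows; the weight and normalization scales are illustrative assumptions, not CEBAF operating values:

```python
def weighted_objective(heat_load, trip_rate, w=0.7,
                       heat_scale=5000.0, trip_scale=10.0):
    """Collapse two competing objectives into one scalar score:
    normalize each to a rough characteristic scale, then take a
    convex combination. All scales and weights here are illustrative."""
    return w * (heat_load / heat_scale) + (1 - w) * (trip_rate / trip_scale)

# A GA can then rank candidate gradient settings by this single score
# instead of maintaining a whole Pareto front.
candidates = [(4200.0, 6.0), (4600.0, 3.0)]  # (heat load, trips/hour)
scores = [weighted_objective(h, t) for h, t in candidates]
best = min(range(len(candidates)), key=scores.__getitem__)
```

Collapsing the front to a single point is what lets the search converge to one operating solution quickly, at the cost of fixing the trade-off weight in advance.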
NASA Astrophysics Data System (ADS)
Kumar, Vijay M.; Murthy, ANN; Chandrashekara, K.
2012-05-01
The production planning problem of a flexible manufacturing system (FMS) concerns decisions that have to be made before an FMS begins to produce parts according to a given production plan for an upcoming planning horizon. The main aspect of production planning is the machine loading problem, in which a subset of jobs to be manufactured is selected and their operations are assigned to the relevant machines. Such problems are not only combinatorial optimization problems but also NP-hard, making it difficult to obtain satisfactory solutions using traditional optimization techniques. In this paper, an attempt has been made to address the machine loading problem with the simultaneous objectives of minimizing system unbalance and maximizing throughput, while satisfying the system constraints on available machining time and tool slots, using a meta-hybrid heuristic technique based on genetic algorithm and particle swarm optimization. The results reported in this paper demonstrate the model's efficiency and examine the performance of the system with respect to measures such as throughput and system utilization.
Zimmermann, Frauke; Schwenninger, Christoph; Nolten, Ulrich; Firmbach, Franz Peter; Elfring, Robert; Radermacher, Klaus
2012-08-01
Preservation and recovery of the mechanical leg axis as well as good rotational alignment of the prosthesis components and well-balanced ligaments are essential for the longevity of total knee arthroplasty (TKA). In the framework of the OrthoMIT project, the genALIGN system, a new navigated implantation approach based on intra-operative force-torque measurements, has been developed. With this system, optical or magnetic position tracking as well as any fixation of invasive rigid bodies are no longer necessary. For the alignment of the femoral component along the mechanical axis, a sensor-integrated instrument measures the torques resulting from the deviation between the instrument's axis and the mechanical axis under manually applied axial compression load. When both axes are coaxial, the resulting torques equal zero, and the tool axis can be fixed with respect to the bone. For ligament balancing and rotational alignment of the femoral component, the genALIGN system comprises a sensor-integrated tibial trial inlay measuring the amplitude and application points of the forces transferred between femur and tibia. Hereby, the impact of ligament tensions on knee joint loads can be determined over the whole range of motion. First studies with the genALIGN system, including a comparison with an imageless navigation system, show the feasibility of the concept. PMID:22868781
Chen, Yousu; Huang, Zhenyu; Rice, Mark J.
2012-12-27
Contingency analysis studies are necessary to assess the impact of possible power system component failures. The results of contingency analysis are used to ensure grid reliability, and in power market operation for feasibility tests of market solutions. Currently, these studies are performed in real time based on the current operating conditions of the grid with a pre-selected contingency list, which may result in overlooking some critical contingencies caused by variable system status. To have a complete picture of a power grid, more contingencies need to be studied to improve grid reliability. High-performance computing techniques hold the promise of being able to perform the analysis for more contingency cases within a much shorter time frame. This paper evaluates the performance of counter-based dynamic load-balancing schemes for a massive contingency analysis program on 10,000+ cores. One million N-2 contingency analysis cases with a Western Electricity Coordinating Council power grid model have been used to demonstrate the performance. Speedups of 3964 with 4096 cores and 7877 with 10240 cores are obtained. This paper reports the performance of the load-balancing scheme with a single counter and with two counters, describes disk I/O issues, and discusses other potential techniques for further improving the performance.
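The single-counter dynamic load-balancing scheme can be sketched with threads standing in for cores: a shared counter hands out the next contingency case to whichever worker becomes free, so fast workers naturally take more cases. This is an illustrative sketch, not the authors' HPC implementation:

```python
import threading

def worker(state, n_cases, results):
    """Repeatedly grab the next case index from the shared counter
    under a lock, then 'solve' that case."""
    while True:
        with state["lock"]:
            idx = state["next"]
            if idx >= n_cases:
                return
            state["next"] = idx + 1
        results[idx] = idx * idx  # stand-in for solving one contingency case

n_cases, n_workers = 1000, 8
state = {"next": 0, "lock": threading.Lock()}
results = [None] * n_cases
threads = [threading.Thread(target=worker, args=(state, n_cases, results))
           for _ in range(n_workers)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

At scale, contention on the single counter becomes the bottleneck, which is why the paper also examines a two-counter variant.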
NASA Astrophysics Data System (ADS)
Xie, H.; Hendrickx, J.; Kurc, S.; Small, E.
2002-12-01
Evapotranspiration (ET) is one of the most important components of the water balance, but also one of the most difficult to measure. Field techniques such as soil water balances and Bowen ratio or eddy covariance techniques are local, ranging from point to field scale. SEBAL (Surface Energy Balance Algorithm for Land) is an image-processing model that calculates ET and other energy exchanges at the earth's surface. SEBAL uses satellite image data (TM/ETM+, MODIS, AVHRR, ASTER, and so on) measuring visible, near-infrared, and thermal infrared radiation. SEBAL algorithms predict a complete radiation and energy balance for the surface, along with fluxes of sensible heat and aerodynamic surface roughness (Bastiaanssen et al., 1998; Allen et al., 2001). We are constructing a GIS-based database that includes spatially distributed estimates of ET from remotely sensed data at a resolution of about 30 m. The SEBAL code will be optimized for this region via comparison with surface-based observations of ET, reference ET (from windspeed, solar radiation, humidity, air temperature, and rainfall records), surface temperature, albedo, and so on. The observed data are collected at a series of towers in the middle Rio Grande Basin. The satellite image provides the instantaneous ET (ET_inst) only; estimating 24-hour ET (ET_24) therefore requires some assumptions. Two such assumptions will be evaluated for the study area: (1) that the instantaneous evaporative fraction (EF) is equal to the 24-hour averaged value, and (2) that the instantaneous ETrF (analogous to a crop coefficient, equal to instantaneous ET divided by instantaneous reference ET) is equal to the 24-hour averaged value. Seasonal ET will be estimated by expanding the 24-hour ET proportionally to a reference ET derived from weather data. References: Bastiaanssen, W.G.M., M. Menenti, R.A. Feddes, and A.A.M. Holtslag, 1998, A remote sensing surface energy balance algorithm for land (SEBAL): 1
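Assumption (1) above, a constant evaporative fraction over the day, can be written out explicitly. The numbers below are illustrative, not from the study, and the 2.45 MJ kg-1 latent heat of vaporization is a common approximation:

```python
def daily_et_from_ef(le_inst, rn_inst, g_inst, daily_avail_energy_mj):
    """Instantaneous evaporative fraction EF = LE / (Rn - G), assumed
    constant over the day, so ET_24 = EF * 24-h available energy.
    Instantaneous fluxes in W m-2; daily available energy in MJ m-2;
    ~2.45 MJ evaporates 1 kg (1 mm) of water."""
    ef = le_inst / (rn_inst - g_inst)
    return ef * daily_avail_energy_mj / 2.45  # mm/day

# Illustrative satellite-overpass values
et24 = daily_et_from_ef(le_inst=300.0, rn_inst=500.0, g_inst=100.0,
                        daily_avail_energy_mj=12.0)
```

Assumption (2) is analogous, but holds the ratio of ET to reference ET constant instead of the evaporative fraction.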
NASA Astrophysics Data System (ADS)
Chen, M.; Senay, G. B.; Verdin, J. P.; Rowland, J.
2014-12-01
Current regional-to-global and daily-to-annual evapotranspiration (ET) estimation mainly relies on surface energy balance (SEB) ET models or statistical empirical methods driven by remote sensing data and various meteorology databases. However, these ET models face challenging issues: large uncertainties from inevitable input errors, poorly defined parameters, and inadequate model structures. The eddy covariance measurements of water, energy, and carbon fluxes at globally available FLUXNET tower sites provide a feasible opportunity to assess ET modelling uncertainties. In this study, we focused on an uncertainty analysis of the operational Simplified Surface Energy Balance (SSEBop) algorithm for ET estimation at multiple Ameriflux tower sites with diverse land cover characteristics and climatic conditions. The input land surface temperature (LST) data for the algorithm were adopted from the 8-day composite 1-km Moderate Resolution Imaging Spectroradiometer (MODIS) land surface temperature product. The other input data were taken from the Ameriflux database. Results of the statistical analysis indicated that uncertainties or random errors from the input variables and parameters of SSEBop led to daily and seasonal ET estimates with relative errors around 20% across multiple flux tower sites distributed across different biomes. This uncertainty of SSEBop lies in the 20-30% error range of similar SEB-based ET algorithms, such as the Surface Energy Balance System and the Surface Energy Balance Algorithm for Land. The R2 between daily and seasonal ET estimates by SSEBop and eddy covariance ET measurements at multiple Ameriflux tower sites exceeds 0.7, and is up to 0.9 for croplands, grasslands, and forests, suggesting that the systematic error or bias of SSEBop is acceptable. In summary, the uncertainty assessment verifies that SSEBop is a reliable method for wide-area ET calculation and is especially useful for detecting drought years and relative drought severity for agricultural production
Remote sensing based energy balance algorithms for mapping ET: Current status and future challenges
Technology Transfer Automated Retrieval System (TEKTRAN)
Evapotranspiration (ET) is an essential component of the water balance and a major consumptive use of irrigation water and precipitation on crop land. Remote sensing based agrometeorological models are presently most suited for estimating crop water use at both field and regional scales. Numerous ET...
Algorithm for Bottom Charge based on Load-duration Curve of Plug-in Hybrid Electric Vehicles
NASA Astrophysics Data System (ADS)
Takagi, Masaaki; Iwafune, Yumiko; Yamamoto, Hiromi; Yamaji, Kenji; Okano, Kunihiko; Hiwatari, Ryouji; Ikeya, Tomohiko
In the transport sector, the Plug-in Hybrid Electric Vehicle (PHEV) is being developed as an environmentally friendly vehicle. A PHEV is a kind of hybrid electric vehicle that can be charged from the power grid. Therefore, when analyzing the CO2 emission reduction effect of PHEVs, we need to count the emissions from the power sector. In addition, the emissions from the power sector are greatly influenced by the charge pattern, i.e., the timing of charging. For example, we can realize load leveling by bottom charging, i.e., charging late at night. If nuclear power plants were introduced as a result of load leveling, we could expect substantial CO2 reduction. This study proposes an algorithm for bottom charging based on the load-duration curve of charging. By adjusting the amplitude of the charging power, we can bring the shape of the curve close to that of an ideal bottom charge. We evaluated the algorithm using an optimal generation planning model. The evaluation index is the difference between a Target case, in which PHEVs charge ideally to raise the bottom demand, and a Proposal case, in which PHEVs charge using the proposed algorithm. Annual CO2 emissions of the Target case and the Proposal case are 20.0% and 17.5% less than those of the Reference case, respectively. The ratio of the reduction effect of the Proposal case to that of the Target case is 87.5%. These results show that the proposed algorithm is effective in raising the bottom of the daily load curve.
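A simplified sketch of valley-filling ("bottom") charging, allocating charging energy to the lowest-load hours first, is given below; it omits the amplitude-adjustment step of the proposed algorithm, and all numbers are made up:

```python
def bottom_charge(load, energy, max_power):
    """Allocate total charging energy to the lowest-load hours first
    (the tail of the load-duration curve), capping per-hour power."""
    charge = [0.0] * len(load)
    remaining = energy
    # Visit hours in ascending order of system load
    for h in sorted(range(len(load)), key=lambda i: load[i]):
        if remaining <= 0:
            break
        p = min(max_power, remaining)
        charge[h] = p
        remaining -= p
    return charge

# Hourly system load (arbitrary units); charge 5 units at <= 2 per hour.
# The night-time valley (hours 2-4) absorbs all of the charging.
load = [8, 5, 3, 2, 4, 9]
charge = bottom_charge(load, energy=5.0, max_power=2.0)
```

The effect is to flatten the load-duration curve from below, which is what allows baseload (e.g., nuclear) plants to cover more of the demand.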
NASA Astrophysics Data System (ADS)
Dowdell, David C.; Matthews, G. Peter; Wells, Ian
Two globally averaged mass balance models have been developed to investigate the sensitivity and future level of atmospheric chlorine and bromine as a result of the emission of 14 chloro- and 3 bromo-carbons. The models use production, growth, lifetime and concentration data for each of the halocarbons and divide the production into one of eight uses, these being aerosol propellants, cleaning agents, blowing agents in open and closed cell foams, non-hermetic and hermetic refrigeration, fire retardants and a residual "other" category. Each use category has an associated emission profile which is built into the models to take into account the proportion of halocarbon retained in equipment for a characteristic period of time before its release. Under the Montreal Protocol 3 requirements, a peak chlorine loading of 3.8 ppb is attained in 1994, which does not reduce to 2.0 ppb (the approximate level of atmospheric chlorine when the ozone hole formed) until 2053. The peak bromine loading is 22 ppt, also in 1994, which decays to 12 ppt by the end of next century. The models have been used to (i) compare the effectiveness of Montreal Protocols 1, 2 and 3 in removing chlorine from the atmosphere, (ii) assess the influence of the delayed emission assumptions used in these models compared to immediate emission assumptions used in previous models, (iii) assess the relative effect on the chlorine loading of a tightening of the Montreal Protocol 3 restrictions, and (iv) calculate the influence of chlorine and bromine chemistry as well as the faster phase out of man-made methyl bromide on the bromine loading.
Increasing the precision and accuracy of top-loading balances: application of experimental design.
Bzik, T J; Henderson, P B; Hobbs, J P
1998-01-01
The traditional method of estimating the weight of multiple objects is to obtain the weight of each object individually. We demonstrate that the precision and accuracy of these estimates can be improved by using a weighing scheme in which multiple objects are simultaneously on the balance. The resulting system of linear equations is solved to yield the weight estimates for the objects. Precision and accuracy improvements can be made by using a weighing scheme without requiring any more weighings than the number of objects when a total of at least six objects are to be weighed. It is also necessary that multiple objects can be weighed with about the same precision as that obtained with a single object, and the scale bias remains relatively constant over the set of weighings. Simulated and empirical examples are given for a system of eight objects in which up to five objects can be weighed simultaneously. A modified Plackett-Burman weighing scheme yields a 25% improvement in precision over the traditional method and implicitly removes the scale bias from seven of the eight objects. Applications of this novel use of experimental design techniques are shown to have potential commercial importance for quality control methods that rely on the mass change rate of an object. PMID:21644600
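As an illustration of the linear-algebra machinery only (this is a Hadamard-derived 0/1 spring-balance design, not the paper's modified Plackett-Burman scheme, and it ignores the five-object capacity limit mentioned above; all masses are hypothetical), a combined weighing scheme can be solved by least squares:

```python
import numpy as np

# Row i of X marks which of the 8 objects sit on the balance during
# weighing i: a Sylvester-Hadamard matrix mapped from {-1,+1} to {0,1}.
H = np.array([[(-1) ** bin(i & j).count("1") for j in range(8)]
              for i in range(8)])
X = (1 + H) // 2

true_w = np.array([5.0, 3.2, 7.1, 2.4, 6.6, 4.8, 1.9, 8.3])  # hypothetical masses
rng = np.random.default_rng(1)
y = X @ true_w + rng.normal(0.0, 0.01, size=8)  # 8 combined weighings

# Solve the resulting system of linear equations for the weight estimates.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Precision: the diagonal of (X^T X)^-1 scales the variance of each
# estimate; entries below 1 beat weighing each object on its own.
var_factors = np.diag(np.linalg.inv(X.T @ X))
```

For this particular design, seven of the eight objects get a variance factor of 0.5, i.e., better precision than one-at-a-time weighing with the same number of weighings.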
NASA Technical Reports Server (NTRS)
Brooks, D. R.; Harrison, E. F.; Minnis, P.; Suttles, J. T.; Kandel, R. S.
1986-01-01
A brief description is given of how temporal and spatial variability in the earth's radiative behavior influences the goals of satellite radiation monitoring systems and how some previous systems have addressed the existing problems. Then, results of some simulations of radiation budget monitoring missions are presented. These studies led to the design of the Earth Radiation Budget Experiment (ERBE). A description is given of the temporal and spatial averaging algorithms developed for the ERBE data analysis. These algorithms are intended primarily to produce monthly averages of the net radiant exitance on regional, zonal, and global scales and to provide insight into the regional diurnal variability of radiative parameters such as albedo and long-wave radiant exitance. The algorithms are applied to scanner and nonscanner data for up to three satellites. Modeling of daily shortwave albedo and radiant exitance with satellite sampling that is insufficient to fully account for changing meteorology is discussed in detail. Studies performed during the ERBE mission and software design are reviewed. These studies provide quantitative estimates of the effects of temporally sparse and biased sampling on inferred diurnal and regional radiative parameters. Other topics covered include long-wave diurnal modeling, extraction of a regional monthly net clear-sky radiation budget, the statistical significance of observed diurnal variability, quality control of the analysis, and proposals for validating the results of ERBE time and space averaging.
NASA Astrophysics Data System (ADS)
Bhattarai, Nishan
The flow of water and energy fluxes at the Earth's surface and within the climate system is difficult to quantify. Recent advances in remote sensing technologies have provided scientists with a useful means to improve characterization of these complex processes. However, many challenges remain that limit our ability to optimize remote sensing data in determining evapotranspiration (ET) and energy fluxes. For example, periodic cloud cover limits the operational use of remotely sensed data from passive sensors in monitoring seasonal fluxes. Additionally, there are many remote sensing-based single-source surface energy balance (SEB) models, but no clear guidance on which one to use in a particular application. Two widely used models---surface energy balance algorithm for land (SEBAL) and mapping ET at high resolution with internalized calibration (METRIC)---require substantial human intervention, which limits their applicability in broad-scale studies. This dissertation addressed some of these challenges by proposing novel ways to optimize available resources within the SEB-based ET modeling framework. A simple regression-based Landsat-Moderate Resolution Imaging Spectroradiometer (MODIS) fusion model was developed to integrate Landsat spatial and MODIS temporal characteristics in calculating ET. The fusion model produced reliable estimates of seasonal ET at moderate spatial resolution while mitigating the impact that cloud cover can have on image availability. The dissertation also evaluated five commonly used remote sensing-based single-source SEB models and found that the surface energy balance system (SEBS) may be the best overall model for use in humid subtropical climates. The study also determined that model accuracy varies with land cover type; for example, all models worked well for wet marsh conditions, but the SEBAL and simplified surface energy balance index (S-SEBI) models worked better than the alternatives for grass cover. A new automated approach based on
Lebel, R Marc; Menon, Ravi S; Bowen, Chris V
2006-03-01
Magnetic resonance microscopy using magnetically labeled cells is an emerging discipline offering the potential for non-destructive studies targeting numerous cellular events in medical research. The present work develops a technique to quantify superparamagnetic iron-oxide (SPIO) loaded cells using fully balanced steady state free precession (b-SSFP) imaging. An analytic model based on phase cancellation was derived for a single particle and extended to predict mono-exponential decay versus echo time in the presence of multiple randomly distributed particles. Numerical models verified phase incoherence as the dominant contrast mechanism and evaluated the model using a full range of tissue decay rates, repetition times, and flip angles. Numerical simulations indicated a relaxation rate enhancement (ΔR2b = 0.412 γ·LMD) proportional to LMD, the local magnetic dose (the additional sample magnetization due to the SPIO particles), a quantity related to the concentration of contrast agent. A phantom model of SPIO loaded cells showed excellent agreement with simulations, demonstrated sensitivity comparable to gradient-echo ΔR2* enhancements, and 14 times the sensitivity of spin-echo ΔR2 measurements. We believe this model can be used to facilitate the generation of quantitative maps of targeted cell populations. PMID:16450353
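Taking the stated relation at face value, and assuming γ is the proton gyromagnetic ratio in rad s^-1 T^-1 and the LMD is expressed in tesla (the abstract does not state its unit conventions, so this is an assumption), the enhancement for a hypothetical dose evaluates as:

```python
# Assumed unit conventions: gamma is the proton gyromagnetic ratio in
# rad s^-1 T^-1, and LMD is in tesla; the abstract does not state units.
GAMMA = 2.675e8

def delta_r2b(lmd_tesla):
    """Relaxation-rate enhancement from the abstract's relation
    DeltaR2b = 0.412 * gamma * LMD, returned in s^-1."""
    return 0.412 * GAMMA * lmd_tesla

rate = delta_r2b(1e-7)  # hypothetical local magnetic dose of 0.1 uT
```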
Effects of nutrient loading on the carbon balance of coastal wetland sediments
Morris, J.T.; Bradley, P.M.
1999-01-01
Results of a 12-yr study in an oligotrophic South Carolina salt marsh demonstrate that soil respiration increased by 795 g C m-2 yr-1 and that carbon inventories decreased in sediments fertilized with nitrogen and phosphorus. Fertilized plots became net sources of carbon to the atmosphere, and sediment respiration continues in these plots at an accelerated pace. After 12 yr of treatment, soil macroorganic matter in the top 5 cm of sediment was 475 g C m-2 lower in fertilized plots than in controls, which is equivalent to a constant loss rate of 40 g C m-2 yr-1. It is not known whether soil carbon in fertilized plots has reached a new equilibrium or continues to decline. The increase in soil respiration in the fertilized plots was far greater than the loss of sediment organic matter, which indicates that the increase in soil respiration was largely due to an increase in primary production. Sediment respiration in laboratory incubations also demonstrated positive effects of nutrients. Thus, the results indicate that increased nutrient loading of oligotrophic wetlands can lead to an increased rate of sediment carbon turnover and a net loss of carbon from sediments.
Francois, Marianne M; Carlson, Neil N
2010-01-01
Understanding the complex interaction of droplet dynamics with mass transfer and chemical reactions is of fundamental importance in liquid-liquid extraction. High-fidelity numerical simulation of droplet dynamics with interfacial mass transfer is particularly challenging because the position of the interface between the fluids and the interface physics need to be predicted as part of the solution of the flow equations. In addition, the discontinuities in fluid density, viscosity and species concentration at the interface present additional numerical challenges. In this work, we extend our balanced-force volume-tracking algorithm for modeling surface tension forces (Francois et al., 2006) and propose a global embedded interface formulation to model the interfacial conditions of an interface in thermodynamic equilibrium. To validate our formulation, we perform simulations of pure diffusion problems in one and two dimensions. We then present two- and three-dimensional simulations of a single droplet rising by buoyancy with mass transfer.
NASA Astrophysics Data System (ADS)
Gao, Z. Q.; Liu, C. S.; Gao, W.; Chang, N. B.
2010-07-01
Evapotranspiration (ET) may be used as an ecological indicator to address ecosystem complexity. The accurate measurement of ET is of great significance for studying environmental sustainability, global climate change, and biodiversity. Remote sensing technologies are capable of monitoring both energy and water fluxes on the surface of the Earth. With this advancement, existing models, such as SEBAL, S_SEBI and SEBS, enable us to estimate regional ET at limited temporal and spatial scales. This paper extends the existing modeling efforts with the inclusion of new components for ET estimation at varying temporal and spatial scales under complex terrain. Following a coupled remote sensing and surface energy balance approach, this study emphasizes the structure and function of the Surface Energy Balance with Topography Algorithm (SEBTA). With the aid of elevation and landscape information, such as the slope and aspect parameters derived from the digital elevation model (DEM) and the vegetation cover derived from satellite images, the SEBTA can fully account for the dynamic impacts of complex terrain and changing land cover in concert with varying kinetic parameters (i.e., roughness and zero-plane displacement) over time. In addition, the dry and wet pixels can be recognized automatically and dynamically during image processing, making the SEBTA more sensitive in deriving the sensible heat flux for ET estimation. To demonstrate its application potential, the SEBTA was applied to produce robust estimates of 24-h solar radiation over time, leading to smooth simulation of ET across seasons in northern China, where the regional climate and vegetation cover in different seasons complicate the ET calculations. The SEBTA was validated against measured ground-level data; the consistency index reached 0.92 and the correlation coefficient was 0.87.
NASA Astrophysics Data System (ADS)
Gao, Z. Q.; Liu, C. S.; Gao, W.; Chang, N.-B.
2011-01-01
Evapotranspiration (ET) may be used as an ecological indicator to address ecosystem complexity. The accurate measurement of ET is of great significance for studying environmental sustainability, global climate change, and biodiversity. Remote sensing technologies are capable of monitoring both energy and water fluxes on the surface of the Earth. With this advancement, existing models, such as SEBAL, S_SEBI and SEBS, enable us to estimate regional ET with limited temporal and spatial coverage in the study areas. This paper extends the existing modeling efforts with the inclusion of new components for ET estimation at different temporal and spatial scales under heterogeneous terrain with varying elevations, slopes and aspects. Following a coupled remote sensing and surface energy balance approach, this study emphasizes the structure and function of the Surface Energy Balance with Topography Algorithm (SEBTA). With the aid of elevation and landscape information, such as the slope and aspect parameters derived from the digital elevation model (DEM) and the vegetation cover derived from satellite images, the SEBTA can account for the dynamic impacts of heterogeneous terrain and changing land cover with varying kinetic parameters (i.e., roughness and zero-plane displacement). In addition, the dry and wet pixels can be recognized automatically and dynamically during image processing, making the SEBTA more sensitive in deriving the sensible heat flux for ET estimation. To demonstrate its application potential, the SEBTA was applied to produce robust estimates of 24-h solar radiation over time, leading to smooth simulation of ET across seasons in northern China, where the regional climate and vegetation cover in different seasons complicate the ET calculations. The SEBTA was validated against measured ground-level data; the consistency index reached 0.92 and the correlation coefficient was 0.87.
Analysis of Sting Balance Calibration Data Using Optimized Regression Models
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Bader, Jon B.
2010-01-01
Calibration data of a wind tunnel sting balance were processed using a candidate math model search algorithm that recommends an optimized regression model for the data analysis. During the calibration, the normal force and the moment at the balance moment center were selected as independent calibration variables. The sting balance itself had two moment gages. Therefore, after analyzing the connection between calibration loads and gage outputs, it was decided to choose the difference and the sum of the gage outputs as the two responses that best describe the behavior of the balance. The math model search algorithm was applied to these two responses, and an optimized regression model was obtained for each response. Classical strain gage balance load transformations and the equations of the deflection of a cantilever beam under load are used to show that the search algorithm's two optimized regression models are supported by a theoretical analysis of the relationship between the applied calibration loads and the measured gage outputs. The analysis of the sting balance calibration data set is a rare example of a situation in which the terms of a regression model of a balance can be derived directly from first principles of physics. In addition, it is interesting to note that the search algorithm recommended the correct regression model term combinations using only a set of statistical quality metrics that were applied to the experimental data during the algorithm's term selection process.
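The candidate math model search is described only at a high level here. A toy sketch of the general idea, exhaustively scoring candidate regression term subsets of two calibration variables with an AIC-like metric (the data, term names, and metric are illustrative assumptions, not the NASA algorithm's actual quality metrics):

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)

# Synthetic calibration data: normal force N and moment m as independent
# variables; the response is assumed linear in N with a small N*m
# interaction plus noise (purely illustrative).
N = rng.uniform(-1, 1, 200)
m = rng.uniform(-1, 1, 200)
y = 2.0 * N + 0.3 * N * m + rng.normal(0, 0.01, 200)

# Candidate regression terms for the search.
terms = {"N": N, "m": m, "N^2": N**2, "m^2": m**2, "N*m": N * m}

def fit_metric(cols):
    """Least-squares fit with an intercept, scored by an AIC-like
    metric that penalizes extra terms."""
    X = np.column_stack([np.ones_like(y)] + [terms[c] for c in cols])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return len(y) * np.log(rss / len(y)) + 2 * (len(cols) + 1)

# Exhaustive search over all non-empty term subsets.
best = min((subset
            for r in range(1, len(terms) + 1)
            for subset in itertools.combinations(terms, r)),
           key=fit_metric)
```

The search recovers the terms actually driving the response, mirroring the abstract's point that statistical quality metrics alone can select the physically correct term combination.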
Vechiato, F M V; Rivas, P M S; Ruginsk, S G; Borges, B C; Elias, L L K; Antunes-Rodrigues, J
2016-02-01
Hydroelectrolytic imbalances, such as saline load (SL), trigger behavioral and neuroendocrine responses, such as thirst, hypophagia, vasopressin (AVP) and oxytocin (OT) release and hypothalamus–pituitary–adrenal (HPA) axis activation. To investigate the participation of the type-1 cannabinoid receptor (CB1R) in these homeostatic mechanisms, male adult Wistar rats were subjected to SL (0.3 M NaCl) for four days. SL induced not only increases in the water intake and plasma levels of AVP, OT and corticosterone, as previously described, but also increases in CB1R expression in the lamina terminalis, which integrates sensory afferents, as well as in the hypothalamus, the main integrative and effector area controlling hydroelectrolytic homeostasis. A more detailed analysis revealed that CB1R-positive terminals are in close apposition with not only axons but also dendrites and secretory granules of magnocellular neurons, particularly vasopressinergic cells. In satiated and euhydrated animals, the intracerebroventricular administration of the CB1R selective agonist ACEA (0.1 μg/5 μL) promoted hyperphagia, but this treatment did not reverse the hyperosmolality-induced hypophagia in the SL group. Furthermore, ACEA pretreatment potentiated water intake in the SL animals during rehydration as well as enhanced the corticosterone release and prevented the increase in AVP and OT secretion induced by SL. The same parameters were not changed by ACEA in the animals whose daily food intake was matched to that of the SL group (Pair-Fed). These data indicate that CB1Rs modulate the hydroelectrolytic balance independently of the food intake during sustained hyperosmolality and hypovolemia. PMID:26497248
NASA Astrophysics Data System (ADS)
Tsuzuki, Satori; Aoki, Takayuki
2016-04-01
Numerical simulation of debris flows involving countless objects is an important topic in fluid dynamics and many engineering applications. Particle-based methods are a promising approach to simulating flows interacting with objects. In this paper, we propose an efficient method to realize large-scale simulations of fluid-structure interaction by combining the SPH (Smoothed Particle Hydrodynamics) method for the fluid with the DEM (Discrete Element Method) for the objects on a multi-GPU system. By applying space-filling curves to the decomposition of the computational domain, we are able to keep the same number of particles in each decomposed subdomain. In our implementation, several techniques for particle counting and data movement have been introduced. Fragmentation of the memory used for particles occurs during time integration, and the frequency of de-fragmentation is examined by taking into account the computational load balance and the communication cost between CPU and GPU. A linked-list technique for the particle interactions is introduced to reduce memory usage drastically. It is found that sorting the particle data for the neighboring-particle list with the linked-list method at a certain interval greatly improves memory access. The weak and strong scalabilities of an SPH simulation using 111 million particles were measured from 4 GPUs to 512 GPUs for three types of space-filling curves. A large-scale debris-flow simulation of a tsunami with 10,368 floating rubbles using 117 million particles was successfully carried out with 256 GPUs on the TSUBAME 2.5 supercomputer at the Tokyo Institute of Technology.
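The abstract does not specify which space-filling curves were compared, but the balancing idea can be sketched with a Morton (Z-order) key: sort particles along the curve, then cut the sorted order into equal-sized chunks so every GPU owns the same number of particles. All sizes and names below are hypothetical:

```python
import numpy as np

def morton2d(ix, iy, bits=16):
    """Interleave the bits of integer grid coordinates to form a
    Z-order (Morton) key; nearby keys are spatially close, so equal
    cuts of the sorted order yield compact subdomains."""
    key = np.zeros_like(ix, dtype=np.uint64)
    for b in range(bits):
        key |= ((ix >> np.uint64(b)) & np.uint64(1)) << np.uint64(2 * b)
        key |= ((iy >> np.uint64(b)) & np.uint64(1)) << np.uint64(2 * b + 1)
    return key

rng = np.random.default_rng(3)
pos = rng.random((10_000, 2))                     # hypothetical particle positions
grid = (pos * (1 << 16)).astype(np.uint64)       # quantize onto a 2^16 grid
order = np.argsort(morton2d(grid[:, 0], grid[:, 1]))

# Cut the curve-ordered particles into equal-count chunks, one per GPU.
n_gpus = 8
owner = np.empty(len(pos), dtype=int)
owner[order] = np.arange(len(pos)) // (len(pos) // n_gpus)
```

Each of the 8 "GPUs" ends up owning exactly 1250 particles, which is the load-balance property the decomposition is designed for.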
Cho, Jae Heon; Lee, Jong Ho
2015-11-01
Manual calibration is common in rainfall-runoff model applications. However, rainfall-runoff models include several complicated parameters; thus, significant time and effort are required to calibrate the parameters manually, individually and repeatedly. Automatic calibration has relative merit in terms of time efficiency and objectivity, but shortcomings in capturing processes indigenous to the basin. In this study, a watershed model calibration framework was developed using an influence coefficient algorithm and genetic algorithm (WMCIG) to automatically calibrate distributed models. The optimization problem of minimizing the sum of squares of the normalized residuals of the observed and predicted values was solved using a genetic algorithm (GA). The final model parameters were determined from the iteration with the smallest sum of squares of the normalized residuals across all iterations. The WMCIG was applied to the Gomakwoncheon watershed, located in an area subject to a total maximum daily load (TMDL) in Korea. The proportion of urbanized area in this watershed is low, and the diffuse pollution loads of nutrients such as phosphorus are greater than the point-source pollution loads because of the concentration of rainfall that occurs during the summer. The pollution discharges from the watershed were estimated for each land-use type, and the seasonal variations of the pollution loads were analyzed. Consecutive flow measurement gauges have not been installed in this area, and it is difficult to survey the flow and water quality during the frequent heavy rainfall of the wet season. The Hydrological Simulation Program-Fortran (HSPF) model was used to calculate the runoff flow and water quality in this basin. Using the water quality results, a load duration curve was constructed for the basin, the exceedance frequency of the water quality standard was calculated for each hydrologic condition class, and the percent reduction
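The abstract does not give the GA's operators or settings. A minimal real-coded genetic algorithm minimizing the sum of squared normalized residuals for a toy two-parameter stand-in model (all names, bounds, and values here are hypothetical, not the WMCIG configuration) might look like:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy stand-in for a rainfall-runoff model: runoff = a * rain**b.
rain = np.linspace(0.1, 10.0, 50)
def model(p, r):
    a, b = p
    return a * r ** b

observed = model((2.0, 0.7), rain)  # synthetic "observations"

def objective(p):
    """Sum of squares of normalized residuals, the criterion named
    in the abstract."""
    pred = model(p, rain)
    return np.sum(((observed - pred) / observed) ** 2)

lo_b, hi_b = np.array([0.1, 0.1]), np.array([5.0, 2.0])
pop = rng.uniform(lo_b, hi_b, size=(40, 2))
best_p, best_s = None, np.inf
for gen in range(80):
    scores = np.array([objective(p) for p in pop])
    i = int(np.argmin(scores))
    if scores[i] < best_s:
        best_p, best_s = pop[i].copy(), scores[i]
    # Binary tournament selection.
    idx = rng.integers(0, len(pop), size=(len(pop), 2))
    winners = np.where(scores[idx[:, 0]] < scores[idx[:, 1]],
                       idx[:, 0], idx[:, 1])
    parents = pop[winners]
    # Blend crossover with a shuffled mate, then Gaussian mutation.
    mates = parents[rng.permutation(len(pop))]
    alpha = rng.random((len(pop), 1))
    pop = alpha * parents + (1 - alpha) * mates
    pop = np.clip(pop + rng.normal(0, 0.02, pop.shape), lo_b, hi_b)
    pop[0] = best_p  # elitism: keep the best individual seen so far
```

With elitism, the best objective value is non-increasing across generations and converges near the true parameters of the synthetic data.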
NASA Technical Reports Server (NTRS)
Lansing, F. L.; Strain, D. M.; Chai, V. W.; Higgins, S.
1979-01-01
The Energy Consumption Computer Program was developed to simulate building heating and cooling loads and compute thermal and electric energy consumption and cost. This article reports on the new algorithms and modifications added in an effort to widen the areas of application. The program structure was rewritten accordingly to refine and advance the building model and to further reduce the processing time and cost. The program is noted for its very low cost and ease of use compared to other available codes. The accuracy of the computations is not sacrificed, however, since the results are expected to lie within ±10% of actual energy meter readings.
NASA Astrophysics Data System (ADS)
Rainieri, Carlo; Fabbrocino, Giovanni
2015-08-01
In the last few decades large research efforts have been devoted to the development of methods for automated detection of damage and degradation phenomena at an early stage. Modal-based damage detection techniques are well-established methods, whose effectiveness for Level 1 (existence) and Level 2 (location) damage detection is demonstrated by several studies. The indirect estimation of tensile loads in cables and tie-rods is another attractive application of vibration measurements. It provides interesting opportunities for cheap and fast quality checks in the construction phase, as well as for safety evaluations and structural maintenance over the structure's lifespan. However, the lack of automated modal identification and tracking procedures has long been a relevant drawback to the extensive application of the above-mentioned techniques in engineering practice. An increasing number of field applications of modal-based structural health and performance assessment are appearing after the development of several automated output-only modal identification procedures in the last few years. Nevertheless, additional efforts are still needed to enhance the robustness of automated modal identification algorithms, control the computational effort and improve the reliability of modal parameter estimates (in particular, damping). This paper deals with an original algorithm for automated output-only modal parameter estimation. Particular emphasis is given to the extensive validation of the algorithm based on simulated and real datasets in view of continuous monitoring applications. The results point out that the algorithm is fairly robust and demonstrate its ability to provide accurate and precise estimates of the modal parameters, including damping ratios. As a result, it has been used to develop systems for vibration-based estimation of tensile loads in cables and tie-rods. Promising results have been achieved for non-destructive testing as well as continuous
ERIC Educational Resources Information Center
Ngu, Bing Hiong; Phan, Huy Phuong
2016-01-01
We examined the use of balance and inverse methods in equation solving. The main difference between the balance and inverse methods lies in the operational line (e.g. +2 on both sides vs -2 becomes +2). Differential element interactivity favours the inverse method because the interaction between elements occurs on both sides of the equation for…