Adaptive Load-Balancing Algorithms Using Symmetric Broadcast Networks
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Biswas, Rupak; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
In a distributed-computing environment, it is important to ensure that the processor workloads are adequately balanced. Among numerous load-balancing algorithms, a unique approach due to Das and Prasad defines a symmetric broadcast network (SBN) that provides a robust communication pattern among the processors in a topology-independent manner. In this paper, we propose and analyze three novel SBN-based load-balancing algorithms, and implement them on an SP2. A thorough experimental study with Poisson-distributed synthetic loads demonstrates that these algorithms are very effective in balancing system load while minimizing processor idle time. They also compare favorably with several other existing load-balancing techniques. Additional experiments performed with real data demonstrate that the SBN approach is effective in adaptive computational science and engineering applications where dynamic load balancing is extremely crucial.
Adaptive Load-Balancing Algorithms using Symmetric Broadcast Networks
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Biegel, Bryan A. (Technical Monitor)
2002-01-01
In a distributed computing environment, it is important to ensure that the processor workloads are adequately balanced. Among numerous load-balancing algorithms, a unique approach due to Das and Prasad defines a symmetric broadcast network (SBN) that provides a robust communication pattern among the processors in a topology-independent manner. In this paper, we propose and analyze three efficient SBN-based dynamic load-balancing algorithms, and implement them on an SGI Origin2000. A thorough experimental study with Poisson-distributed synthetic loads demonstrates that our algorithms are effective in balancing system load. By optimizing completion time and idle time, the proposed algorithms are shown to compare favorably with several existing approaches.
Fast Optimal Load Balancing Algorithms for 1D Partitioning
Pinar, Ali; Aykanat, Cevdet
2002-12-09
One-dimensional decomposition of nonuniform workload arrays for optimal load balancing is investigated. The problem has been studied in the literature as the ''chains-on-chains partitioning'' problem. Despite extensive research efforts, heuristics are still used in the parallel computing community with the ''hope'' of good decompositions and the ''myth'' that exact algorithms are hard to implement and not runtime efficient. The main objective of this paper is to show that using exact algorithms instead of heuristics yields significant load balance improvements with a negligible increase in preprocessing time. We provide detailed pseudocodes of our algorithms so that our results can be easily reproduced. We start with a review of the literature on the chains-on-chains partitioning problem. We propose improvements to these algorithms as well as efficient implementation tips. We also introduce novel algorithms that are asymptotically and runtime efficient. We experimented with data sets from two different applications: sparse matrix computations and direct volume rendering. Experiments showed that the proposed algorithms are 100 times faster than a single sparse matrix-vector multiplication for 64-way decompositions on average. Experiments also verify that load balance can be significantly improved by using exact algorithms instead of heuristics. These two findings show that exact algorithms with the efficient implementations discussed in this paper can effectively replace heuristics.
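The exact chains-on-chains algorithms the abstract advocates rest on a simple primitive: a greedy probe that tests whether a candidate bottleneck value is feasible, combined with a search over candidate bottlenecks. A minimal sketch, assuming integer task weights (the paper's own algorithms refine the search strategy considerably):

```python
from itertools import accumulate
import bisect

def probe(prefix, num_parts, bottleneck):
    """Greedily check whether the workload (given by its prefix sums) can be
    split into at most num_parts contiguous chains, each weighing <= bottleneck."""
    parts, start = 0, 0
    n = len(prefix) - 1
    while start < n:
        # farthest index reachable from `start` without exceeding the bottleneck
        end = bisect.bisect_right(prefix, prefix[start] + bottleneck) - 1
        if end == start:          # a single task already exceeds the bottleneck
            return False
        start = end
        parts += 1
        if parts > num_parts:
            return False
    return True

def optimal_bottleneck(weights, num_parts):
    """Binary-search the minimum achievable bottleneck (exact for integer weights)."""
    prefix = [0] + list(accumulate(weights))
    lo, hi = max(weights), prefix[-1]
    while lo < hi:
        mid = (lo + hi) // 2
        if probe(prefix, num_parts, mid):
            hi = mid
        else:
            lo = mid + 1
    return lo
```

For example, splitting weights [1, 2, 3, 4, 5] into two chains yields an optimal bottleneck of 9 ([1, 2, 3] | [4, 5]); the probe itself is O(P log N) per call thanks to the prefix-sum bisection.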
LBR: Load Balancing Routing Algorithm for Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Daabaj, Khaled; Dixon, Mike; Koziniec, Terry
2010-06-01
Homogeneous wireless sensor networks (WSNs) are organized using identical sensor nodes, but the nature of WSN operations results in an imbalanced workload on gateway sensor nodes, which may lead to a hot-spot or routing hole problem. The routing hole problem can be considered a natural result of the tree-based routing schemes widely used in WSNs, where all nodes construct a multi-hop routing tree to a centralized root, e.g., a gateway or base station. For example, sensor nodes on the routing path and closer to the base station deplete their own energy faster than other nodes, and sensor nodes with the best link state to the base station are overloaded with traffic from the rest of the network and experience a faster energy depletion rate than their peers. Routing protocols for WSNs are reliability-oriented, and their use of a reliability metric to avoid unreliable links can make the imbalance worse. However, none of these reliability-oriented routing protocols explicitly uses load balancing in its routing scheme. Since improving network lifetime is a fundamental challenge of WSNs, we present, in this chapter, a novel, energy-wise, load balancing routing (LBR) algorithm that addresses load balancing in an energy-efficient manner by maintaining a reliable set of parent nodes. This allows sensor nodes to quickly find a new parent upon parent loss due to node failure or an energy hole. The proposed routing algorithm is tested using simulations, and the results demonstrate that it outperforms the reliability-based MultiHopLQI routing algorithm.
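The parent-set idea can be illustrated with a toy selection rule; the threshold, metrics, and function name below are assumptions for illustration only, not LBR's actual formulas:

```python
def choose_parent(parent_set, link_quality, residual_energy, q_min=0.7):
    """From a maintained set of candidate parents, keep only those whose
    link quality clears a reliability threshold, then pick the one with the
    most residual energy -- steering load away from nearly depleted nodes.
    Returns None if no reliable parent remains (which would trigger repair
    of the parent set)."""
    reliable = [p for p in parent_set if link_quality[p] >= q_min]
    if not reliable:
        return None
    return max(reliable, key=lambda p: residual_energy[p])
```

A node that loses its current parent can re-run this selection over its cached parent set immediately, instead of rebuilding a route from scratch.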
Dynamic load balance scheme for the DSMC algorithm
Li, Jin; Geng, Xiangren; Jiang, Dingwu; Chen, Jianqiang
2014-12-09
The direct simulation Monte Carlo (DSMC) algorithm, devised by Bird, has been used over a wide range of rarefied flow problems in the past 40 years. While DSMC is suitable for parallel implementation on powerful multi-processor architectures, it also introduces a large load imbalance across the processor array, even for small examples. The load imposed on a processor by a DSMC calculation is determined to a large extent by the total number of simulator particles upon it. Since most flows are impulsively started with an initial particle distribution that is quite different from the steady state, the total number of simulator particles changes dramatically, and a load balance based upon the initial distribution of particles breaks down as the steady state of the flow is reached. The load imbalance and huge computational cost of DSMC have limited its application to rarefied or simple transitional flows. In this paper, by taking advantage of METIS, a software package for partitioning unstructured graphs, and taking the total number of simulator particles in each cell as weighting information, a repartitioning based upon the principle that each processor handles approximately the same total number of simulator particles has been achieved. The computation pauses several times to renew the particle totals in each processor and repartition the whole domain, so the load balance across the processor array holds for the duration of the computation and the parallel efficiency is improved effectively. The benchmark solution of a cylinder submerged in hypersonic flow has been simulated numerically, as well as hypersonic flow past a complex wing-body configuration. The results show that, for both cases, the computational time can be reduced by about 50%.
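The repartitioning principle — weight each cell by its current particle count and redistribute so processors carry roughly equal totals — can be sketched without METIS. The greedy stand-in below illustrates only the balancing objective; METIS additionally minimizes the number of cut edges between adjacent mesh cells, which this sketch ignores:

```python
import heapq

def repartition(cell_particles, nprocs):
    """Assign cells to processors so each holds roughly the same total
    number of simulator particles. `cell_particles` maps cell id -> count.
    Classic longest-processing-time greedy: place heaviest cells first,
    always onto the currently lightest processor."""
    heap = [(0, p) for p in range(nprocs)]     # (current load, processor id)
    heapq.heapify(heap)
    assignment = {}
    for cell, n in sorted(cell_particles.items(), key=lambda kv: -kv[1]):
        load, p = heapq.heappop(heap)
        assignment[cell] = p
        heapq.heappush(heap, (load + n, p))
    return assignment
```

In the paper's setting this step would run each time the computation pauses, using the freshly renewed per-cell particle counts as weights.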
A global plan policy for coherent co-operation in distributed dynamic load balancing algorithms
NASA Astrophysics Data System (ADS)
Kara, M.
1995-12-01
Distributed-controlled dynamic load balancing algorithms are known to have several advantages over centralized algorithms, such as scalability and fault tolerance. Distributed implies that the control is decentralized and that a copy of the algorithm (called a scheduler) is replicated on each host of the network. However, distributed control also contributes to a lack of global goals and a lack of coherence. This paper presents a new algorithm called DGP (decentralized global plans) that addresses the problem of coherence and co-ordination in distributed dynamic load balancing algorithms. The DGP algorithm is based on a strategy called global plans (GP) and aims at maintaining all computational loads of a distributed system within a band called delta. The rationale for the design of DGP is to allow each scheduler to consider the actions of its peer schedulers. With this level of co-ordination, the schedulers can act more as a coherent team. This new approach first explicitly specifies a global goal and then designs a strategy around this global goal such that each scheduler (i) takes into account local decisions made by other schedulers; (ii) takes into account the effect of its local decisions on the overall system; and (iii) ensures load balancing. An experimental evaluation of DGP against two other well-known dynamic load balancing algorithms published in the literature shows that DGP performs consistently better. More significantly, the results indicate that the global plan approach provides a better framework for the design of distributed dynamic load balancing algorithms.
A Universal Tare Load Prediction Algorithm for Strain-Gage Balance Calibration Data Analysis
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2011-01-01
An algorithm is discussed that may be used to estimate tare loads of wind tunnel strain-gage balance calibration data. The algorithm was originally developed by R. Galway of IAR/NRC Canada and has been described in the literature for the iterative analysis technique. Basic ideas of Galway's algorithm, however, are universally applicable and work for both the iterative and the non-iterative analysis technique. A recent modification of Galway's algorithm is presented that improves the convergence behavior of the tare load prediction process if it is used in combination with the non-iterative analysis technique. The modified algorithm allows an analyst to use an alternate method for the calculation of intermediate non-linear tare load estimates whenever Galway's original approach does not lead to a convergence of the tare load iterations. It is also shown in detail how Galway's algorithm may be applied to the non-iterative analysis technique. Hand load data from the calibration of a six-component force balance is used to illustrate the application of the original and modified tare load prediction method. During the analysis of the data both the iterative and the non-iterative analysis technique were applied. Overall, predicted tare loads for combinations of the two tare load prediction methods and the two balance data analysis techniques showed excellent agreement as long as the tare load iterations converged. The modified algorithm, however, appears to have an advantage over the original algorithm when absolute voltage measurements of gage outputs are processed using the non-iterative analysis technique. In these situations only the modified algorithm converged because it uses an exact solution of the intermediate non-linear tare load estimate for the tare load iteration.
Dynamic load balancing algorithm for molecular dynamics based on Voronoi cells domain decompositions
Fattebert, J.-L.; Richards, D.F.; Glosli, J.N.
2012-12-01
We present a new algorithm for automatic parallel load balancing in classical molecular dynamics. It assumes a spatial domain decomposition of particles into Voronoi cells. It is a gradient method which attempts to minimize a cost function by displacing the Voronoi sites associated with each processor/sub-domain along steepest-descent directions. Excellent load balance has been obtained for quasi-2D and 3D practical applications, with up to 440 million particles on 65,536 MPI tasks.
A load-balance path selection algorithm in automatically switched optical network (ASON)
NASA Astrophysics Data System (ADS)
Gao, Fei; Lu, Yueming; Ji, Yuefeng
2007-11-01
In this paper, a novel load-balance algorithm is proposed to provide an approach to optimized path selection in automatically switched optical networks (ASON). By using this algorithm, improved survivability and low congestion can be achieved. The static nature of current routing algorithms, such as OSPF or IS-IS, worsens the situation, since traffic is concentrated on the "least-cost" paths, causing congestion on some links while leaving other links lightly loaded. The key, then, is to select suitable paths that balance the network load so as to optimize network resource utilization and traffic performance. We present a method to control traffic engineering so that carriers can define their own optimization strategies and apply them to path selection for dynamic load balancing. Considering load distribution and topology information, a capacity utilization factor is introduced into Dijkstra's shortest-path selection to balance traffic over the network. Routing simulations have been run over mesh networks to compare the two algorithms and draw conclusions about their relative performance.
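The general idea of folding a capacity utilization factor into Dijkstra can be sketched as follows. The weighting formula below (base cost scaled by 1 + alpha x utilization) is an illustrative assumption, not necessarily the paper's exact factor:

```python
import heapq

def load_balanced_path(graph, src, dst, alpha=1.0):
    """Dijkstra variant where link cost = base_cost * (1 + alpha * utilization),
    penalizing heavily loaded links during path selection.
    `graph[u]` maps neighbor v -> (base_cost, used_capacity, total_capacity)."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == dst:
            break
        for v, (cost, used, cap) in graph.get(u, {}).items():
            w = cost * (1 + alpha * used / cap)
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(heap, (d + w, v))
    if dst not in dist:
        return None, float('inf')
    path, node = [dst], dst          # walk predecessors back to the source
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]
```

With alpha = 0 this degenerates to plain least-cost routing; raising alpha steers traffic away from congested links, which is exactly the behavior the abstract contrasts against OSPF/IS-IS.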
A parallel dynamic load balancing algorithm for 3-D adaptive unstructured grids
NASA Technical Reports Server (NTRS)
Vidwans, A.; Kallinderis, Y.; Venkatakrishnan, V.
1993-01-01
Adaptive local grid refinement and coarsening results in unequal distribution of workload among the processors of a parallel system. A novel method for balancing the load in cases of dynamically changing tetrahedral grids is developed. The approach employs local exchange of cells among processors in order to redistribute the load equally. An important part of the load balancing algorithm is the method employed by a processor to determine which cells within its subdomain are to be exchanged. Two such methods are presented and compared. The strategy for load balancing is based on the Divide-and-Conquer approach which leads to an efficient parallel algorithm. This method is implemented on a distributed-memory MIMD system.
Load Balancing Scientific Applications
Pearce, Olga Tkachyshyn
2014-12-01
The largest supercomputers have millions of independent processors, and concurrency levels are rapidly increasing. For ideal efficiency, developers of the simulations that run on these machines must ensure that computational work is evenly balanced among processors. Assigning work evenly is challenging because many large modern parallel codes simulate behavior of physical systems that evolve over time, and their workloads change over time. Furthermore, the cost of imbalanced load increases with scale because most large-scale scientific simulations today use a Single Program Multiple Data (SPMD) parallel programming model, and an increasing number of processors will wait for the slowest one at the synchronization points. To address load imbalance, many large-scale parallel applications use dynamic load balance algorithms to redistribute work evenly. The research objective of this dissertation is to develop methods to decide when and how to load balance the application, and to balance it effectively and affordably. We measure and evaluate the computational load of the application, and develop strategies to decide when and how to correct the imbalance. Depending on the simulation, a fast, local load balance algorithm may be suitable, or a more sophisticated and expensive algorithm may be required. We developed a model for comparison of load balance algorithms for a specific state of the simulation that enables the selection of a balancing algorithm that will minimize overall runtime.
A De-Centralized Scheduling and Load Balancing Algorithm for Heterogeneous Grid Environments
NASA Technical Reports Server (NTRS)
Arora, Manish; Das, Sajal K.; Biswas, Rupak; Biegel, Bryan (Technical Monitor)
2002-01-01
In the past two decades, numerous scheduling and load balancing techniques have been proposed for locally distributed multiprocessor systems. However, they all suffer from significant deficiencies when extended to a Grid environment: some use a centralized approach that renders the algorithm unscalable, while others assume the overhead involved in searching for appropriate resources to be negligible. Furthermore, classical scheduling algorithms do not consider a Grid node to be N-resource rich and merely work towards maximizing the utilization of one of the resources. In this paper we propose a new scheduling and load balancing algorithm for a generalized Grid model of N-resource nodes that not only takes into account the node and network heterogeneity, but also considers the overhead involved in coordinating among the nodes. Our algorithm is de-centralized, scalable, and overlaps the node coordination time with that of the actual processing of ready jobs, thus saving valuable clock cycles needed for making decisions. The proposed algorithm is studied by conducting simulations using the Message Passing Interface (MPI) paradigm.
A De-centralized Scheduling and Load Balancing Algorithm for Heterogeneous Grid Environments
NASA Technical Reports Server (NTRS)
Arora, Manish; Das, Sajal K.; Biswas, Rupak
2002-01-01
In the past two decades, numerous scheduling and load balancing techniques have been proposed for locally distributed multiprocessor systems. However, they all suffer from significant deficiencies when extended to a Grid environment: some use a centralized approach that renders the algorithm unscalable, while others assume the overhead involved in searching for appropriate resources to be negligible. Furthermore, classical scheduling algorithms do not consider a Grid node to be N-resource rich and merely work towards maximizing the utilization of one of the resources. In this paper, we propose a new scheduling and load balancing algorithm for a generalized Grid model of N-resource nodes that not only takes into account the node and network heterogeneity, but also considers the overhead involved in coordinating among the nodes. Our algorithm is decentralized, scalable, and overlaps the node coordination time with that of the actual processing of ready jobs, thus saving valuable clock cycles needed for making decisions. The proposed algorithm is studied by conducting simulations using the Message Passing Interface (MPI) paradigm.
Multidimensional spectral load balancing
Hendrickson, B.; Leland, R.
1993-01-01
We describe an algorithm for the static load balancing of scientific computations that generalizes and improves upon spectral bisection. Through a novel use of multiple eigenvectors, our new spectral algorithm can divide a computation into 4 or 8 pieces at once. These multidimensional spectral partitioning algorithms generate balanced partitions that have lower communication overhead and are less expensive to compute than those produced by spectral bisection. In addition, they automatically work to minimize message contention on a hypercube or mesh architecture. These spectral partitions are further improved by a multidimensional generalization of the Kernighan-Lin graph partitioning algorithm. Results on several computational grids are given and compared with those of other popular methods.
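The use of multiple Laplacian eigenvectors can be sketched with a simple sign-splitting rule (an illustration assuming NumPy; the Hendrickson-Leland algorithm additionally balances part sizes and refines the result with the multidimensional Kernighan-Lin pass mentioned above):

```python
import numpy as np

def spectral_parts(adj, k=1):
    """Partition a graph into 2**k parts using the signs of the k smallest
    nontrivial eigenvectors of its Laplacian. With k=1 this is spectral
    bisection; k=2 and k=3 give quadrisection and octasection."""
    degree = np.diag(adj.sum(axis=1))
    lap = degree - adj
    # eigh returns eigenvalues in ascending order; column 0 is the
    # trivial constant eigenvector, so we skip it
    _, vecs = np.linalg.eigh(lap)
    signs = vecs[:, 1:1 + k] >= 0
    # encode each vertex's sign pattern as a part id in [0, 2**k)
    return (signs * (2 ** np.arange(k))).sum(axis=1)
```

On a 4-vertex path graph with k=1, the Fiedler vector is monotone along the path, so the sign split separates the two halves — the intuitive behavior spectral bisection is known for.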
Devi, D. Chitra; Uthariaraj, V. Rhymend
2016-01-01
Cloud computing uses the concepts of scheduling and load balancing to migrate tasks to underutilized VMs so that resources are shared effectively. The scheduling of nonpreemptive tasks in the cloud computing environment is an irrecoverable restraint, and hence each task has to be assigned to the most appropriate VM at the initial placement itself. In practice, arriving jobs consist of multiple interdependent tasks, and their independent tasks may execute in multiple VMs or in multiple cores of the same VM. Moreover, jobs arrive during the run time of the server at varying random intervals and under various load conditions. The participating heterogeneous resources are managed by allocating tasks to appropriate resources through static or dynamic scheduling, which makes cloud computing more efficient and thus improves user satisfaction. The objective of this work is to introduce and evaluate the proposed scheduling and load balancing algorithm, which considers the capabilities of each virtual machine (VM), the task length of each requested job, and the interdependency of multiple tasks. Performance of the proposed algorithm is studied by comparison with existing methods. PMID:26955656
NASA Astrophysics Data System (ADS)
Zhang, Xiaoning; Li, Lemin; Wang, Sheng
2006-09-01
The paper considers the problem of establishing robust routes for multi-granularity connection requests in traffic-grooming WDM mesh networks and proposes a novel Valiant Load-Balanced robust routing scheme for the hose uncertainty model. Our objective is to minimize the total network cost while assuring robust routing for all possible multi-granularity connection requests under the hose model. Since the optimization problem has recently been shown to be NP-hard, two heuristic algorithms are proposed and compared. When applying the Valiant Load-Balanced robust routing scheme to WDM mesh networks, a novel traffic-grooming algorithm called MHF (minimal hop first) is proposed. We evaluate MHF against the traditional traffic-grooming algorithm under Valiant Load-Balanced robust routing by computer simulation.
NASA Astrophysics Data System (ADS)
Yue, Yingchao; Fan, Wenhui; Xiao, Tianyuan; Ma, Cheng
2013-07-01
High level architecture (HLA) is the open standard in the collaborative simulation field. Scholars have been paying close attention to theoretical research on, and engineering applications of, collaborative simulation based on HLA/RTI, which extends HLA in various aspects such as functionality and efficiency. However, related study of the load balancing problem in HLA collaborative simulation is insufficient. Without load balancing, collaborative simulation under HLA/RTI may encounter performance reduction or even fatal errors. In this paper, load balancing is divided into static and dynamic problems. A multi-objective model is established, and the randomness of model parameters is taken into consideration for static load balancing, which makes the model more credible. A Monte Carlo based optimization algorithm (MCOA) is devised to attain static load balance. For dynamic load balancing, a new type of dynamic load balancing problem is put forward with regard to variable-structured collaborative simulation under HLA/RTI. In order to minimize the influence on the running collaborative simulation, an ordinal optimization based algorithm (OOA) is devised to shorten the optimization time. Furthermore, the two algorithms are adopted in simulation experiments of different scenarios, which demonstrate their effectiveness and efficiency. An engineering experiment on collaborative simulation under HLA/RTI of high-speed electric multiple units (EMU) is also conducted to identify the credibility of the proposed models and the supportive utility of MCOA and OOA for practical engineering systems. The proposed research ensures compatibility with traditional HLA, enhances the ability to assign simulation loads onto computing units both statically and dynamically, improves the performance of the collaborative simulation system, and makes full use of the hardware resources.
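The Monte Carlo flavor of the static step can be illustrated with a bare-bones random search — a heavily simplified stand-in for MCOA, which in the paper handles multiple objectives and random model parameters rather than the single max-load objective used here:

```python
import random

def mc_balance(task_loads, nprocs, trials=2000, seed=0):
    """Sample random task-to-processor assignments and keep the one with
    the smallest maximum processor load (the makespan). Illustrative only:
    pure random search over assignments, with a fixed seed for repeatability."""
    rng = random.Random(seed)
    best, best_cost = None, float('inf')
    for _ in range(trials):
        assign = [rng.randrange(nprocs) for _ in task_loads]
        loads = [0] * nprocs
        for proc, load in zip(assign, task_loads):
            loads[proc] += load
        cost = max(loads)
        if cost < best_cost:
            best, best_cost = assign, cost
    return best, best_cost
```

For small instances the sampler reliably finds the optimum; the point of MCOA-style methods is that the same sample-and-evaluate loop still applies when the objective is a noisy multi-criteria simulation rather than a closed-form cost.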
NASA Astrophysics Data System (ADS)
Rossi, Francesco; Londrillo, Pasquale; Sgattoni, Andrea; Sinigardi, Stefano; Turchetti, Giorgio
2012-12-01
We present `jasmine', an implementation of a fully relativistic, 3D, electromagnetic Particle-In-Cell (PIC) code, capable of running simulations in various laser plasma acceleration regimes on Graphics Processing Unit (GPU) HPC clusters. Standard energy/charge-preserving FDTD-based algorithms have been implemented using double precision and quadratic (or arbitrarily sized) shape functions for the particle weighting. When porting a PIC scheme to the GPU architecture (or, in general, a shared-memory environment), the particle-to-grid operations (e.g. the evaluation of the current density) require special care to avoid memory inconsistencies and conflicts. Here we present a robust implementation of this operation that is efficient for any number of particles per cell and particle shape function order. Our algorithm exploits the exposed GPU memory hierarchy and avoids the use of atomic operations, which can hurt performance, especially when many particles lie in the same cell. We show the code's multi-GPU scalability results and present a dynamic load-balancing algorithm. The code is written using a Python-based C++ meta-programming technique, which translates into a high level of modularity and allows for easy performance tuning and simple extension of the core algorithms to various simulation schemes.
Choi, D.S.; Hasegawa, Jun; Kim, C.S.
1995-12-31
Network reconfiguration in a distribution system is realized by changing the status of sectionalizing switches, and is usually done for loss reduction or for load balancing in the system. This paper presents a new method that applies a genetic algorithm to determine which sectionalizing switches to operate in order to solve the distribution system loss-minimization reconfiguration problem. In addition, the proposed method introduces a new limited-life feature for performing natural selection of individuals. Simulations were carried out to verify the effectiveness of the proposed method. The results showed that the proposed method is effective in dealing with the problems of homogeneity and genetic drift associated with the population in the initial state.
Multidimensional spectral load balancing
Hendrickson, Bruce A.; Leland, Robert W.
1996-12-24
A method of and apparatus for graph partitioning involving the use of a plurality of eigenvectors of the Laplacian matrix of the graph of the problem for which load balancing is desired. The invention is particularly useful for optimizing parallel computer processing of a problem and for minimizing total pathway lengths of integrated circuits in the design stage.
Dynamic load balancing of applications
Wheat, Stephen R.
1997-01-01
An application-level method for dynamically maintaining global load balance on a parallel computer, particularly on massively parallel MIMD computers. Global load balancing is achieved by overlapping neighborhoods of processors, where each neighborhood performs local load balancing. The method supports a large class of finite element and finite difference based applications and provides an automatic element management system to which applications are easily integrated.
Dynamic load balancing of applications
Wheat, S.R.
1997-05-13
An application-level method for dynamically maintaining global load balance on a parallel computer, particularly on massively parallel MIMD computers is disclosed. Global load balancing is achieved by overlapping neighborhoods of processors, where each neighborhood performs local load balancing. The method supports a large class of finite element and finite difference based applications and provides an automatic element management system to which applications are easily integrated. 13 figs.
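The overlapping-neighborhood idea belongs to the family of local diffusion schemes, in which each processor exchanges load only with its neighbors yet the system converges toward global balance. A generic diffusion sketch (an illustration of the neighborhood principle, not the patented element-management method itself):

```python
def diffuse(loads, neighbors, alpha=0.25, steps=200):
    """First-order diffusion load balancing: each node repeatedly shifts a
    fraction alpha of its load difference to/from every neighbor. Purely
    local exchanges drive a connected network toward the global average.
    `neighbors[i]` lists the nodes adjacent to node i (symmetric)."""
    loads = list(loads)
    for _ in range(steps):
        deltas = [0.0] * len(loads)
        for i, nbrs in enumerate(neighbors):
            for j in nbrs:
                deltas[i] += alpha * (loads[j] - loads[i])
        loads = [l + d for l, d in zip(loads, deltas)]
    return loads
```

Because exchanges are pairwise symmetric, the total load is conserved at every step; for stability alpha must stay below the reciprocal of the maximum node degree.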
Libra: Scalable Load Balance Analysis
2009-09-16
Libra is a tool for scalable analysis of load balance data from all processes in a parallel application. Libra contains an instrumentation module that collects model data from parallel applications and a parallel compression mechanism that uses distributed wavelet transforms to gather load balance model data in a scalable fashion. Data is output to files, and these files can be viewed in a GUI tool by Libra users. The GUI tool associates particular load balance data with regions of code, enabling users to view the load balance properties of distributed "slices" of their application code.
14 CFR 23.421 - Balancing loads.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Balancing loads. 23.421 Section 23.421... Balancing Surfaces § 23.421 Balancing loads. (a) A horizontal surface balancing load is a load necessary to... balancing surfaces must be designed for the balancing loads occurring at any point on the limit...
14 CFR 23.421 - Balancing loads.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Balancing loads. 23.421 Section 23.421... Balancing Surfaces § 23.421 Balancing loads. (a) A horizontal surface balancing load is a load necessary to... balancing surfaces must be designed for the balancing loads occurring at any point on the limit...
Isorropia Partitioning and Load Balancing Package
2006-09-01
Isorropia is a partitioning and load balancing package which interfaces with the Zoltan library. Isorropia can accept input objects such as matrices and matrix-graphs, and repartition/redistribute them into a better data distribution on parallel computers. Isorropia is primarily an interface package, utilizing graph and hypergraph partitioning algorithms in the Zoltan library, which is a third-party library to Trilinos.
Load Balancing Sequences of Unstructured Adaptive Grids
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Oliker, Leonid
1997-01-01
Mesh adaption is a powerful tool for efficient unstructured grid computations but causes load imbalance on multiprocessor systems. To address this problem, we have developed PLUM, an automatic portable framework for performing adaptive large-scale numerical computations in a message-passing environment. This paper makes several important additions to our previous work. First, a new remapping cost model is presented and empirically validated on an SP2. Next, our load balancing strategy is applied to sequences of dynamically adapted unstructured grids. Results indicate that our framework is effective on many processors for both steady and unsteady problems with several levels of adaption. Additionally, we demonstrate that a coarse starting mesh produces high quality load balancing, at a fraction of the cost required for a fine initial mesh. Finally, we show that the data remapping overhead can be significantly reduced by applying our heuristic processor reassignment algorithm.
Design of dynamic load-balancing tools for parallel applications
Devine, K.D.; Hendrickson, B.A.; Boman, E.G.; St. John, M.; Vaughan, C.T.
2000-01-03
The design of general-purpose dynamic load-balancing tools for parallel applications is more challenging than the design of static partitioning tools. Both algorithmic and software engineering issues arise. The authors have addressed many of these issues in the design of the Zoltan dynamic load-balancing library. Zoltan has an object-oriented interface that makes it easy to use and provides separation between the application and the load-balancing algorithms. It contains a suite of dynamic load-balancing algorithms, including both geometric and graph-based algorithms. Its design makes it valuable both as a partitioning tool for a variety of applications and as a research test-bed for new algorithmic development. In this paper, the authors describe Zoltan's design and demonstrate its use in an unstructured-mesh finite element application.
Static load balancing for CFD distributed simulations
Chronopoulos, A T; Grosu, D; Wissink, A; Benche, M
2001-01-26
The cost/performance ratio of networks of workstations has been constantly improving. This trend is expected to continue in the near future. The aggregate peak rate of such systems often matches or exceeds the peak rate offered by the fastest parallel computers. This has motivated research towards using a network of computers, interconnected via a fast network (cluster system) or a simple Local Area Network (LAN) (distributed system), for high performance concurrent computations. Important research issues arise, such as (1) optimal problem partitioning and virtual interconnection topology mapping; and (2) optimal execution scheduling and load balancing. CFD codes have been efficiently implemented on homogeneous parallel systems in the past. In particular, the helicopter aerodynamics CFD code TURNS has been implemented with MPI on the IBM SP, with parallel relaxation and Krylov iterative methods used in place of more traditional recursive algorithms to enhance performance. In this implementation the space domain is divided into equal subdomains which are mapped to the processors. We consider the implementation of TURNS on a LAN of heterogeneous workstations. To deal with the load imbalance caused by different processor speeds, we propose a suboptimal algorithm that divides the space domain into unequal subdomains and assigns them to the different computers. The algorithm can be applied to other CFD applications. We used our algorithm to schedule TURNS on a network of workstations and obtained significantly better results.
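The unequal-subdomain idea above can be sketched in a few lines. This is an illustrative stand-in, not code from the paper; the function name and interface are assumptions. Subdomain sizes are made proportional to measured processor speeds, with rounding leftovers given to the fastest machines.

```python
def partition_by_speed(n_points, speeds):
    """Split n_points grid points into contiguous subdomains whose sizes
    are (approximately) proportional to each processor's speed."""
    total = sum(speeds)
    sizes = [int(n_points * s / total) for s in speeds]
    # distribute rounding leftovers to the fastest processors first
    leftover = n_points - sum(sizes)
    for i in sorted(range(len(speeds)), key=lambda i: -speeds[i])[:leftover]:
        sizes[i] += 1
    bounds, start = [], 0
    for sz in sizes:
        bounds.append((start, start + sz))
        start += sz
    return bounds
```

For example, with speeds (1, 1, 2) a 100-point domain is split 25/25/50, so the twice-as-fast machine gets twice the work.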
Detection of Unexpected High Correlations between Balance Calibration Loads and Load Residuals
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Volden, T.
2014-01-01
An algorithm was developed for the assessment of strain-gage balance calibration data that makes it possible to systematically investigate potential sources of unexpected high correlations between calibration load residuals and applied calibration loads. The algorithm investigates correlations on a load series by load series basis. The linear correlation coefficient is used to quantify the correlations. It is computed for all possible pairs of calibration load residuals and applied calibration loads that can be constructed for the given balance calibration data set. An unexpected high correlation between a load residual and a load is detected if three conditions are met: (i) the absolute value of the correlation coefficient of a residual/load pair exceeds 0.95; (ii) the maximum of the absolute values of the residuals of a load series exceeds 0.25 % of the load capacity; (iii) the load component of the load series is intentionally applied. Data from a baseline calibration of a six-component force balance is used to illustrate the application of the detection algorithm to a real-world data set. This analysis also showed that the detection algorithm can identify load alignment errors as long as repeat load series are contained in the balance calibration data set that do not suffer from load alignment problems.
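The three detection conditions translate almost directly into code. The sketch below is illustrative only; the function and argument names are assumptions, not from the paper, and the thresholds (0.95 and 0.25 % of capacity) are the ones stated above.

```python
import numpy as np

def flag_high_correlations(residuals, loads, capacities, applied):
    """Flag residual/load pairs that meet the three detection conditions:
    (i) |correlation| > 0.95, (ii) max |residual| > 0.25 % of capacity,
    (iii) the load component is intentionally applied."""
    flags = []
    for i, r in enumerate(residuals):        # residuals of each load series
        for j, L in enumerate(loads):        # applied calibration loads
            if not applied[j]:
                continue                     # condition (iii)
            corr = np.corrcoef(r, L)[0, 1]   # linear correlation coefficient
            if (abs(corr) > 0.95                                  # condition (i)
                    and np.max(np.abs(r)) > 0.0025 * capacities[i]):  # condition (ii)
                flags.append((i, j, corr))
    return flags
```

A pair is reported only when all three conditions hold, mirroring the series-by-series screening described in the abstract.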
Dynamics of load balancing with constraints
NASA Astrophysics Data System (ADS)
Suzuki, Hideyuki
2014-10-01
In this paper, we consider a centralized strategy for scheduling charging patterns of electric vehicles and other batteries in power grids. We formulate it as a load balancing problem with constraints, which tries to distribute the charging loads both spatially and temporally. We show that a variant of a herding system can be applied to load balancing.
NASA Technical Reports Server (NTRS)
Hailperin, Max
1993-01-01
This thesis provides design and analysis of techniques for global load balancing on ensemble architectures running soft-real-time object-oriented applications with statistically periodic loads. It focuses on estimating the instantaneous average load over all the processing elements. The major contribution is the use of explicit stochastic process models for both the loading and the averaging itself. These models are exploited via statistical time-series analysis and Bayesian inference to provide improved average load estimates, and thus to facilitate global load balancing. This thesis explains the distributed algorithms used and provides some optimality results. It also describes the algorithms' implementation and gives performance results from simulation. These results show that our techniques allow more accurate estimation of the global system loading, resulting in fewer object migrations than local methods. Our method is shown to provide superior performance, relative not only to static load-balancing schemes but also to many adaptive methods.
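As a loose illustration of the "stochastic process model plus Bayesian update" idea, here is a scalar Kalman filter that treats the global average load as a random walk observed through noisy local samples. This is a generic stand-in, not the thesis's actual estimator; the noise parameters are arbitrary.

```python
def kalman_load_estimate(samples, q=0.1, r=1.0):
    """Scalar Kalman filter: model the average load as a random walk with
    process noise q, observed through samples with measurement noise r."""
    x, p = samples[0], 1.0       # initial estimate and its variance
    estimates = [x]
    for z in samples[1:]:
        p += q                   # predict: the load drifts between samples
        k = p / (p + r)          # Kalman gain
        x += k * (z - x)         # correct with the new observation
        p *= (1 - k)
        estimates.append(x)
    return estimates
```

The filter smooths noisy samples toward the underlying average, which is the flavor of estimate a global balancer would consume.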
A novel strategy for load balancing of distributed medical applications.
Logeswaran, Rajasvaran; Chen, Li-Choo
2012-04-01
Current trends in medicine, specifically in the electronic handling of medical applications, ranging from digital imaging, paperless hospital administration and electronic medical records, telemedicine, to computer-aided diagnosis, create a burden on the network. Distributed Service Architectures, such as Intelligent Network (IN), Telecommunication Information Networking Architecture (TINA) and Open Service Access (OSA), are able to meet this new challenge. Distribution enables computational tasks to be spread among multiple processors; hence, performance is an important issue. This paper proposes a novel approach to load balancing, the Random Sender Initiated Algorithm, for distribution of tasks among several nodes sharing the same computational object (CO) instances in Distributed Service Architectures. Simulations illustrate that the proposed algorithm produces better network performance than the benchmark load balancing algorithms, the Random Node Selection Algorithm and the Shortest Queue Algorithm, especially under medium and heavily loaded conditions. PMID:20703702
Improving load balance with flexibly assignable tasks
Pinar, Ali; Hendrickson, Bruce
2003-09-09
In many applications of parallel computing, distribution of the data unambiguously implies distribution of work among processors. But there are exceptions where some tasks can be assigned to one of several processors without altering the total volume of communication. In this paper, we study the problem of exploiting this flexibility in assignment of tasks to improve load balance. We first model the problem in terms of network flow and use combinatorial techniques for its solution. Our parametric search algorithms use maximum flow algorithms for probing on a candidate optimal solution value. We describe two algorithms to solve the assignment problem with log W_T and |P| probe calls, where W_T and |P|, respectively, denote the total workload and number of processors. We also define augmenting paths and cuts for this problem, and show that any algorithm based on augmenting paths can be used to find an optimal solution for the task assignment problem. We then consider a continuous version of the problem, and formulate it as a linearly constrained optimization problem, i.e., min ||Ax||_∞, s.t. Bx = d. To avoid solving an intractable ∞-norm optimization problem, we show that in this case minimizing the 2-norm is sufficient to minimize the ∞-norm, which reduces the problem to the well-studied linearly-constrained least squares problem. The continuous version of the problem has the advantage of being easily amenable to parallelization. Our experiments with molecular dynamics and overlapped domain decomposition applications proved the effectiveness of our methods with significant improvements in load balance. We also discuss how our techniques can be enhanced for heterogeneous systems.
Load Balancing in Structured P2P Networks
NASA Astrophysics Data System (ADS)
Zhu, Yingwu
In this chapter we start by addressing the importance and necessity of load balancing in structured P2P networks, due to three main reasons. First, structured P2P networks assume uniform peer capacities while peer capacities are heterogeneous in deployed P2P networks. Second, resorting to pseudo-uniformity of the hash function used to generate node IDs and data item keys leads to imbalanced overlay address space and item distribution. Lastly, placement of data items cannot be randomized in some applications (e.g., range searching). We then present an overview of load aggregation and dissemination techniques that are required by many load balancing algorithms. Two techniques are discussed including tree structure-based approach and gossip-based approach. They make different tradeoffs between estimate/aggregate accuracy and failure resilience. To address the issue of load imbalance, three main solutions are described: virtual server-based approach, power of two choices, and address-space and item balancing. While different in their designs, they all aim to improve balance on the address space and data item distribution. As a case study, the chapter discusses a virtual server-based load balancing algorithm that strives to ensure fair load distribution among nodes and minimize load balancing cost in bandwidth. Finally, the chapter concludes with future research and a summary.
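The "power of two choices" technique surveyed above is easy to sketch. The function below is an illustrative toy, not code from the chapter: each item probes two random bins and joins the lighter one, which dramatically reduces the maximum load compared with a single random choice.

```python
import random

def two_choice_assign(n_items, n_bins, seed=42):
    """'Power of two choices': probe two random bins per item and place the
    item in the currently lighter one."""
    rng = random.Random(seed)
    bins = [0] * n_bins
    for _ in range(n_items):
        a, b = rng.randrange(n_bins), rng.randrange(n_bins)
        bins[a if bins[a] <= bins[b] else b] += 1
    return bins
```

With 1000 items over 100 bins the maximum bin load stays close to the mean of 10, whereas one random choice typically produces noticeably larger spikes.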
Dynamic Load Balancing for Computational Plasticity on Parallel Computers
NASA Technical Reports Server (NTRS)
Pramono, Eddy; Simon, Horst
1994-01-01
The simulation of computational plasticity on a complex structure remains a formidable computational task, especially when a highly nonlinear, complex material model is used. It appears that the computational requirements for such a problem can only be satisfied by massively parallel architectures. In order to effectively harness the tremendous computational power provided by such architectures, it is imperative to investigate and study the algorithmic and implementation issues pertaining to dynamic load balancing for computational plasticity on highly parallel, distributed-memory, multiple-instruction, multiple-data computers. This paper measures the effectiveness of the algorithms developed for handling dynamic load balancing.
14 CFR 23.421 - Balancing loads.
Code of Federal Regulations, 2012 CFR
2012-01-01
14 Aeronautics and Space, FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION, AIRCRAFT AIRWORTHINESS STANDARDS: NORMAL, UTILITY, ACROBATIC, AND COMMUTER CATEGORY AIRPLANES, Structure, Horizontal Stabilizing and Balancing Surfaces, § 23.421 Balancing loads.
NASA Technical Reports Server (NTRS)
Hailperin, M.
1993-01-01
This thesis provides design and analysis of techniques for global load balancing on ensemble architectures running soft-real-time object-oriented applications with statistically periodic loads. It focuses on estimating the instantaneous average load over all the processing elements. The major contribution is the use of explicit stochastic process models for both the loading and the averaging itself. These models are exploited via statistical time-series analysis and Bayesian inference to provide improved average load estimates, and thus to facilitate global load balancing. This thesis explains the distributed algorithms used and provides some optimality results. It also describes the algorithms' implementation and gives performance results from simulation. These results show that the authors' techniques allow more accurate estimation of the global system loading, resulting in fewer object migrations than local methods. The authors' method is shown to provide superior performance, relative not only to static load-balancing schemes but also to many adaptive load-balancing methods. Results from a preliminary analysis of another system and from simulation with a synthetic load provide some evidence of more general applicability.
An Evaluation of the HVAC Load Potential for Providing Load Balancing Service
Lu, Ning
2012-09-30
This paper investigates the potential of providing aggregated intra-hour load balancing services using heating, ventilating, and air-conditioning (HVAC) systems. A direct-load control algorithm is presented. A temperature-priority-list method is used to dispatch the HVAC loads optimally to maintain consumer-desired indoor temperatures and load diversity. Realistic intra-hour load balancing signals were used to evaluate the operational characteristics of the HVAC load under different outdoor temperature profiles and different indoor temperature settings. The number of HVAC units needed is also investigated. Modeling results suggest that the number of HVACs needed to provide a ±1-MW load balancing service 24 hours a day varies significantly with baseline settings, high and low temperature settings, and the outdoor temperatures. The results demonstrate that the intra-hour load balancing service provided by HVAC loads meets the performance requirements and can become a major source of revenue for load-serving entities where the smart grid infrastructure enables direct load control over the HVAC loads.
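A minimal sketch of temperature-priority-list dispatch for the cooling case is shown below. The data layout and function name are hypothetical; the paper's actual control algorithm also manages load diversity and tracking signals.

```python
def dispatch_hvac(units, power_needed):
    """Turn on the units whose rooms are warmest, i.e. closest to their
    upper comfort limit, until the requested balancing power is covered.
    `units` is a list of (room_temp, upper_limit, kw) tuples."""
    # highest priority = smallest margin to the comfort limit
    ranked = sorted(units, key=lambda u: u[1] - u[0])
    on, supplied = [], 0.0
    for temp, limit, kw in ranked:
        if supplied >= power_needed:
            break
        on.append((temp, limit, kw))
        supplied += kw
    return on, supplied
```

Ranking by the margin to the comfort limit keeps indoor temperatures within the consumer-desired band while still serving the balancing signal.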
A comparative analysis of static and dynamic load balancing strategies
NASA Technical Reports Server (NTRS)
Iqbal, M. Ashraf; Bokhari, Shahid H.; Saltz, Joel H.
1986-01-01
The problem of uniformly distributing the load of a parallel program over a multiprocessor system was considered. A program was analyzed whose structure permits the computation of the optimal static solution. Then four strategies for load balancing were described and their performance compared. The strategies are: (1) the optimal static assignment algorithm which is guaranteed to yield the best static solution, (2) the static binary dissection method which is very fast but suboptimal, (3) the greedy algorithm, a static fully polynomial time approximation scheme, which estimates the optimal solution to arbitrary accuracy, and (4) the predictive dynamic load balancing heuristic which uses information on the precedence relationships within the program and outperforms any of the static methods. It is also shown that the overhead incurred by the dynamic heuristic is reduced considerably if it is started off with a static assignment provided by any of the three static strategies.
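As a hedged illustration of greedy static assignment, here is the generic least-loaded-processor (LPT) heuristic. Note this is not the paper's greedy scheme, which is a fully polynomial approximation scheme for chain-structured programs; it only conveys the greedy flavor.

```python
import heapq

def greedy_assign(task_costs, n_procs):
    """Greedy static mapping: sort tasks by decreasing cost and always give
    the next task to the currently least-loaded processor."""
    heap = [(0.0, p) for p in range(n_procs)]   # (load, processor id)
    heapq.heapify(heap)
    assignment = {}
    for t, c in sorted(enumerate(task_costs), key=lambda tc: -tc[1]):
        load, p = heapq.heappop(heap)
        assignment[t] = p
        heapq.heappush(heap, (load + c, p))
    return assignment
```

For costs (5, 4, 3, 3) on two processors this yields loads of 8 and 7, close to the ideal split of 7.5.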
Performance tradeoffs in static and dynamic load balancing strategies
NASA Technical Reports Server (NTRS)
Iqbal, M. A.; Saltz, J. H.; Bokhari, S. H.
1986-01-01
The problem of uniformly distributing the load of a parallel program over a multiprocessor system was considered. A program was analyzed whose structure permits the computation of the optimal static solution. Then four strategies for load balancing were described and their performance compared. The strategies are: (1) the optimal static assignment algorithm which is guaranteed to yield the best static solution, (2) the static binary dissection method which is very fast but sub-optimal, (3) the greedy algorithm, a static fully polynomial time approximation scheme, which estimates the optimal solution to arbitrary accuracy, and (4) the predictive dynamic load balancing heuristic which uses information on the precedence relationships within the program and outperforms any of the static methods. It is also shown that the overhead incurred by the dynamic heuristic is reduced considerably if it is started off with a static assignment provided by either of the other three strategies.
An analysis on the load balancing strategies in wavelength-routed optical networks
NASA Astrophysics Data System (ADS)
Liu, Kai; Fu, Minglei; Le, Zichun
2008-11-01
Routing and wavelength assignment (RWA) is one of the key issues in wavelength-routed optical networks. Although some RWA algorithms perform well for particular network requirements, they usually neglect the performance of the network as a whole, especially its load balance. This can lead to some links bearing excessive lightpaths and traffic load while other links remain idle. In this paper, the load distribution vector (LDV) is first introduced to describe the link loads of the network. The load balance of the whole network is then improved by minimizing the LDV. Based on this, a heuristic load balancing (HLB) strategy is presented. Moreover, a novel RWA algorithm adopting the heuristic load balancing strategy is developed, along with two other RWA algorithms adopting other load balancing strategies. Finally, the three RWA algorithms with different load balancing strategies are compared by simulation on both regular and irregular network topologies. The simulation results show that key performance parameters such as the average variance of link loads, the maximum link load, and the number of established lightpaths are improved by our novel RWA algorithm with the heuristic load balancing strategy.
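The core idea of balancing link loads during route selection can be sketched as picking, among candidate routes, the one whose heaviest link is lightest (ties broken by total load). This is illustrative only; the paper's HLB strategy works on the full load distribution vector.

```python
def least_loaded_path(paths, link_load):
    """Pick the candidate route whose maximum link load is smallest,
    breaking ties by total path load.  `paths` is a list of link-id lists;
    `link_load` maps link id -> current load."""
    return min(paths, key=lambda p: (max(link_load[l] for l in p),
                                     sum(link_load[l] for l in p)))
```

Routing each new lightpath this way steers traffic away from congested links, which is exactly the imbalance the LDV is meant to capture.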
Multithreaded Model for Dynamic Load Balancing Parallel Adaptive PDE Computations
NASA Technical Reports Server (NTRS)
Chrisochoides, Nikos
1995-01-01
We present a multithreaded model for the dynamic load-balancing of numerical, adaptive computations required for the solution of Partial Differential Equations (PDEs) on multiprocessors. Multithreading is used as a means of exploiting concurrency at the processor level in order to tolerate synchronization costs inherent to traditional (non-threaded) parallel adaptive PDE solvers. Our preliminary analysis of parallel, adaptive PDE solvers indicates that multithreading can be used as a mechanism to mask the overheads required for the dynamic balancing of processor workloads with the computations required for the actual numerical solution of the PDEs. Also, multithreading can simplify the implementation of dynamic load-balancing algorithms, a task that is very difficult for traditional data-parallel adaptive PDE computations. Unfortunately, multithreading does not always reduce program complexity, often makes code reuse difficult, and increases software complexity.
Scalable load balancing for massively parallel distributed Monte Carlo particle transport
O'Brien, M. J.; Brantley, P. S.; Joy, K. I.
2013-07-01
In order to run computer simulations efficiently on massively parallel computers with hundreds of thousands or millions of processors, care must be taken that the calculation is load balanced across the processors. Examining the workload of every processor leads to an unscalable algorithm, with run time at least as large as O(N), where N is the number of processors. We present a scalable load balancing algorithm, with run time O(log(N)), that involves iterated processor-pair-wise balancing steps, ultimately leading to a globally balanced workload. We demonstrate scalability of the algorithm up to 2 million processors on the Sequoia supercomputer at Lawrence Livermore National Laboratory. (authors)
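A toy version of iterated pair-wise balancing on a hypercube schedule shows why O(log N) rounds can suffice. This sketch assumes a power-of-two processor count and balances abstract floating-point loads rather than the paper's particle populations.

```python
def pairwise_balance(loads):
    """In round k each processor averages its load with the partner whose
    index differs in bit k (a hypercube schedule); after log2(N) rounds
    every processor holds the global mean.  Assumes len(loads) is a
    power of two."""
    n = len(loads)
    loads = loads[:]
    k = 1
    while k < n:
        for i in range(n):
            j = i ^ k                       # partner differing in bit k
            if i < j:
                avg = (loads[i] + loads[j]) / 2
                loads[i] = loads[j] = avg
        k <<= 1
    return loads
```

Each round only involves processor pairs, so no processor ever needs a global view of the workload, which is the key to scalability.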
Valiant load-balanced robust routing under hose model for WDM mesh networks
NASA Astrophysics Data System (ADS)
Zhang, Xiaoning; Li, Lemin; Wang, Sheng
2006-09-01
In this paper, we propose a Valiant Load-Balanced robust routing scheme for WDM mesh networks under the model of polyhedral uncertainty (i.e., the hose model); the proposed routing scheme is implemented with a traffic grooming approach. Our objective is to maximize the hose-model throughput. A mathematical formulation of Valiant Load-Balanced robust routing is presented, and three fast heuristic algorithms are also proposed. When applying the Valiant Load-Balanced robust routing scheme to WDM mesh networks, a novel traffic-grooming algorithm called MHF (minimizing hop first) is proposed. We compare the three heuristic algorithms with the VPN tree under the hose model. Finally, we demonstrate in the simulation results that MHF with the Valiant Load-Balanced robust routing scheme outperforms the traditional traffic-grooming algorithm in terms of throughput for uniform/non-uniform traffic matrices under the hose model.
Scalable load-balance measurement for SPMD codes
Gamblin, T; de Supinski, B R; Schulz, M; Fowler, R; Reed, D
2008-08-05
Good load balance is crucial on very large parallel systems, but the most sophisticated algorithms introduce dynamic imbalances through adaptation in domain decomposition or use of adaptive solvers. To observe and diagnose imbalance, developers need system-wide, temporally-ordered measurements from full-scale runs. This potentially requires data collection from multiple code regions on all processors over the entire execution. Doing this instrumentation naively can, in combination with the application itself, exceed available I/O bandwidth and storage capacity, and can induce severe behavioral perturbations. We present and evaluate a novel technique for scalable, low-error load balance measurement. This uses a parallel wavelet transform and other parallel encoding methods. We show that our technique collects and reconstructs system-wide measurements with low error. Compression time scales sublinearly with system size and data volume is several orders of magnitude smaller than the raw data. The overhead is low enough for online use in a production environment.
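As a toy illustration of wavelet-based compression of load measurements, here is a one-level Haar transform that keeps only the largest-magnitude coefficients. The paper's parallel, multi-level transform is far more involved; this sketch only shows why smooth load data compresses well with low reconstruction error.

```python
import math

def haar_compress(signal, keep):
    """One-level Haar transform, then zero all but the `keep`
    largest-magnitude coefficients.  Assumes len(signal) is even."""
    s2 = math.sqrt(2.0)
    avgs = [(a + b) / s2 for a, b in zip(signal[::2], signal[1::2])]
    diffs = [(a - b) / s2 for a, b in zip(signal[::2], signal[1::2])]
    coeffs = avgs + diffs
    order = sorted(range(len(coeffs)), key=lambda i: -abs(coeffs[i]))
    kept = set(order[:keep])
    return [c if i in kept else 0.0 for i, c in enumerate(coeffs)]

def haar_reconstruct(coeffs):
    """Invert the one-level Haar transform."""
    s2 = math.sqrt(2.0)
    half = len(coeffs) // 2
    out = []
    for a, d in zip(coeffs[:half], coeffs[half:]):
        out += [(a + d) / s2, (a - d) / s2]
    return out
```

For piecewise-smooth load traces the difference coefficients are near zero, so discarding most coefficients loses little information.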
Parallel tetrahedral mesh adaptation with dynamic load balancing
Oliker, Leonid; Biswas, Rupak; Gabow, Harold N.
2000-06-28
The ability to dynamically adapt an unstructured grid is a powerful tool for efficiently solving computational problems with evolving physical features. In this paper, we report on our experience parallelizing an edge-based adaptation scheme, called 3D-TAG, using message passing. Results show excellent speedup when a realistic helicopter rotor mesh is randomly refined. However, performance deteriorates when the mesh is refined using a solution-based error indicator since mesh adaptation for practical problems occurs in a localized region, creating a severe load imbalance. To address this problem, we have developed PLUM, a global dynamic load balancing framework for adaptive numerical computations. Even though PLUM primarily balances processor workloads for the solution phase, it reduces the load imbalance problem within mesh adaptation by repartitioning the mesh after targeting edges for refinement but before the actual subdivision. This dramatically improves the performance of parallel 3D-TAG since refinement occurs in a more load balanced fashion. We also present optimal and heuristic algorithms that, when applied to the default mapping of a parallel repartitioner, significantly reduce the data redistribution overhead. Finally, portability is examined by comparing performance on three state-of-the-art parallel machines.
Parallel Tetrahedral Mesh Adaptation with Dynamic Load Balancing
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Biswas, Rupak; Gabow, Harold N.
1999-01-01
The ability to dynamically adapt an unstructured grid is a powerful tool for efficiently solving computational problems with evolving physical features. In this paper, we report on our experience parallelizing an edge-based adaptation scheme, called 3D-TAG, using message passing. Results show excellent speedup when a realistic helicopter rotor mesh is randomly refined. However, performance deteriorates when the mesh is refined using a solution-based error indicator since mesh adaptation for practical problems occurs in a localized region, creating a severe load imbalance. To address this problem, we have developed PLUM, a global dynamic load balancing framework for adaptive numerical computations. Even though PLUM primarily balances processor workloads for the solution phase, it reduces the load imbalance problem within mesh adaptation by repartitioning the mesh after targeting edges for refinement but before the actual subdivision. This dramatically improves the performance of parallel 3D-TAG since refinement occurs in a more load balanced fashion. We also present optimal and heuristic algorithms that, when applied to the default mapping of a parallel repartitioner, significantly reduce the data redistribution overhead. Finally, portability is examined by comparing performance on three state-of-the-art parallel machines.
Dynamic load balancing for petascale quantum Monte Carlo applications: The Alias method
Sudheer, C. D.; Krishnan, S.; Srinivasan, A.; Kent, P. R. C.
2013-02-01
Diffusion Monte Carlo is the most accurate widely used Quantum Monte Carlo method for the electronic structure of materials, but it requires frequent load balancing or population redistribution steps to maintain efficiency and avoid accumulation of systematic errors on parallel machines. The load balancing step can be a significant factor affecting performance, and will become more important as the number of processing elements increases. We propose a new dynamic load balancing algorithm, the Alias Method, and evaluate it theoretically and empirically. An important feature of the new algorithm is that the load can be perfectly balanced with each process receiving at most one message. It is also optimal in the maximum size of messages received by any process. We also optimize its implementation to reduce network contention, a process facilitated by the low messaging requirement of the algorithm. Empirical results on the petaflop Cray XT Jaguar supercomputer at ORNL show up to 30% improvement in performance on 120,000 cores. The load balancing algorithm may be straightforwardly implemented in existing codes. The algorithm may also be employed by any method with many near-identical computational tasks that requires load balancing.
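The key property claimed above, that each process receives at most one message, can be illustrated with a simple alias-style pairing. This is a sketch inspired by the classic alias construction, not the authors' exact algorithm: each process below the mean is filled to exactly the mean by one donor and then never touched again.

```python
def alias_balance(loads):
    """Pair under-loaded processes with over-loaded ones so that every
    receiver gets at most one message.  Returns (sender, receiver, amount)
    moves that equalize all loads to the mean."""
    n = len(loads)
    mean = sum(loads) / n
    small = [i for i in range(n) if loads[i] < mean]
    large = [i for i in range(n) if loads[i] > mean]
    loads = loads[:]
    moves = []
    while small and large:
        r, s = small.pop(), large.pop()
        amount = mean - loads[r]       # fill the receiver up to the mean
        moves.append((s, r, amount))
        loads[r] = mean                # r is settled and never revisited
        loads[s] -= amount
        if loads[s] > mean:
            large.append(s)            # sender still has surplus
        elif loads[s] < mean:
            small.append(s)            # sender dipped below the mean
    return moves
```

Since each iteration permanently settles one receiver, the loop ends after at most N steps, and no process ever appears as a receiver twice.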
Load balancing fictions, falsehoods and fallacies
HENDRICKSON,BRUCE A.
2000-05-30
Effective use of a parallel computer requires that a calculation be carefully divided among the processors. This load balancing problem appears in many guises and has been a fervent area of research for the past decade or more. Although great progress has been made, and useful software tools developed, a number of challenges remain. It is the conviction of the author that these challenges will be easier to address if programmers first come to terms with some significant shortcomings in their current perspectives. This paper tries to identify several areas in which the prevailing point of view is either mistaken or insufficient. The goal is to motivate new ideas and directions for this important field.
Performance Analysis and Portability of the PLUM Load Balancing System
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Biswas, Rupak; Gabow, Harold N.
1998-01-01
The ability to dynamically adapt an unstructured mesh is a powerful tool for solving computational problems with evolving physical features; however, an efficient parallel implementation is rather difficult. To address this problem, we have developed PLUM, an automatic portable framework for performing adaptive numerical computations in a message-passing environment. PLUM requires that all data be globally redistributed after each mesh adaption to achieve load balance. We present an algorithm for minimizing this remapping overhead by guaranteeing an optimal processor reassignment. We also show that the data redistribution cost can be significantly reduced by applying our heuristic processor reassignment algorithm to the default mapping of the parallel partitioner. Portability is examined by comparing performance on a SP2, an Origin2000, and a T3E. Results show that PLUM can be successfully ported to different platforms without any code modifications.
Evaluating Zoltan for Static Load Balancing on BlueGene Architectures
Kumfert, G
2007-11-15
The purpose of this TechBase was to evaluate the Zoltan load-balancing library from Sandia National Laboratories as a possible replacement for ParMetis, which had been the load balancer of choice for nearly a decade but does not scale to the full 64,000 processors of BlueGene/L. This evaluation was successful in producing a clear result, but the result was unfortunately negative. Although Zoltan presents a collection of load-balancing algorithms, none were able to meet or exceed the combined scalability and quality of ParMetis on representative datasets.
Dynamic load balancing for petascale quantum Monte Carlo applications: The Alias method
NASA Astrophysics Data System (ADS)
Sudheer, C. D.; Krishnan, S.; Srinivasan, A.; Kent, P. R. C.
2013-02-01
Diffusion Monte Carlo is a highly accurate Quantum Monte Carlo method for electronic structure calculations of materials, but it requires frequent load balancing or population redistribution steps to maintain efficiency on parallel machines. This step can be a significant factor affecting performance, and will become more important as the number of processing elements increases. We propose a new dynamic load balancing algorithm, the Alias Method, and evaluate it theoretically and empirically. An important feature of the new algorithm is that the load can be perfectly balanced with each process receiving at most one message. It is also optimal in the maximum size of messages received by any process. We also optimize its implementation to reduce network contention, a process facilitated by the low messaging requirement of the algorithm: a simple renumbering of the MPI ranks based on proximity and a space filling curve significantly improves the MPI Allgather performance. Empirical results on the petaflop Cray XT Jaguar supercomputer at ORNL show up to 30% improvement in performance on 120,000 cores. The load balancing algorithm may be straightforwardly implemented in existing codes. The algorithm may also be employed by any method with many near identical computational tasks that require load balancing.
Parallel Processing of Adaptive Meshes with Load Balancing
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Biegel, Bryan (Technical Monitor)
2001-01-01
Many scientific applications involve grids that lack a uniform underlying structure. These applications are often also dynamic in nature in that the grid structure significantly changes between successive phases of execution. In parallel computing environments, mesh adaptation of unstructured grids through selective refinement/coarsening has proven to be an effective approach. However, achieving load balance while minimizing interprocessor communication and redistribution costs is a difficult problem. Traditional dynamic load balancers are mostly inadequate because they lack a global view of system loads across processors. In this paper, we propose a novel and general-purpose load balancer that utilizes symmetric broadcast networks (SBN) as the underlying communication topology, and compare its performance with a successful global load balancing environment, called PLUM, specifically created to handle adaptive unstructured applications. Our experimental results on an IBM SP2 demonstrate that the SBN-based load balancer achieves lower redistribution costs than those under PLUM by overlapping processing and data migration.
Exploiting Flexibly Assignable Work to Improve Load Balance
Pinar, Ali; Hendrickson, Bruce
2002-12-09
In many applications of parallel computing, distribution of the data unambiguously implies distribution of work among processors. But there are exceptions where some tasks can be assigned to one of several processors without altering the total volume of communication. In this paper, we study the problem of exploiting this flexibility in assignment of tasks to improve load balance. We first model the problem in terms of network flow and use combinatorial techniques for its solution. Our parametric search algorithms use maximum flow algorithms for probing on a candidate optimal solution value. We describe two algorithms to solve the assignment problem with log W_T and |P| probe calls, where W_T and |P|, respectively, denote the total workload and number of processors. We also define augmenting paths and cuts for this problem, and show that any algorithm based on augmenting paths can be used to find an optimal solution for the task assignment problem. We then consider a continuous version of the problem, and formulate it as a linearly constrained optimization problem, i.e., min ||Ax||_∞, s.t. Bx = d. To avoid solving an intractable ∞-norm optimization problem, we show that in this case minimizing the 2-norm is sufficient to minimize the ∞-norm, which reduces the problem to the well-studied linearly constrained least-squares problem. The continuous version of the problem has the advantage of being easily amenable to parallelization.
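The parametric search described above can be sketched in a few dozen lines: binary-search the candidate bottleneck load C, probing each candidate with a max-flow feasibility test (hence O(log W_T) probes). The graph construction and helper names below are illustrative assumptions (divisible tasks, integer weights, adjacency-matrix capacities), not the paper's exact formulation:

```python
from collections import deque

def max_flow(cap, s, t):
    # Edmonds-Karp on adjacency-matrix capacities (small graphs only).
    n = len(cap)
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow
        # Find the bottleneck along the path and update residuals.
        b, v = float('inf'), t
        while v != s:
            u = parent[v]; b = min(b, cap[u][v]); v = u
        v = t
        while v != s:
            u = parent[v]; cap[u][v] -= b; cap[v][u] += b; v = u
        flow += b

def feasible(weights, eligible, nproc, C):
    # Probe: can the (divisible) tasks be assigned so that every
    # processor load is at most C?  source -> task -> processor -> sink.
    nt = len(weights)
    n = 1 + nt + nproc + 1
    s, t = 0, n - 1
    cap = [[0] * n for _ in range(n)]
    for i, w in enumerate(weights):
        cap[s][1 + i] = w
        for p in eligible[i]:
            cap[1 + i][1 + nt + p] = w
    for p in range(nproc):
        cap[1 + nt + p][t] = C
    return max_flow(cap, s, t) == sum(weights)

def min_bottleneck(weights, eligible, nproc):
    # Parametric search: binary search on C with the max-flow probe.
    lo, hi = 0, sum(weights)
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(weights, eligible, nproc, mid):
            hi = mid
        else:
            lo = mid + 1
    return lo
```

For example, three tasks of weight 4 where only the third may go to either of two processors gives an optimal bottleneck of 6: the flexible task is split evenly between the two processors.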
PLUM: Parallel Load Balancing for Unstructured Adaptive Meshes
NASA Technical Reports Server (NTRS)
Oliker, Leonid
1998-01-01
Dynamic mesh adaption on unstructured grids is a powerful tool for computing large-scale problems that require grid modifications to efficiently resolve solution features. Unfortunately, an efficient parallel implementation is difficult to achieve, primarily due to the load imbalance created by the dynamically-changing nonuniform grid. To address this problem, we have developed PLUM, an automatic portable framework for performing adaptive large-scale numerical computations in a message-passing environment. First, we present an efficient parallel implementation of a tetrahedral mesh adaption scheme. Extremely promising parallel performance is achieved for various refinement and coarsening strategies on a realistic-sized domain. Next we describe PLUM, a novel method for dynamically balancing the processor workloads in adaptive grid computations. This research includes interfacing the parallel mesh adaption procedure based on actual flow solutions to a data remapping module, and incorporating an efficient parallel mesh repartitioner. A significant runtime improvement is achieved by observing that data movement for a refinement step should be performed after the edge-marking phase but before the actual subdivision. We also present optimal and heuristic remapping cost metrics that can accurately predict the total overhead for data redistribution. Several experiments are performed to verify the effectiveness of PLUM on sequences of dynamically adapted unstructured grids. Portability is demonstrated by presenting results on the two vastly different architectures of the SP2 and the Origin 2000. Additionally, we evaluate the performance of five state-of-the-art partitioning algorithms that can be used within PLUM. It is shown that for certain classes of unsteady adaption, globally repartitioning the computational mesh produces higher quality results than diffusive repartitioning schemes. We also demonstrate that a coarse starting mesh produces high quality load balancing, at
Neural Network Algorithm for Particle Loading
J. L. V. Lewandowski
2003-04-25
An artificial neural network algorithm for continuous minimization is developed and applied to the case of numerical particle loading. It is shown that higher-order moments of the probability distribution function can be efficiently renormalized using this technique. A general neural network for the renormalization of an arbitrary number of moments is given.
Dynamic Load-Balancing for Distributed Heterogeneous Computing of Parallel CFD Problems
NASA Technical Reports Server (NTRS)
Ecer, A.; Chien, Y. P.; Boenisch, T.; Akay, H. U.
2000-01-01
The developed methodology is aimed at improving the efficiency of executing block-structured algorithms on parallel, distributed, heterogeneous computers. The basic approach of these algorithms is to divide the flow domain into many subdomains called blocks, and solve the governing equations over these blocks. The dynamic load balancing problem is defined as the efficient distribution of the blocks among the available processors over a period of several hours of computation. In environments with computers of different architectures, operating systems, CPU speeds, memory sizes, loads, and network speeds, balancing the loads and managing the communication between processors becomes crucial. Load balancing software tools for mutually dependent parallel processes have been created to efficiently utilize an advanced computation environment and algorithms. These tools are dynamic in nature because of the changes in the computing environment during execution time. More recently, these tools were extended to a second operating system: NT. In this paper, the problems associated with this application will be discussed. Also, the developed algorithms were combined with the load sharing capability of LSF to efficiently utilize workstation clusters for parallel computing. Finally, results will be presented on running the NASA code ADPAC to demonstrate the developed tools for dynamic load balancing.
Dynamic Load Balancing for Adaptive Meshes using Symmetric Broadcast Networks
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Saini, Subhash (Technical Monitor)
1998-01-01
Many scientific applications involve grids that lack a uniform underlying structure. These applications are often dynamic in the sense that the grid structure significantly changes between successive phases of execution. In parallel computing environments, mesh adaptation of grids through selective refinement/coarsening has proven to be an effective approach. However, achieving load balance while minimizing inter-processor communication and redistribution costs is a difficult problem. Traditional dynamic load balancers are mostly inadequate because they lack a global view across processors. In this paper, we compare a novel load balancer that utilizes symmetric broadcast networks (SBN) to a successful global load balancing environment (PLUM) created to handle adaptive unstructured applications. Our experimental results on the IBM SP2 demonstrate that performance of the proposed SBN load balancer is comparable to results achieved under PLUM.
High-Performance Kinetic Plasma Simulations with GPUs and load balancing
NASA Astrophysics Data System (ADS)
Germaschewski, Kai; Ahmadi, Narges; Abbott, Stephen; Lin, Liwei; Wang, Liang; Bhattacharjee, Amitava; Fox, Will
2014-10-01
We will describe the Plasma Simulation Code (PSC), a modern particle-in-cell code with GPU support and dynamic load balancing capabilities. For 2-d problems, we achieve a speed-up of up to 6 × on the Cray XK7 ``Titan'' using its GPUs over the well-known VPIC code, which has been optimized for conventional CPUs with SIMD support. Our load-balancing algorithm employs a space-filling Hilbert-Peano curve to maintain locality and has been shown to keep the load balanced within approximately 10% in production runs that otherwise slow down by up to 5 × with only static load balancing. PSC is based on the
Graph-balancing algorithms for average consensus over directed networks
NASA Astrophysics Data System (ADS)
Fan, Yuan; Han, Runzhe; Qiu, Jianbin
2016-01-01
Consensus strategies find extensive applications in coordination of robot groups and decision-making of agents. Since balanced graph plays an important role in the average consensus problem and many other coordination problems for directed communication networks, this work explores the conditions and algorithms for the digraph balancing problem. Based on the analysis of graph cycles, we prove that a digraph can be balanced if and only if the null space of its incidence matrix contains positive vectors. Then, based on this result and the corresponding analysis, two weight balance algorithms have been proposed, and the conditions for obtaining a unique balanced solution and a set of analytical results on weight balance problems have been introduced. Then, we point out the relationship between the weight balance problem and the features of the corresponding underlying Markov chain. Finally, two numerical examples are presented to verify the proposed algorithms.
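The characterization above, that a digraph can be balanced if and only if the null space of its incidence matrix contains a positive vector, follows from the balance condition B w = 0, where (B w)_v is the out-weight minus the in-weight at node v. A small illustrative check in pure Python (not the paper's algorithm; the incidence-matrix sign convention is an assumption):

```python
def incidence(n_nodes, edges):
    # B[v][e] = +1 if edge e leaves node v, -1 if it enters v.
    B = [[0] * len(edges) for _ in range(n_nodes)]
    for e, (u, v) in enumerate(edges):
        B[u][e] += 1
        B[v][e] -= 1
    return B

def imbalance(B, w):
    # (B w)_v = out-weight minus in-weight at node v; the digraph is
    # balanced by w exactly when this vector is zero everywhere.
    return [sum(B[v][e] * w[e] for e in range(len(w)))
            for v in range(len(B))]

# A directed 3-cycle 0 -> 1 -> 2 -> 0 is balanced by the positive
# weight vector (1, 1, 1), i.e. (1, 1, 1) lies in the null space of B.
B = incidence(3, [(0, 1), (1, 2), (2, 0)])
```

By contrast, a digraph with a single edge 0 -> 1 has no positive null vector, so no positive weighting can balance it.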
A complete algorithm for fixture loading
Yu, K.; Goldberg, K.Y.
1998-11-01
A fixture is a device for locating and holding parts. Since the initial position and orientation of a part may be uncertain, the act of loading the part into the fixture must compensate for this uncertainty. Machinists often refer to the 3-2-1 rule: place the part onto 3-point contact with a horizontal support plane, slide the part along this plane into 2-point contact with the fixture, then translate along this edge until a 1-point contact uniquely locates the part. This rule of thumb implicitly assumes both sensing and compliance: applied forces change as contacts are detected. In this paper, the authors geometrically formalize robotic fixture loading as a sensor-based compliant assembly problem and give a complete planning algorithm. They consider the class of modular fixtures that use three locators and one clamp (Brost and Goldberg 1996), and discuss a class of robot commands that cause the part to slide and rotate in the support plane. Sensing is achieved with binary contact sensors on each locator; compliance is achieved with a passive spring-loaded mechanism at the robot end effector. The authors extend the theory of sensor-based compliant motion planning to generalized polygonal C-spaces, and give a complete planning algorithm: it is guaranteed to find a loading plan when one exists and to return a negative report otherwise. The authors report on experiments using the resulting plans. Finally, they use this formalization to prove a sufficient condition for the 3-2-1 rule.
A High Performance Load Balance Strategy for Real-Time Multicore Systems
Cho, Keng-Mao; Tsai, Chun-Wei; Chiu, Yi-Shiuan; Yang, Chu-Sing
2014-01-01
Finding ways to distribute workloads to each processor core and efficiently reduce power consumption is of vital importance, especially for real-time systems. In this paper, a novel scheduling algorithm is proposed for real-time multicore systems to balance the computation loads and save power. The developed algorithm, called power and deadline-aware multicore scheduling (PDAMS), simultaneously considers multiple criteria, including a novel scheduling factor and task deadlines. Experiment results show that the proposed algorithm can reduce energy consumption by up to 54.2% and greatly reduce the number of missed deadlines, as compared to the other scheduling algorithms outlined in this paper. PMID:24955382
Global Load Balancing with Parallel Mesh Adaption on Distributed-Memory Systems
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Oliker, Leonid; Sohn, Andrew
1996-01-01
Dynamic mesh adaptation on unstructured grids is a powerful tool for efficiently computing unsteady problems to resolve solution features of interest. Unfortunately, this causes load inbalances among processors on a parallel machine. This paper described the parallel implementation of a tetrahedral mesh adaption scheme and a new global load balancing method. A heuristic remapping algorithm is presented that assigns partitions to processors such that the redistribution coast is minimized. Results indicate that the parallel performance of the mesh adaption code depends on the nature of the adaption region and show a 35.5X speedup on 64 processors of an SP2 when 35 percent of the mesh is randomly adapted. For large scale scientific computations, our load balancing strategy gives an almost sixfold reduction in solver execution times over non-balanced loads. Furthermore, our heuristic remappier yields processor assignments that are less than 3 percent of the optimal solutions, but requires only 1 percent of the computational time.
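The abstract above does not spell out the remapping heuristic, but a plausible greedy sketch assigns each partition to the processor that already holds most of its data, largest overlaps first, so redistribution cost is (heuristically) minimized. The `remap` helper and its similarity-matrix input are illustrative assumptions:

```python
def remap(similarity):
    # similarity[p][q] = amount of partition p's data already resident
    # on processor q.  Greedily pair partitions with processors in
    # decreasing order of retained data.
    pairs = sorted(
        ((similarity[p][q], p, q)
         for p in range(len(similarity))
         for q in range(len(similarity[p]))),
        reverse=True)
    assign, used = {}, set()
    for s, p, q in pairs:
        if p not in assign and q not in used:
            assign[p] = q
            used.add(q)
    return assign
```

With similarity `[[1, 9], [8, 2]]` the heuristic swaps the two partitions across processors, since each has most of its data on the other processor.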
Implementation of GAMMON - An efficient load balancing strategy for a local computer system
NASA Technical Reports Server (NTRS)
Baumgartner, Katherine M.; Kling, Ralph M.; Wah, Benjamin W.
1989-01-01
GAMMON (Global Allocation from Maximum to Minimum in cONstant time), an efficient load-balancing algorithm, is described. GAMMON uses the available broadcast capability of multiaccess networks to implement an efficient search technique for finding hosts with maximal and minimal loads. The search technique has an average overhead which is independent of the number of participating stations. The transition from the theoretical concept to a practical, reliable, and efficient implementation is described.
Balancing Loads Among Robotic-Manipulator Arms
NASA Technical Reports Server (NTRS)
Kreutz, Kenneth K.; Lokshin, Anatole
1990-01-01
Paper presents rigorous mathematical approach to control of multiple robot arms simultaneously grasping one object. Mathematical development focuses on relationship between ability to control degrees of freedom of configuration and ability to control forces within grasped object and robot arms. Understanding of relationship leads to practical control schemes distributing load more equitably among all arms while grasping object with proper nondamaging forces.
MDSLB: A new static load balancing method for parallel molecular dynamics simulations
NASA Astrophysics Data System (ADS)
Wu, Yun-Long; Xu, Xin-Hai; Yang, Xue-Jun; Zou, Shun; Ren, Xiao-Guang
2014-02-01
Large-scale parallelization of molecular dynamics simulations faces challenges that seriously affect simulation efficiency, among which the load imbalance problem is the most critical. In this paper, we propose a new molecular dynamics static load balancing method (MDSLB). By analyzing the characteristics of the short-range force in molecular dynamics programs running in parallel, we divide the short-range force into three force models, and then package the computations of each force model into many tiny computational units called “cell loads”, which provide the basic data structures for our load balancing method. In MDSLB, the spatial region is separated into sub-regions called “local domains”, and the cell loads of each local domain are allocated to every processor in turn. Compared with dynamic load balancing methods, MDSLB can guarantee load balance by executing the algorithm only once at program startup, without migrating loads dynamically. We implemented MDSLB in the OpenFOAM software and tested it on the TianHe-1A supercomputer with 16 to 512 processors. Experimental results show that MDSLB saves 34%-64% of execution time in load-imbalanced cases.
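The allocation step described above, dealing the cell loads of each local domain to the processors in turn, can be sketched as a simple round-robin (an illustrative reading of the abstract; the real MDSLB data structures are richer):

```python
def assign_cell_loads(domains, nproc):
    # domains: list of local domains, each a list of cell-load costs.
    # Deal every cell load to the processors in turn, round-robin,
    # continuing the rotation across domain boundaries.
    procs = [[] for _ in range(nproc)]
    i = 0
    for domain in domains:
        for cell in domain:
            procs[i % nproc].append(cell)
            i += 1
    return procs
```

Because each force model's cell loads are of similar cost, dealing them out in turn yields near-equal per-processor totals without any runtime migration.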
PLUM: Parallel Load Balancing for Adaptive Unstructured Meshes
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Biswas, Rupak; Saini, Subhash (Technical Monitor)
1998-01-01
Mesh adaption is a powerful tool for efficient unstructured-grid computations but causes load imbalance among processors on a parallel machine. We present a novel method called PLUM to dynamically balance the processor workloads with a global view. This paper presents the implementation and integration of all major components within our dynamic load balancing strategy for adaptive grid calculations. Mesh adaption, repartitioning, processor assignment, and remapping are critical components of the framework that must be accomplished rapidly and efficiently so as not to cause a significant overhead to the numerical simulation. A data redistribution model is also presented that predicts the remapping cost on the SP2. This model is required to determine whether the gain from a balanced workload distribution offsets the cost of data movement. Results presented in this paper demonstrate that PLUM is an effective dynamic load balancing strategy which remains viable on a large number of processors.
Migration impact on load balancing - an experience on Amoeba
Zhu, W.; Socko, P.
1996-12-31
Load balancing has been extensively studied by simulation, and most of these studies reported positive results. With the increasing availability of distributed systems, a few experiments have been carried out on real systems. These experimental studies rely either on task initiation alone or on task initiation plus task migration. In this paper, we present the results of an experimental study of load balancing using a centralized policy to manage the load on a set of processors, carried out on an Amoeba system consisting of a set of 386s linked by 10 Mbps Ethernet. On one hand, the results indicate the necessity of a load-balancing facility for a distributed system. On the other hand, the results question the impact of using process migration to increase system performance under the configuration used in our experiments.
Incorporating Load Balancing Spatial Analysis Into Xml-Based Webgis
NASA Astrophysics Data System (ADS)
Huang, H.
2012-07-01
This article aims to introduce load-balancing spatial analysis into XML-based WebGIS. In contrast to other approaches that implement spatial queries and analyses solely on the server or browser side, load-balancing spatial analysis carries out spatial analysis on either the server or the browser side depending on the execution costs (i.e., network transmission costs and computational costs). In this article, key elements of load-balancing middleware are investigated, and a relevant solution is proposed. Comparison with the server-side solution, the browser-side solution, and our former solution shows that the proposed solution can optimize the execution of spatial analysis, greatly ease the network transmission load between the server and browser sides, and therefore lead to better performance. The proposed solution enables users to access high-performance spatial analysis simply via a web browser.
Dynamic Load Balancing for Adaptive Computations on Distributed-Memory Machines
NASA Technical Reports Server (NTRS)
1999-01-01
Dynamic load balancing is central to adaptive mesh-based computations on large-scale parallel computers. The principal investigator has investigated various issues of the dynamic load balancing problem under NASA JOVE and JAG grants. The major accomplishments of the project are two graph partitioning algorithms and a load balancing framework. The S-HARP dynamic graph partitioner is the fastest among the known dynamic graph partitioners to date. It can partition a graph of over 100,000 vertices in 0.25 seconds on a 64-processor Cray T3E distributed-memory multiprocessor while maintaining scalability of over 16-fold speedup. Other known and widely used dynamic graph partitioners take a second or two while giving low scalability of a few-fold speedup on 64 processors. These results have been published in journals and peer-reviewed flagship conferences.
Load Balancing Unstructured Adaptive Grids for CFD Problems
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Oliker, Leonid
1996-01-01
Mesh adaption is a powerful tool for efficient unstructured-grid computations but causes load imbalance among processors on a parallel machine. A dynamic load balancing method is presented that balances the workload across all processors with a global view. After each parallel tetrahedral mesh adaption, the method first determines if the new mesh is sufficiently unbalanced to warrant a repartitioning. If so, the adapted mesh is repartitioned, with new partitions assigned to processors so that the redistribution cost is minimized. The new partitions are accepted only if the remapping cost is compensated by the improved load balance. Results indicate that this strategy is effective for large-scale scientific computations on distributed-memory multiprocessors.
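The accept/reject logic described above reduces to two small decisions, sketched here with assumed threshold and cost/gain inputs (the actual metrics in the paper are more detailed):

```python
def should_repartition(loads, tol=1.2):
    # Repartition only if the new mesh is sufficiently unbalanced:
    # here, if the max/average load ratio exceeds a tolerance.
    avg = sum(loads) / len(loads)
    return max(loads) / avg > tol

def accept_remap(balance_gain, remap_cost):
    # Accept the new partitioning only if the improved load balance
    # compensates for the cost of moving the data.
    return balance_gain > remap_cost
```

The tolerance `tol` and the units of `balance_gain`/`remap_cost` are placeholders; in practice both would be derived from measured solver and redistribution times.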
BALANCING THE LOAD: A VORONOI BASED SCHEME FOR PARALLEL COMPUTATIONS
Steinberg, Elad; Yalinewich, Almog; Sari, Re'em; Duffell, Paul
2015-01-01
One of the key issues when running a simulation on multiple CPUs is maintaining a proper load balance throughout the run and minimizing communication between CPUs. We propose a novel method of utilizing a Voronoi diagram to achieve a nearly perfect load balance without the need for any global redistribution of data. As a showcase, we implement our method in RICH, a two-dimensional moving-mesh hydrodynamical code, but it can be extended trivially to other codes in two or three dimensions. Our tests show that this method is indeed efficient and can be used in a large variety of existing hydrodynamical codes.
Fuzzy Pool Balance: An algorithm to achieve a two dimensional balance in distributed storage systems
NASA Astrophysics Data System (ADS)
Wu, Wenjing; Chen, Gang
2014-06-01
The limitation of scheduling modules and the gradual addition of disk pools in distributed storage systems often result in imbalances among their disk pools in terms of both disk usage and file count. This can cause various problems to the storage system such as single point of failure, low system throughput and imbalanced resource utilization and system loads. An algorithm named Fuzzy Pool Balance (FPB) is proposed here to solve this problem. The input of FPB is the current file distribution among disk pools and the output is a file migration plan indicating what files are to be migrated to which pools. FPB uses an array to classify the files by their sizes. The file classification array is dynamically calculated with a defined threshold named Tmax that defines the allowed pool disk usage deviations. File classification is the basis of file migration. FPB also defines the Immigration Pool (IP) and Emigration Pool (EP) according to the pool disk usage and File Quantity Ratio (FQR) that indicates the percentage of each category of files in each disk pool, so files with higher FQR in an EP will be migrated to IP(s) with a lower FQR of this file category. To verify this algorithm, we implemented FPB on an ATLAS Tier2 dCache production system. The results show that FPB can achieve a very good balance in both free space and file counts, and adjusting the threshold value Tmax and the correction factor to the average FQR can achieve a tradeoff between free space and file count.
A novel load balancing method for hierarchical federation simulation system
NASA Astrophysics Data System (ADS)
Bin, Xiao; Xiao, Tian-yuan
2013-07-01
In contrast to a single-federation HLA framework, a hierarchical federation framework can improve the performance of a large-scale simulation system to a certain degree by distributing load across several RTIs. However, in a hierarchical federation framework the RTI is still the center of message exchange in the federation, and it remains the performance bottleneck: the data explosion in a large-scale HLA federation may overload the RTI, causing performance degradation or even fatal errors. To address this problem, this paper proposes a load balancing method for hierarchical federation simulation systems based on queuing theory, comprising three main modules: queue-length prediction, a load-control policy, and a controller. The method improves the resource utilization of federate nodes, and improves the performance of the HLA simulation system by balancing load across the RTIG and federates. Finally, experimental results are presented to demonstrate the method's efficient control.
A location selection policy of live virtual machine migration for power saving and load balancing.
Zhao, Jia; Ding, Yan; Xu, Gaochao; Hu, Liang; Dong, Yushuang; Fu, Xiaodong
2013-01-01
Green cloud data centers have become a research hotspot of virtualized cloud computing architecture, and load balancing has also been one of the most important goals in cloud data centers. Since live virtual machine (VM) migration technology is widely used and studied in cloud computing, we have focused on location selection (migration policy) of live VM migration for power saving and load balancing. We propose a novel approach, MOGA-LS, a heuristic and self-adaptive multiobjective optimization algorithm based on an improved genetic algorithm (GA). This paper presents the specific design and implementation of MOGA-LS, such as the design of the genetic operators, fitness values, and elitism. We have introduced Pareto dominance theory and the simulated annealing (SA) idea into MOGA-LS and have presented the specific process to obtain the final solution; thus, the whole approach achieves a long-term efficient optimization for power saving and load balancing. The experimental results demonstrate that MOGA-LS evidently reduces the total incremental power consumption, better protects the performance of VM migration, and achieves system load balancing compared with the existing research. It makes live VM migration more effective and meaningful. PMID:24348165
On delay adjustment for dynamic load balancing in distributed virtual environments.
Deng, Yunhua; Lau, Rynson W H
2012-04-01
Distributed virtual environments (DVEs) have become very popular in recent years due to the rapid growth of applications such as massively multiplayer online games (MMOGs). As the number of concurrent users increases, scalability becomes one of the major challenges in designing an interactive DVE system. One solution to this scalability problem is to adopt a multi-server architecture. While some methods focus on the quality of partitioning the load among the servers, others focus on the efficiency of the partitioning process itself. However, all these methods neglect the effect of network delay among the servers on the accuracy of the load balancing solutions. As we show in this paper, the change in the load of the servers due to network delay affects the performance of the load balancing algorithm. In this work, we conduct a formal analysis of this problem and discuss two efficient delay adjustment schemes to address it. Our experimental results show that our proposed schemes can significantly improve the performance of the load balancing algorithm with negligible computational overhead. PMID:22402679
Evaluation of delay performance in valiant load-balancing network
NASA Astrophysics Data System (ADS)
Yu, Yingdi; Jin, Yaohui; Cheng, Hong; Gao, Yu; Sun, Weiqiang; Guo, Wei; Hu, Weisheng
2007-11-01
Network traffic grows in an unpredictable way, which forces network operators to over-provision their backbone networks in order to meet increasing demands. To allow for new users, applications, and unexpected failures, utilization is typically below 30% [1]. There are two methods aimed at solving this problem. The first is to adjust link capacity with the variation of traffic; however, in an optical network this requires a rapid signaling scheme and large buffers. The second is to use the statistical multiplexing function of IP routers connected point-to-point by optical links to counteract the effects of traffic variation [2], but the routing mechanism would be much more complex and would introduce more overhead into the backbone network. To exploit the potential of the network and reduce its overhead, Valiant load balancing has been proposed for backbone networks in order to enhance network utilization and simplify the routing process. Raising network utilization and improving throughput inevitably influence the end-to-end delay; however, studies of delay in load-balancing networks are lacking. In the work presented in this paper, we study the delay performance in a Valiant load-balancing network, and isolate the queuing delay for modeling and detailed analysis. We design the architecture of a switch with load-balancing capability for our simulation and experiments, and analyze the relationship between switch architecture and delay performance.
Work Stealing and Persistence-based Load Balancers for Iterative Overdecomposed Applications
Lifflander, Jonathan; Krishnamoorthy, Sriram; Kale, Laxmikant
2012-06-18
Applications often involve iterative execution of identical or slowly evolving calculations. Such applications require good initial load balance coupled with efficient periodic rebalancing. In this paper, we consider the design and evaluation of two distinct approaches to addressing this challenge: persistence-based load balancing and work stealing. The work to be performed is overdecomposed into tasks, enabling automatic rebalancing by the middleware. We present a hierarchical persistence-based rebalancing algorithm that performs localized incremental rebalancing. We also present an active-message-based retentive work stealing algorithm optimized for iterative applications on distributed memory machines. These are shown to incur low overheads and achieve over 90% efficiency on 76,800 cores.
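Persistence-based rebalancing exploits the fact that task costs measured in one iteration predict the next. A common greedy sketch of the idea (longest-processing-time-first; the paper's hierarchical, incremental algorithm is more elaborate) assigns tasks, largest measured time first, to the currently least-loaded processor:

```python
def persistence_rebalance(task_times, nproc):
    # task_times: execution times measured in the previous iteration.
    # Greedy LPT: place each task, largest first, on the processor
    # with the smallest accumulated load so far.
    bins = [0.0] * nproc                     # predicted load per processor
    assign = [[] for _ in range(nproc)]      # task ids per processor
    for tid, t in sorted(enumerate(task_times), key=lambda kv: -kv[1]):
        p = min(range(nproc), key=lambda q: bins[q])
        bins[p] += t
        assign[p].append(tid)
    return assign, bins
```

For four tasks with measured times 4, 3, 3, 2 on two processors, the sketch produces two bins of load 6 each, a perfect balance for this input.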
Computational evaluation of load carriage effects on gait balance stability.
Mummolo, Carlotta; Park, Sukyung; Mangialardi, Luigi; Kim, Joo H
2016-08-01
Evaluating the effects of load carriage on gait balance stability is important in various applications. However, their quantification has not been rigorously addressed in the current literature, partially due to the lack of relevant computational indices. The novel Dynamic Gait Measure (DGM) characterizes gait balance stability by quantifying the relative effects of inertia in terms of zero-moment point, ground projection of center of mass, and time-varying foot support region. In this study, the DGM is formulated in terms of the gait parameters that explicitly reflect the gait strategy of a given walking pattern and is used for computational evaluation of the distinct balance stability of loaded walking. The observed gait adaptations caused by load carriage (decreased single support duration, inertia effects, and step length) result in decreased DGM values (p < 0.0001), which indicate that loaded walking motions are more statically stable compared with the unloaded normal walking. Comparison of the DGM with other common gait stability indices (the maximum Floquet multiplier and the margin of stability) validates the unique characterization capability of the DGM, which is consistently informative of the presence of the added load. PMID:26691823
Development of Load Balancing Systems in a Parallel MRP System
NASA Astrophysics Data System (ADS)
Tsukishima, Takahiro; Sato, Masahiro; Onari, Hisashi
The application of parallel computing to MRP (Material Requirements Planning) is essential for achieving, in the near future, real-time demand forecasting across an entire supply chain consisting of multiple enterprises. Here we examine MRP on a loosely coupled multi-computer system. New methods of synchronization, load balancing, and data access are required to maintain high parallel efficiency as the number of processing elements (PEs) increases. In this paper, load balancing and data access methods are proposed. The prototype system maintains 96% parallel efficiency for an MRP problem with 120,000 items on a 6-PE configuration and is robust against unbalanced loads; its processing speed increases in a nearly linear fashion.
Preference based load balancing as an outpatient appointment scheduling aid.
Premarathne, Uthpala Subodhani; Han, Fengling; Khalil, Ibrahim; Tari, Zahir
2013-01-01
Load balancing is a performance improvement aid in various distributed-system applications. In this paper we propose a preference-based load balancing strategy as a scheduling aid for an outpatient clinic of an online medical consultation system. The performance objectives are to maximize throughput and minimize waiting time. Patients provide a standard set of preferences prior to scheduling an appointment; the preferences are rated on a scale, and each service request receives a corresponding preference score. The available doctors are likewise classified into classes based on their clinical expertise, the nature of their past diagnoses, and the types of patients consulted. The preference scores are then mapped onto these classes and the appointment is scheduled. The proposed scheme was modeled as a queuing system in Matlab, using modules from the Matlab SimEvents library. Performance was analysed in terms of average waiting time and utilization. The results reveal that the preference-based load balancing scheme markedly reduces waiting time and significantly improves utilization under different load conditions. PMID:24109933
Dynamic Load Balancing on Single- and Multi-GPU Systems
Chen, Long; Villa, Oreste; Krishnamoorthy, Sriram; Gao, Guang R.
2010-04-19
The computational power provided by many-core graphics processing units (GPUs) has been exploited in many applications. However, the programming techniques supported and employed on these GPUs are not sufficient to address problems exhibiting irregular and unbalanced workloads. The problem is exacerbated when trying to effectively exploit multiple GPUs, which are commonly available in many modern systems. In this paper, we propose a task-based dynamic load-balancing solution for single- and multi-GPU systems. The solution allows load balancing at a finer granularity than what is supported in existing APIs such as NVIDIA's CUDA. We evaluate our approach using both micro-benchmarks and a molecular dynamics application that exhibits significant load imbalance. Experimental results with a single-GPU configuration show that our fine-grained task solution can utilize the hardware more efficiently than the CUDA scheduler for unbalanced workloads. On multi-GPU systems, our solution achieves near-linear speedup, load balance, and significant performance improvement over techniques based on standard CUDA APIs.
Towards a Load Balancing Middleware for Automotive Infotainment Systems
NASA Astrophysics Data System (ADS)
Khaluf, Yara; Rettberg, Achim
In this paper a middleware for distributed automotive systems is developed. The goal of this middleware is to support load balancing and service optimization in automotive infotainment and entertainment systems. These systems provide navigation, telecommunication, Internet, and audio/video services, among many others. The middleware applies dynamic load balancing mechanisms, together with service-quality optimization mechanisms, to improve system performance and, where possible, the quality of the services themselves.
Efficient Load Balancing and Data Remapping for Adaptive Grid Calculations
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Biswas, Rupak
1997-01-01
Mesh adaption is a powerful tool for efficient unstructured-grid computations but causes load imbalance among processors on a parallel machine. We present a novel method to dynamically balance the processor workloads with a global view. This paper presents, for the first time, the implementation and integration of all major components within our dynamic load balancing strategy for adaptive grid calculations. Mesh adaption, repartitioning, processor assignment, and remapping are critical components of the framework that must be accomplished rapidly and efficiently so as not to cause a significant overhead to the numerical simulation. Previous results indicated that mesh repartitioning and data remapping are potential bottlenecks for performing large-scale scientific calculations. We resolve these issues and demonstrate that our framework remains viable on a large number of processors.
Monitoring dynamic loads on wind tunnel force balances
NASA Technical Reports Server (NTRS)
Ferris, Alice T.; White, William C.
1989-01-01
Two devices have been developed at NASA Langley to monitor the dynamic loads incurred during wind-tunnel testing. The Balance Dynamic Display Unit (BDDU) displays and monitors the combined static and dynamic forces and moments in the orthogonal axes. The Balance Critical Point Analyzer scales and sums each normalized signal from the BDDU to obtain combined dynamic and static signals that represent the dynamic loads at predefined high-stress points. Each instrument multiplexes six analog signals so that each channel is displayed sequentially as one-sixth of the horizontal axis on a single oscilloscope trace. This display format permits the operator to quickly and easily monitor the combined static and dynamic levels of up to six channels at the same time.
Data Partitioning and Load Balancing in Parallel Disk Systems
NASA Technical Reports Server (NTRS)
Scheuermann, Peter; Weikum, Gerhard; Zabback, Peter
1997-01-01
Parallel disk systems provide opportunities for exploiting I/O parallelism in two possible ways, namely via inter-request and intra-request parallelism. In this paper we discuss the main issues in performance tuning of such systems, namely striping and load balancing, and show their relationship to response time and throughput. We outline the main components of an intelligent, self-reliant file system that aims to optimize striping by taking into account the requirements of the applications, and performs load balancing by judicious file allocation and dynamic redistribution of the data when access patterns change. Our system uses simple but effective heuristics that incur little overhead. We present performance experiments based on synthetic workloads and real-life traces.
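The striping side of the tuning problem can be sketched with a round-robin block-to-disk mapping: consecutive stripe units are dealt out to disks in rotation, so a single large request spans several disks (intra-request parallelism) while independent small requests tend to land on different disks (inter-request parallelism). The stripe-unit size and disk count below are illustrative:

```python
def block_to_disk(block, n_disks, stripe_unit=4):
    """Round-robin striping: logical blocks are grouped into stripe
    units, and the units are assigned to disks in rotation."""
    return (block // stripe_unit) % n_disks

def disks_touched(first_block, n_blocks, n_disks, stripe_unit=4):
    """Set of disks a contiguous request [first_block, first_block +
    n_blocks) would read from; large requests fan out over many disks."""
    return {block_to_disk(b, n_disks, stripe_unit)
            for b in range(first_block, first_block + n_blocks)}
```

With 4 disks and a stripe unit of 4 blocks, a 16-block request engages all four disks, while a 4-block request stays on one disk and leaves the others free for concurrent requests.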
Load Balancing Strategies for Multi-Block Overset Grid Applications
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Biswas, Rupak; Lopez-Benitez, Noe; Biegel, Bryan (Technical Monitor)
2002-01-01
The multi-block overset grid method is a powerful technique for high-fidelity computational fluid dynamics (CFD) simulations about complex aerospace configurations. The solution process uses a grid system that discretizes the problem domain by using separately generated but overlapping structured grids that periodically update and exchange boundary information through interpolation. For efficient high performance computations of large-scale realistic applications using this methodology, the individual grids must be properly partitioned among the parallel processors. Overall performance, therefore, largely depends on the quality of load balancing. In this paper, we present three different load balancing strategies for overset grids and analyze their effects on the parallel efficiency of a Navier-Stokes CFD application running on an SGI Origin2000 machine.
Economic load dispatch using improved gravitational search algorithm
NASA Astrophysics Data System (ADS)
Huang, Yu; Wang, Jia-rong; Guo, Feng
2016-03-01
This paper presents an improved gravitational search algorithm (IGSA) for solving the economic load dispatch (ELD) problem. To avoid premature convergence to local optima, mutation processing is applied to the GSA. The IGSA is applied to an ELD problem with valve-point effects, comprising 13 generators and a load demand of 2520 MW. Calculation results show that the proposed algorithm handles the ELD problem with high stability.
Dual strain gage balance system for measuring light loads
NASA Technical Reports Server (NTRS)
Roberts, Paul W. (Inventor)
1991-01-01
A dual strain gage balance system for measuring normal and axial forces and pitching moment of a metric airfoil model imparted by aerodynamic loads applied to the airfoil model during wind tunnel testing includes a pair of non-metric panels being rigidly connected to and extending towards each other from opposite sides of the wind tunnel, and a pair of strain gage balances, each connected to one of the non-metric panels and to one of the opposite ends of the metric airfoil model for mounting the metric airfoil model between the pair of non-metric panels. Each strain gage balance has a first measuring section for mounting a first strain gage bridge for measuring normal force and pitching moment and a second measuring section for mounting a second strain gage bridge for measuring axial force.
Selective randomized load balancing and mesh networks with changing demands
NASA Astrophysics Data System (ADS)
Shepherd, F. B.; Winzer, P. J.
2006-05-01
We consider the problem of building cost-effective networks that are robust to dynamic changes in demand patterns. We compare several architectures using demand-oblivious routing strategies. Traditional approaches include single-hop architectures based on a (static or dynamic) circuit-switched core infrastructure and multihop (packet-switched) architectures based on point-to-point circuits in the core. To address demand uncertainty, we seek minimum cost networks that can carry the class of hose demand matrices. Apart from shortest-path routing, Valiant's randomized load balancing (RLB), and virtual private network (VPN) tree routing, we propose a third, highly attractive approach: selective randomized load balancing (SRLB). This is a blend of dual-hop hub routing and randomized load balancing that combines the advantages of both architectures in terms of network cost, delay, and delay jitter. In particular, we give empirical analyses for the cost (in terms of transport and switching equipment) for the discussed architectures, based on three representative carrier networks. Of these three networks, SRLB maintains the resilience properties of RLB while achieving significant cost reduction over all other architectures, including RLB and multihop Internet protocol/multiprotocol label switching (IP/MPLS) networks using VPN-tree routing.
An efficient algorithm using matrix methods to solve wind tunnel force-balance equations
NASA Technical Reports Server (NTRS)
Smith, D. L.
1972-01-01
An iterative procedure applying matrix methods to accomplish an efficient algorithm for automatic computer reduction of wind-tunnel force-balance data has been developed. Balance equations are expressed in a matrix form that is convenient for storing balance sensitivities and interaction coefficient values for online or offline batch data reduction. The convergence of the iterative values to a unique solution of this system of equations is investigated, and it is shown that for balances which satisfy the criteria discussed, this type of solution does occur. Methods for making sensitivity adjustments and initial load effect considerations in wind-tunnel applications are also discussed, and the logic for determining the convergence accuracy limits for the iterative solution is given. This more efficient data reduction program is compared with the technique presently in use at the NASA Langley Research Center, and computational times on the order of one-third or less are demonstrated by use of this new program.
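The iterative approach can be illustrated on a toy two-gage balance whose readings contain a second-order interaction term: starting from the loads implied by the sensitivities alone, each pass subtracts the interaction contribution and re-solves until the estimates stop changing. All sensitivities, interaction coefficients, and readings below are invented for illustration and are not from the NASA Langley procedure:

```python
def solve_balance(readings, sens, interact, tol=1e-12, max_iter=100):
    """Recover loads (L0, L1) from gage readings modeled as
    r_i = s_i * L_i + k_i * L0 * L1 by fixed-point iteration:
    L_i <- (r_i - k_i * L0 * L1) / s_i until the update converges."""
    r0, r1 = readings
    s0, s1 = sens
    k0, k1 = interact
    L0, L1 = r0 / s0, r1 / s1      # initial guess: ignore interactions
    for _ in range(max_iter):
        n0 = (r0 - k0 * L0 * L1) / s0
        n1 = (r1 - k1 * L0 * L1) / s1
        if abs(n0 - L0) < tol and abs(n1 - L1) < tol:
            return n0, n1
        L0, L1 = n0, n1
    return L0, L1
```

Because the interaction coefficients are small relative to the sensitivities, the iteration is a contraction and converges to a unique solution, mirroring the convergence criterion the paper investigates.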
Load Balancing at Emergency Departments using ‘Crowdinforming’
Friesen, Marcia R; Strome, Trevor; Mukhi, Shamir; McLoed, Robert
2011-01-01
Background: Emergency Department (ED) overcrowding is an important healthcare issue facing increasing public and regulatory scrutiny in Canada and around the world. Many approaches to alleviate excessive waiting times and lengths of stay have been studied. In theory, optimal ED patient flow may be assisted by balancing patient loads between EDs (in essence, spreading patients more evenly throughout the system). This investigation uses simulation to explore “crowdinforming” as the basis for a process control strategy aimed at balancing patient loads between six EDs within a mid-sized Canadian city. Methods: Anonymous patient visit data comprising 120,000 ED patient visits over six months to six ED facilities were obtained from the region’s Emergency Department Information System (EDIS) to (1) determine trends in ED visits and interactions between parameters; (2) develop a process control strategy integrating crowdinforming; and (3) apply and evaluate the model in a simulated environment to explore the potential impact on patient self-redirection and load balancing between EDs. Results: As in reality, the available data and the resulting model showed that many factors impact ED patient flow. Initial results suggest that, for the particular data set used, ED arrival rates were the most useful metric of ED ‘busyness’ in a process control strategy, and that Emergency Department performance may benefit from load balancing efforts. Conclusions: The simulation supports the use of crowdinforming as a potential tool in a process control strategy to balance patient loads between EDs. The work also revealed that several parameters intuitively expected to be meaningful metrics of ED ‘busyness’ were not, highlighting the importance of finding parameters meaningful within one’s particular data set. The information provided in the crowdinforming model is already available in a local context at some ED sites
Population-based learning of load balancing policies for a distributed computer system
NASA Technical Reports Server (NTRS)
Mehra, Pankaj; Wah, Benjamin W.
1993-01-01
Effective load-balancing policies use dynamic resource information to schedule tasks in a distributed computer system. We present a novel method for automatically learning such policies. At each site in our system, we use a comparator neural network to predict the relative speedup of an incoming task using only the resource-utilization patterns obtained prior to the task's arrival. Outputs of these comparator networks are broadcast periodically over the distributed system, and the resource schedulers at each site use these values to determine the best site for executing an incoming task. The delays incurred in propagating workload information and tasks from one site to another, as well as the dynamic and unpredictable nature of workloads in multiprogrammed multiprocessors, may cause the workload pattern at the time of execution to differ from patterns prevailing at the times of load-index computation and decision making. Our load-balancing policy accommodates this uncertainty by using certain tunable parameters. We present a population-based machine-learning algorithm that adjusts these parameters in order to achieve high average speedups with respect to local execution. Our results show that our load-balancing policy, when combined with the comparator neural network for workload characterization, is effective in exploiting idle resources in a distributed computer system.
Priority-rotating DBA with adaptive load balance for reconfigurable WDM/TDM PON
NASA Astrophysics Data System (ADS)
Xia, Weidong; Gan, Chaoqin; Xie, Weilun; Ni, Cuiping
2015-12-01
For a wavelength-division multiplexing/time-division multiplexing passive optical network (WDM/TDM PON) architecture that implements wavelength sharing and traffic redirection, a priority-rotating dynamic bandwidth allocation (DBA) algorithm is proposed in this paper. The priority of each optical network unit (ONU) is set and rotated to meet bandwidth demands and guarantee fairness among ONUs. Bandwidth allocation over priority queues avoids bandwidth monopolization and over-allocation. Bandwidth allocation for high-load situations and redirected traffic is also discussed, achieving adaptive load balance across wavelengths and among ONUs. Simulation results show that the proposed algorithm performs well in terms of throughput rate and average packet delay.
NASA Technical Reports Server (NTRS)
Richardson, J.; Labbe, M.; Belala, Y.; Leduc, Vincent
1994-01-01
The requirement for improving aircraft utilization and responsiveness in airlift operations has been recognized for quite some time by the Canadian Forces. To date, the utilization of scarce airlift resources has been planned mainly through manpower-intensive manual methods combined with the expertise of highly qualified personnel. In this paper, we address the problem of facilitating the load planning process for military cargo aircraft through the development of a computer-based system. We introduce TALBAS (Transport Aircraft Loading and BAlancing System), a knowledge-based system designed to assist personnel in preparing valid load plans for the C130 Hercules aircraft. The main features of this system, accessible through a convivial graphical user interface, consist of the automatic generation of valid cargo arrangements given a list of items to be transported, the user-definition of load plans, and the automatic validation of such load plans.
NASA Astrophysics Data System (ADS)
Nagiyev, A. E.; Sherstnyova, A. I.; Botygin, I. A.; Galanova, N. Y.
2016-06-01
The results of statistical model experiments on various load-balancing algorithms in distributed computing systems are presented. Software tools were developed that allow creating a virtual infrastructure of a distributed computing system in accordance with the intended objective of the research, focused on multi-agent and multithreaded data processing. A scheme for controlling the processing of requests from terminal devices is proposed, providing effective dynamic horizontal scaling of computing power under peak loads.
Adaptive dynamic load-balancing with irregular domain decomposition for particle simulations
NASA Astrophysics Data System (ADS)
Begau, Christoph; Sutmann, Godehard
2015-05-01
We present a flexible and fully adaptive dynamic load-balancing scheme, which is designed for particle simulations of three-dimensional systems with short-ranged interactions. The method is based on domain decomposition with non-orthogonal non-convex domains, which are constructed based on a local repartitioning of computational work between neighbouring processors. Domains are dynamically adjusted in a flexible way under the condition that the original topology is not changed, i.e. neighbour relations between domains are retained, which guarantees a fixed communication pattern for each domain during a simulation. Extensions of this scheme are discussed and illustrated with examples, which generalise the communication patterns and do not fully restrict data exchange to direct neighbours. The proposed method relies on a linked cell algorithm, which makes it compatible with existing implementations in particle codes and does not modify the underlying algorithm for calculating the forces between particles. The method has been implemented into the molecular dynamics community code IMD and performance has been measured for various molecular dynamics simulations of systems representing realistic problems from materials science. It is found that the method effectively balances the work between processors in simulations with strongly inhomogeneous and dynamically changing particle distributions, which results in a significant increase in the efficiency of the parallel code compared both to unbalanced simulations and to conventional load-balancing strategies.
Strain gage selection in loads equations using a genetic algorithm
NASA Technical Reports Server (NTRS)
1994-01-01
Traditionally, structural loads are measured using strain gages. A loads calibration test must be done before loads can be accurately measured. In one measurement method, a series of point loads is applied to the structure, and loads equations are derived via the least squares curve fitting algorithm using the strain gage responses to the applied point loads. However, many research structures are highly instrumented with strain gages, and the number and selection of gages used in a loads equation can be problematic. This paper presents an improved technique using a genetic algorithm to choose the strain gages used in the loads equations. Also presented are a comparison of the genetic algorithm performance with the current T-value technique and a variant known as the Best Step-down technique. Examples are shown using aerospace vehicle wings of high and low aspect ratio. In addition, a significant limitation in the current methods is revealed. The genetic algorithm arrived at a comparable or superior set of gages with significantly less human effort, and could be applied in instances when the current methods could not.
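The gage-selection idea can be sketched with a minimal genetic algorithm over fixed-size gage subsets: union crossover, a single-gage swap mutation, and elitism. The fitness used here (a sum of per-gage scores) is only a stand-in for illustration; the paper's method would instead score a subset by the least-squares fit error of the resulting loads equation:

```python
import random

def ga_select_gages(scores, k, pop_size=30, gens=40, seed=1):
    """Toy GA choosing k gages out of len(scores). An individual is a
    list of k distinct gage indices; higher total score is fitter."""
    rng = random.Random(seed)
    n = len(scores)
    fitness = lambda ind: sum(scores[i] for i in ind)
    pop = [rng.sample(range(n), k) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        next_pop = [list(ind) for ind in pop[:2]]    # elitism: keep best 2
        while len(next_pop) < pop_size:
            a, b = rng.sample(pop[:10], 2)           # parents from the top 10
            pool = list(set(a) | set(b))             # union crossover
            rng.shuffle(pool)
            child = pool[:k]
            if rng.random() < 0.2:                   # mutation: swap one gage
                g = rng.randrange(n)
                if g not in child:
                    child[rng.randrange(k)] = g
            next_pop.append(child)
        pop = next_pop
    return sorted(max(pop, key=fitness))
```

Swapping the stand-in fitness for an equation-fit criterion leaves the search loop unchanged, which is what makes the GA attractive compared with manual T-value screening.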
Estimating nutrient loadings using chemical mass balance approach.
Jain, C K; Singhal, D C; Sharma, M K
2007-11-01
The river Hindon is one of the important tributaries of river Yamuna in western Uttar Pradesh (India) and carries pollution loads from various municipal and industrial units and surrounding agricultural areas. The main sources of pollution in the river include municipal wastes from Saharanpur, Muzaffarnagar and Ghaziabad urban areas and industrial effluents of sugar, pulp and paper, distilleries and other miscellaneous industries through tributaries as well as direct inputs. In this paper, chemical mass balance approach has been used to assess the contribution from non-point sources of pollution to the river. The river system has been divided into three stretches depending on the land use pattern. The contribution of point sources in the upper and lower stretches are 95 and 81% respectively of the total flow of the river while there is no point source input in the middle stretch. Mass balance calculations indicate that contribution of nitrate and phosphate from non-point sources amounts to 15.5 and 6.9% in the upper stretch and 13.1 and 16.6% in the lower stretch respectively. Observed differences in the load along the river may be attributed to uncharacterized sources of pollution due to agricultural activities, remobilization from or entrainment of contaminated bottom sediments, ground water contribution or a combination of these sources. PMID:17616829
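The bookkeeping behind such a chemical mass balance is simple enough to sketch: loads are computed from concentration and discharge, and the non-point contribution along a stretch is whatever the upstream inflow and point sources cannot account for. The unit conversion is standard, but the example figures are illustrative, not values from the Hindon data:

```python
def load_kg_per_day(conc_mg_per_l, flow_m3_per_s):
    """Mass load from concentration (mg/L, i.e. g/m3) and discharge
    (m3/s): g/s scaled to kg/day."""
    return conc_mg_per_l * flow_m3_per_s * 86400.0 / 1000.0

def nonpoint_load(downstream, upstream, point_sources):
    """Chemical mass balance over a river stretch: load appearing
    between the upstream and downstream stations that point sources
    cannot account for is attributed to non-point sources."""
    return downstream - upstream - sum(point_sources)
```

For example, a stretch gaining 1000 kg/day at the downstream station from 600 kg/day upstream plus point inputs of 250 and 100 kg/day leaves 50 kg/day (5% of the downstream load) attributable to non-point sources.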
Load Balancing Scheme on the Basis of Huffman Coding for P2P Information Retrieval
NASA Astrophysics Data System (ADS)
Kurasawa, Hisashi; Takasu, Atsuhiro; Adachi, Jun
Although a distributed index on a distributed hash table (DHT) enables efficient document query processing in peer-to-peer information retrieval (P2P IR), the index is costly to construct and tends to be managed unfairly because of the unbalanced term frequency distribution. We devised a new distributed index for P2P IR, named Huffman-DHT. The new index uses an algorithm similar to Huffman coding, with a modification to the DHT structure based on the term distribution. In a Huffman-DHT, a frequent term is assigned a short ID and allocated a large share of the node ID space in the DHT. Through this ID management, the Huffman-DHT balances index registration accesses among peers and reduces load concentration. The Huffman-DHT is the first approach to apply concepts from coding theory and term frequency distributions to load balancing. We evaluated this approach in experiments using a document collection and assessed its load balancing capabilities in P2P IR. The experimental results indicate that it is most effective when the P2P system consists of about 30,000 nodes and contains many documents. Moreover, we show that a Huffman-DHT can be constructed easily by estimating the probability distribution of term occurrences from a small number of sample documents.
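The ID-length assignment can be illustrated with plain Huffman coding over term frequencies: frequent terms end up with short codes, and under the Huffman-DHT scheme would therefore receive correspondingly large shares of the node ID space. The term frequencies below are invented for illustration:

```python
import heapq
import itertools

def huffman_code_lengths(freqs):
    """Huffman code length per symbol: repeatedly merge the two
    least-frequent subtrees, deepening every symbol inside them by 1.
    Heap entries carry a tiebreak counter so dicts are never compared."""
    counter = itertools.count()
    heap = [(f, next(counter), {s: 0}) for s, f in freqs.items()]
    heapq.heapify(heap)
    if len(heap) == 1:                    # degenerate one-symbol alphabet
        return {s: 1 for s in heap[0][2]}
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)
        f2, _, d2 = heapq.heappop(heap)
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, next(counter), merged))
    return heap[0][2]
```

A term with code length L would own roughly a 2^-L fraction of the ID space, so the most frequent term gets the largest share and the registration load spreads more evenly across peers.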
A Hybrid Ant Colony Algorithm for Loading Pattern Optimization
NASA Astrophysics Data System (ADS)
Hoareau, F.
2014-06-01
Electricité de France (EDF) operates 58 nuclear power plants (NPPs) of the pressurized water reactor (PWR) type. The loading pattern (LP) optimization of these NPPs is currently done by EDF expert engineers. Within this framework, EDF R&D has developed automatic optimization tools that assist the experts; they can resort, for instance, to loading pattern optimization software based on an ant colony algorithm. This paper presents an analysis of the search space of a few realistic loading pattern optimization problems. This analysis leads us to introduce a hybrid algorithm combining ant colony optimization with a local search method. We then show that this new algorithm is able to generate loading patterns of good quality.
A network flow model for load balancing in circuit-switched multicomputers
NASA Technical Reports Server (NTRS)
Bokhari, Shahid H.
1993-01-01
In multicomputers that utilize circuit switching or wormhole routing, communication overhead depends largely on link contention - the variation due to distance between nodes is negligible. This has a major impact on the load balancing problem. In this case, there are some nodes with excess load (sources) and others with deficit load (sinks) and it is required to find a matching of sources to sinks that avoids contention. The problem is made complex by the hardwired routing on currently available machines: the user can control only which nodes communicate but not how the messages are routed. Network flow models of message flow in the mesh and the hypercube were developed to solve this problem. The crucial property of these models is the correspondence between minimum cost flows and correctly routed messages. To solve a given load balancing problem, a minimum cost flow algorithm is applied to the network. This permits one to determine efficiently a maximum contention free matching of sources to sinks which, in turn, tells one how much of the given imbalance can be eliminated without contention.
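The core computation the model reduces to, a minimum cost flow matching excess-load sources to deficit-load sinks, can be sketched with the successive-shortest-path algorithm using Bellman-Ford (which tolerates the negative-cost residual edges). The example graph below is a toy, not one of the paper's mesh or hypercube models:

```python
def min_cost_flow(n, edges, s, t, want):
    """Successive-shortest-path min-cost flow. `edges` is a list of
    (u, v, capacity, cost); forward/residual edges are stored in pairs
    so `e ^ 1` addresses an edge's reverse."""
    graph = [[] for _ in range(n)]
    to, cap, cost = [], [], []
    def add(u, v, c, w):
        graph[u].append(len(to)); to.append(v); cap.append(c); cost.append(w)
        graph[v].append(len(to)); to.append(u); cap.append(0); cost.append(-w)
    for u, v, c, w in edges:
        add(u, v, c, w)
    flow = total = 0
    while flow < want:
        INF = float('inf')
        dist, prev = [INF] * n, [-1] * n
        dist[s] = 0
        for _ in range(n - 1):                 # Bellman-Ford relaxations
            changed = False
            for u in range(n):
                if dist[u] == INF:
                    continue
                for e in graph[u]:
                    if cap[e] > 0 and dist[u] + cost[e] < dist[to[e]]:
                        dist[to[e]] = dist[u] + cost[e]
                        prev[to[e]] = e
                        changed = True
            if not changed:
                break
        if dist[t] == INF:
            break                              # no augmenting path remains
        push, v = want - flow, t
        while v != s:                          # bottleneck along the path
            push = min(push, cap[prev[v]]); v = to[prev[v] ^ 1]
        v = t
        while v != s:                          # apply the augmentation
            cap[prev[v]] -= push; cap[prev[v] ^ 1] += push; v = to[prev[v] ^ 1]
        flow += push
        total += push * dist[t]
    return flow, total
```

In the load-balancing setting, edge capacities encode link and node limits while costs penalize contention, so the amount of flow actually achieved tells how much imbalance can be eliminated contention-free.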
A network flow model for load balancing in circuit-switched multicomputers
NASA Technical Reports Server (NTRS)
Bokhari, Shahid H.
1990-01-01
In multicomputers that utilize circuit switching or wormhole routing, communication overhead depends largely on link contention - the variation due to distance between nodes is negligible. This has a major impact on the load balancing problem. In this case, there are some nodes with excess load (sources) and others with deficit load (sinks) and it is required to find a matching of sources to sinks that avoids contention. The problem is made complex by the hardwired routing on currently available machines: the user can control only which nodes communicate but not how the messages are routed. Network flow models of message flow in the mesh and the hypercube were developed to solve this problem. The crucial property of these models is the correspondence between minimum cost flows and correctly routed messages. To solve a given load balancing problem, a minimum cost flow algorithm is applied to the network. This permits one to determine efficiently a maximum contention free matching of sources to sinks which, in turn, tells one how much of the given imbalance can be eliminated without contention.
A Baseline Load Schedule for the Manual Calibration of a Force Balance
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Gisler, R.
2013-01-01
A baseline load schedule for the manual calibration of a force balance was developed that takes current capabilities at the NASA Ames Balance Calibration Laboratory into account. The load schedule consists of 18 load series with a total of 194 data points. It was designed to satisfy six requirements: (i) positive and negative loadings should be applied for each load component; (ii) at least three loadings should be applied between 0 % and 100 % load capacity; (iii) normal and side force loadings should be applied at the forward gage location, the aft gage location, and the balance moment center; (iv) the balance should be used in UP and DOWN orientation to get axial force loadings; (v) the constant normal and side force approaches should be used to get the rolling moment loadings; (vi) rolling moment loadings should be obtained for 0, 90, 180, and 270 degrees balance orientation. Three different approaches are also reviewed that may be used to independently estimate the natural zeros of the balance. These three approaches provide gage output differences that may be used to estimate the weight of both the metric and non-metric part of the balance. Manual calibration data of NASA s MK29A balance and machine calibration data of NASA s MC60D balance are used to illustrate and evaluate different aspects of the proposed baseline load schedule design.
A Baseline Load Schedule for the Manual Calibration of a Force Balance
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Gisler, R.
2013-01-01
A baseline load schedule for the manual calibration of a force balance is defined that takes current capabilities at the NASA Ames Balance Calibration Laboratory into account. The chosen load schedule consists of 18 load series with a total of 194 data points. It was designed to satisfy six requirements: (i) positive and negative loadings should be applied for each load component; (ii) at least three loadings should be applied between 0 % and 100 % load capacity; (iii) normal and side force loadings should be applied at the forward gage location, aft gage location, and the balance moment center; (iv) the balance should be used in "up" and "down" orientation to get positive and negative axial force loadings; (v) the constant normal and side force approaches should be used to get the rolling moment loadings; (vi) rolling moment loadings should be obtained for 0, 90, 180, and 270 degrees balance orientation. In addition, three different approaches are discussed in the paper that may be used to independently estimate the natural zeros, i.e., the gage outputs of the absolute load datum of the balance. These three approaches provide gage output differences that can be used to estimate the weight of both the metric and non-metric part of the balance. Data from the calibration of a six-component force balance will be used in the final manuscript of the paper to illustrate characteristics of the proposed baseline load schedule.
Carmichael, H.
1953-01-01
A torsional-type analytical balance designed to arrive at its equilibrium point more quickly than previous balances is described. In order to prevent external heat sources creating air currents inside the balance casing that would retard the attainment of equilibrium conditions, a relatively thick casing shaped as an inverted U is placed over the load support arms and the balance beam. This casing is of a metal of good thermal conductivity characteristics, such as copper or aluminum, in order that heat applied to one portion of the balance is quickly conducted to all other sensitive areas, thus effectively preventing the formation of air currents caused by unequal heating of the balance.
Dynamic load balancing in a concurrent plasma PIC code on the JPL/Caltech Mark III hypercube
Liewer, P.C.; Leaver, E.W.; Decyk, V.K.; Dawson, J.M.
1990-12-31
Dynamic load balancing has been implemented in a concurrent one-dimensional electromagnetic plasma particle-in-cell (PIC) simulation code using a method which adds very little overhead to the parallel code. In PIC codes, the orbits of many interacting plasma electrons and ions are followed as an initial value problem as the particles move in electromagnetic fields calculated self-consistently from the particle motions. The code was implemented using the GCPIC algorithm in which the particles are divided among processors by partitioning the spatial domain of the simulation. The problem is load-balanced by partitioning the spatial domain so that each partition has approximately the same number of particles. During the simulation, the partitions are dynamically recreated as the spatial distribution of the particles changes in order to maintain processor load balance.
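The balancing step of the GCPIC decomposition described above (repartition the spatial domain so each partition holds roughly the same number of particles) can be illustrated in one dimension with a quantile-based sketch; `balance_partitions` below is an illustrative reconstruction, not the authors' code.

```python
import numpy as np

def balance_partitions(positions, num_procs):
    """Choose interior partition boundaries so that each processor's
    subdomain holds roughly the same number of particles."""
    srt = np.sort(np.asarray(positions))
    n = len(srt)
    # Interior boundaries sit at equal-count quantiles of the sorted
    # particle positions; as particles drift, recomputing these cuts
    # rebalances the domains.
    return [srt[(i * n) // num_procs] for i in range(1, num_procs)]

# Particles bunched near x = 0 still split into equal-count subdomains.
rng = np.random.default_rng(0)
positions = rng.exponential(scale=1.0, size=10_000)
cuts = balance_partitions(positions, 4)
edges = [0.0] + list(cuts) + [positions.max() + 1.0]
counts, _ = np.histogram(positions, bins=edges)
```

Recomputing the cuts periodically as the particle distribution evolves mirrors the dynamic recreation of partitions described in the abstract.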
MCNP load balancing and fault tolerance with PVM
McKinney, G.W.
1995-07-01
Version 4A of the Monte Carlo neutron, photon, and electron transport code MCNP, developed by LANL (Los Alamos National Laboratory), supports distributed-memory multiprocessing through the software package PVM (Parallel Virtual Machine, version 3.1.4). Using PVM for interprocessor communication, MCNP can simultaneously execute a single problem on a cluster of UNIX-based workstations. This capability provided system efficiencies that exceeded 80% on dedicated workstation clusters; on heterogeneous or multiuser systems, however, performance was limited by the slowest processor (i.e., equal work was assigned to each processor). The next public release of MCNP will provide multiprocessing enhancements that include load balancing and fault tolerance, which are shown to dramatically increase multiuser system efficiency and reliability.
Electricity Load Forecasting Using Support Vector Regression with Memetic Algorithms
Hu, Zhongyi; Xiong, Tao
2013-01-01
Electricity load forecasting is an important issue that is widely explored and examined both in the power systems operation literature and in commercial transactions in electricity markets. Among the existing forecasting models, support vector regression (SVR) has gained much attention. Because the performance of SVR depends strongly on its parameters, this study proposes a firefly algorithm (FA) based memetic algorithm (FA-MA) to appropriately determine the parameters of the SVR forecasting model. In the proposed FA-MA algorithm, the FA is applied to explore the solution space, and pattern search is used to conduct individual learning and thus enhance the exploitation of the FA. Experimental results confirm that the proposed FA-MA based SVR model not only yields more accurate forecasting results than four other evolutionary algorithm based SVR models and three well-known forecasting models, but also outperforms the hybrid algorithms in the related existing literature. PMID:24459425
Some important observations on fast decoupled load flow algorithm
Nanda, J.; Kothari, D.P.; Srivastava, S.C.
1987-05-01
This letter brings out clearly, for the first time, the relative importance of some of the assumptions made by B. Stott and O. Alsac in their fast decoupled load flow (FDLF) algorithm with respect to its convergence properties. Results have been obtained for two sample IEEE test systems. The conclusions of this work are envisaged to be of immense practical relevance when developing a fast decoupled load flow program.
A single-stage optical load-balanced switch for data centers.
Huang, Qirui; Yeo, Yong-Kee; Zhou, Luying
2012-10-22
Load balancing is an attractive technique to achieve maximum throughput and optimal resource utilization in large-scale switching systems. However, current electronic load-balanced switches suffer from severe problems in implementation cost, power consumption, and scaling. To overcome these problems, in this paper we propose a single-stage optical load-balanced switch architecture based on an arrayed waveguide grating router (AWGR) in conjunction with fast tunable lasers. By reuse of the fast tunable lasers, the switch achieves both functions of load balancing and switching through the AWGR. With this architecture, proof-of-concept experiments have been conducted to investigate the feasibility of the optical load-balanced switch and to examine its physical performance. Compared to three-stage load-balanced switches, the reported switch needs only half the optical devices, such as tunable lasers and AWGRs, and can thus provide a cost-effective solution for future data centers. PMID:23187266
Combined Load Diagram for a Wind Tunnel Strain-Gage Balance
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2010-01-01
Combined Load Diagrams for Direct-Read, Force, and Moment Balances are discussed in great detail in the paper. The diagrams, if compared with a corresponding combined load plot of a balance calibration data set, may be used to visualize and interpret basic relationships between the applied balance calibration loads and the load components at the forward and aft gage of a strain-gage balance. Lines of constant total force and moment are identified in the diagrams. In addition, the lines of pure force and pure moment are highlighted. Finally, lines of constant moment arm are depicted. It is also demonstrated that each quadrant of a Combined Load Diagram has specific regions where the applied total calibration force is at, between, or outside of the balance gage locations. Data from the manual calibration of a force balance is used to illustrate the application of a Combined Load Diagram to a realistic data set.
The Plasma Simulation Code: A modern particle-in-cell code with patch-based load-balancing
NASA Astrophysics Data System (ADS)
Germaschewski, Kai; Fox, William; Abbott, Stephen; Ahmadi, Narges; Maynard, Kristofor; Wang, Liang; Ruhl, Hartmut; Bhattacharjee, Amitava
2016-08-01
This work describes the Plasma Simulation Code (PSC), an explicit, electromagnetic particle-in-cell code with support for different order particle shape functions. We review the basic components of the particle-in-cell method as well as the computational architecture of the PSC code that allows support for modular algorithms and data structures in the code. We then describe and analyze in detail a distinguishing feature of PSC: patch-based load balancing using space-filling curves, which is shown to lead to major efficiency gains over unbalanced methods and over a simpler balancing method used previously.
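The patch-based balancing idea can be illustrated with a toy sketch: order the patches along a Z-order (Morton) space-filling curve, then cut the curve into contiguous, weight-balanced pieces so each rank receives a spatially compact set of patches. PSC's actual implementation differs in detail; `morton_key` and `assign_patches` are illustrative names.

```python
def morton_key(ix, iy, bits=8):
    """Interleave the bits of a patch's (ix, iy) grid index to get its
    position along a Z-order (Morton) space-filling curve."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (2 * b)
        key |= ((iy >> b) & 1) << (2 * b + 1)
    return key

def assign_patches(patch_weights, num_procs):
    """Order patches along the curve, then cut the curve into
    num_procs contiguous pieces of roughly equal total weight."""
    order = sorted(patch_weights, key=lambda cell: morton_key(*cell))
    total = sum(patch_weights.values())
    assignment, acc, rank = {}, 0.0, 0
    for cell in order:
        # Advance to the next rank once its weight quota is filled.
        if rank < num_procs - 1 and acc >= total * (rank + 1) / num_procs:
            rank += 1
        acc += patch_weights[cell]
        assignment[cell] = rank
    return assignment

# A uniform 4x4 patch grid split among 4 ranks: 4 patches each.
grid = {(ix, iy): 1.0 for ix in range(4) for iy in range(4)}
assignment = assign_patches(grid, 4)
```

With non-uniform per-patch weights (e.g. particle counts), the same cut rule yields unequal patch counts but near-equal work per rank, which is the point of the scheme.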
Dynamic load balancing of matrix-vector multiplications on roadrunner compute nodes
Sancho Pitarch, Jose Carlos
2009-01-01
Hybrid architectures that combine general purpose processors with accelerators are being adopted in several large-scale systems such as the petaflop Roadrunner supercomputer at Los Alamos. In this system, dual-core Opteron host processors are tightly coupled with PowerXCell 8i processors within each compute node. In this kind of hybrid architecture, an accelerated mode of operation is typically used to offload performance hotspots in the computation to the accelerators. In this paper we explore the suitability of a variant of this acceleration mode in which the performance hotspots are actually shared between the host and the accelerators. To achieve this we have designed a new load balancing algorithm, which is optimized for the Roadrunner compute nodes, to dynamically distribute computation and associated data between the host and the accelerators at runtime. Results are presented using this approach for sparse and dense matrix-vector multiplications that show load-balancing can improve performance by up to 24% over solely using the accelerators.
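A minimal sketch of the runtime host/accelerator work split described above, under the assumption that the split is adjusted from measured execution times so that both sides finish together; the `rebalance_split` helper and its damping factor are hypothetical, not the paper's algorithm.

```python
def rebalance_split(frac_host, t_host, t_acc, damping=0.5):
    """Estimate per-row throughput of host and accelerator from the
    measured times of the last iteration, and move the split toward
    the point where both sides would finish simultaneously."""
    rate_host = frac_host / t_host          # rows per time unit, host
    rate_acc = (1.0 - frac_host) / t_acc    # rows per time unit, accelerator
    target = rate_host / (rate_host + rate_acc)
    # Damped update to avoid oscillation when timings are noisy.
    return frac_host + damping * (target - frac_host)

# Simulated node where the accelerator is 4x faster than the host:
# the split converges to giving the host 20% of the rows.
frac = 0.5
for _ in range(30):
    t_host = frac / 1.0          # host processes 1 row per time unit
    t_acc = (1.0 - frac) / 4.0   # accelerator processes 4 rows per time unit
    frac = rebalance_split(frac, t_host, t_acc)
```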
Algorithms for Determining Physical Responses of Structures Under Load
NASA Technical Reports Server (NTRS)
Richards, W. Lance; Ko, William L.
2012-01-01
Ultra-efficient real-time structural monitoring algorithms have been developed to provide extensive information about the physical response of structures under load. These algorithms are driven by actual strain data to accurately measure local strains at multiple locations on the surface of a structure. Through a single point load calibration test, these structural strains are then used to calculate key physical properties of the structure at each measurement location. Such properties include the structure's flexural rigidity (the product of the structure's modulus of elasticity and its moment of inertia) and the section modulus (the moment of inertia divided by the structure's half-depth). The resulting structural properties at each location can be used to determine the structure's bending moment, shear, and structural loads in real time while the structure is in service. The amount of structural information can be maximized through the use of highly multiplexed fiber Bragg grating technology using optical time domain reflectometry and optical frequency domain reflectometry, which can provide a local strain measurement every 10 mm on a single hair-sized optical fiber. Since local strain is used as input to the algorithms, this system serves multiple purposes of measuring strains and displacements, as well as determining structural bending moment, shear, and loads for assessing real-time structural health. The first step is to install a series of strain sensors on the structure's surface in such a way as to measure bending strains at desired locations. The next step is to perform a simple ground test calibration. For a beam of length l (see example), discretized into n sections and subjected to a tip load of P that places the beam in bending, the flexural rigidity of the beam can be experimentally determined at each measurement location x. The bending moment at each station can then be determined for any general set of loads applied during operation.
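The calibration and in-service steps described above follow from classical beam bending: for a cantilever under tip load P, the bending moment at station x is M = P(l - x), and the surface strain is eps = M c / (EI) with c the half-depth. A sketch of the resulting round trip (function names are illustrative, not from the paper):

```python
def flexural_rigidity(P, length, x, strain, half_depth):
    """Calibration step: with a known tip load P on a cantilever of
    the given length, the moment at station x is M = P * (length - x),
    and the measured surface strain gives E*I = M * c / eps."""
    moment = P * (length - x)
    return moment * half_depth / strain

def bending_moment(EI, strain, half_depth):
    """In service, invert the same relation at that station to
    recover the bending moment: M = (E*I) * eps / c."""
    return EI * strain / half_depth

# Calibration with a known 100 N tip load on a 2 m beam, then a
# service measurement at the same station (x = 0.5 m, c = 10 mm).
EI = flexural_rigidity(P=100.0, length=2.0, x=0.5, strain=1.0e-4,
                       half_depth=0.01)
M_service = bending_moment(EI, strain=2.0e-4, half_depth=0.01)
```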
Genetic Algorithm based Decentralized PI Type Controller: Load Frequency Control
NASA Astrophysics Data System (ADS)
Dwivedi, Atul; Ray, Goshaidas; Sharma, Arun Kumar
2016-05-01
This work presents a design of a decentralized PI type Linear Quadratic (LQ) controller based on a genetic algorithm (GA). The proposed design technique allows considerable flexibility in defining the control objectives, does not require any knowledge of the system matrices, and moreover avoids the solution of the algebraic Riccati equation. To illustrate the results of this work, a load-frequency control problem is considered. Simulation results reveal that the proposed GA-based scheme is an alternative and attractive approach to solving the load-frequency control problem from both performance and design points of view.
Tseng, Chinyang Henry
2016-01-01
In wireless networks, low-power Zigbee is an excellent network solution for wireless medical monitoring systems. Medical monitoring generally involves transmission of a large amount of data and easily causes bottleneck problems. Although Zigbee's AODV mesh routing provides extensible multi-hop data transmission to extend network coverage, it does not natively support a load balancing mechanism to avoid bottlenecks. To guarantee a more reliable multi-hop data transmission for life-critical medical applications, we have developed a multipath solution, called Load-Balanced Multipath Routing (LBMR), to replace Zigbee's routing mechanism. LBMR consists of three main parts: Layer Routing Construction (LRC), a Load Estimation Algorithm (LEA), and a Route Maintenance (RM) mechanism. LRC assigns nodes to different layers based on each node's distance to the medical data gateway. Nodes can have multiple next-hops delivering medical data toward the gateway. All neighboring layer-nodes exchange flow information containing their current load, which is then used by the LEA to estimate the future load of next-hops to the gateway. With LBMR, nodes can choose the neighbors with the least load as the next-hops and thus can achieve load balancing and avoid bottlenecks. Furthermore, RM can detect route failures in real time and perform route redirection to ensure routing robustness. Since LRC and LEA prevent bottlenecks while RM ensures routing fault tolerance, LBMR provides a highly reliable routing service for medical monitoring. To evaluate these accomplishments, we compare LBMR with Zigbee's AODV and another multipath protocol, AOMDV. The simulation results demonstrate that LBMR achieves better load balancing, fewer unreachable nodes, and a better packet delivery ratio than either AODV or AOMDV. PMID:27258297
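The LEA-based forwarding decision can be sketched as picking the candidate next-hop with the smallest estimated future load; the blending estimator below is a hypothetical stand-in for the paper's actual load estimate.

```python
def estimate_future_load(current_load, queued_packets, alpha=0.5):
    """Hypothetical LEA-style estimate: blend a neighbor's reported
    current load with its queued traffic (the exact estimator used
    by LBMR may differ)."""
    return current_load + alpha * queued_packets

def choose_next_hop(flow_info):
    """Forward toward the gateway through the candidate next-hop
    with the least estimated future load."""
    return min(flow_info,
               key=lambda n: estimate_future_load(*flow_info[n]))

# Candidate next-hops mapped to (current load, queued packets):
# estimates are B = 7.0, C = 4.0, D = 9.0, so C is chosen.
next_hop = choose_next_hop({"B": (7, 0), "C": (3, 2), "D": (5, 8)})
```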
Experience with automatic, dynamic load balancing and adaptive finite element computation
Wheat, S.R.; Devine, K.D.; Maccabe, A.B.
1993-10-01
Distributed memory, Massively Parallel (MP), MIMD technology has enabled the development of applications requiring computational resources previously unobtainable. Structural mechanics and fluid dynamics applications, for example, are often solved by finite element methods (FEMs) requiring millions of degrees of freedom to accurately simulate physical phenomena. Adaptive methods, which automatically refine or coarsen meshes and vary the order of accuracy of the numerical solution, offer greater robustness and computational efficiency than traditional FEMs by reducing the amount of computation required away from physical structures such as shock waves and boundary layers. On MP computers, FEMs frequently result in distributed processor load imbalances. To overcome load imbalance, many MP FEMs use static load balancing as a preprocessor to the finite element calculation. Adaptive methods complicate the load imbalance problem since the work per element is not uniform across the solution domain and changes as the computation proceeds. Therefore, dynamic load balancing is required to maintain global load balance. We describe a dynamic, fine-grained, element-based data migration system that maintains global load balance and is effective in the presence of changing work loads. Global load balance is achieved by overlapping neighborhoods of processors, where each neighborhood performs local load balancing. The method utilizes an automatic element management system library to which a programmer integrates the application's computational description. The library's flexibility supports a large class of finite element and finite difference based applications.
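A common way to realize neighborhood-local balancing of this kind is a diffusion scheme, sketched below: each processor repeatedly ships a fraction of the load difference to each lighter neighbor, and purely local exchanges drive the system toward global balance. This is a generic illustration, not the element-migration system of the abstract.

```python
def diffusion_step(loads, neighbors, alpha=0.25):
    """One round of neighborhood-local balancing: every processor
    ships a fraction alpha of the load difference to each lighter
    neighbor.  Total load is conserved, and repetition converges
    toward uniform load."""
    new = dict(loads)
    for p, nbrs in neighbors.items():
        for q in nbrs:
            delta = alpha * (loads[p] - loads[q])
            if delta > 0.0:          # only the heavier side sends
                new[p] -= delta
                new[q] += delta
    return new

# Four processors in a ring, with all work initially on processor 0.
loads = {0: 8.0, 1: 0.0, 2: 0.0, 3: 0.0}
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
for _ in range(60):
    loads = diffusion_step(loads, ring)
```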
Comparison of Iterative and Non-Iterative Strain-Gage Balance Load Calculation Methods
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2010-01-01
The accuracy of iterative and non-iterative strain-gage balance load calculation methods was compared using data from the calibration of a force balance. Two iterative and one non-iterative method were investigated. In addition, transformations were applied to balance loads in order to process the calibration data in both direct read and force balance format. NASA's regression model optimization tool BALFIT was used to generate optimized regression models of the calibration data for each of the three load calculation methods. This approach made sure that the selected regression models met strict statistical quality requirements. The comparison of the standard deviation of the load residuals showed that the first iterative method may be applied to data in both the direct read and force balance format. The second iterative method, on the other hand, implicitly assumes that the primary gage sensitivities of all balance gages exist. Therefore, the second iterative method only works if the given balance data is processed in force balance format. The calibration data set was also processed using the non-iterative method. Standard deviations of the load residuals for the three load calculation methods were compared. Overall, the standard deviations show very good agreement. The load prediction accuracies of the three methods appear to be compatible as long as regression models used to analyze the calibration data meet strict statistical quality requirements. Recent improvements of the regression model optimization tool BALFIT are also discussed in the paper.
Influence of Primary Gage Sensitivities on the Convergence of Balance Load Iterations
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert Manfred
2012-01-01
The connection between the convergence of wind tunnel balance load iterations and the existence of the primary gage sensitivities of a balance is discussed. First, basic elements of two load iteration equations that the iterative method uses in combination with results of a calibration data analysis for the prediction of balance loads are reviewed. Then, the connection between the primary gage sensitivities, the load format, the gage output format, and the convergence characteristics of the load iteration equation choices is investigated. A new criterion is also introduced that may be used to objectively determine if the primary gage sensitivity of a balance gage exists. Then, it is shown that both load iteration equations will converge as long as a suitable regression model is used for the analysis of the balance calibration data, the combined influence of non-linear terms of the regression model is very small, and the primary gage sensitivities of all balance gages exist. The last requirement is fulfilled, e.g., if force balance calibration data is analyzed in force balance format. Finally, it is demonstrated that only one of the two load iteration equation choices, i.e., the iteration equation used by the primary load iteration method, converges if one or more primary gage sensitivities are missing. This situation may occur, e.g., if force balance calibration data is analyzed in direct read format using the original gage outputs. Data from the calibration of a six-component force balance is used to illustrate the connection between the convergence of the load iteration equation choices and the existence of the primary gage sensitivities.
Assessment of the Uniqueness of Wind Tunnel Strain-Gage Balance Load Predictions
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2016-01-01
A new test was developed to assess the uniqueness of wind tunnel strain-gage balance load predictions that are obtained from regression models of calibration data. The test helps balance users to gain confidence in load predictions of non-traditional balance designs. It also makes it possible to better evaluate load predictions of traditional balances that are not used as originally intended. The test works for both the Iterative and Non-Iterative Methods that are used in the aerospace testing community for the prediction of balance loads. It is based on the hypothesis that the total number of independently applied balance load components must always match the total number of independently measured bridge outputs or bridge output combinations. This hypothesis is supported by a control volume analysis of the inputs and outputs of a strain-gage balance. It is concluded from the control volume analysis that the loads and bridge outputs of a balance calibration data set must separately be tested for linear independence because it cannot always be guaranteed that a linearly independent load component set will result in linearly independent bridge output measurements. Simple linear math models for the loads and bridge outputs in combination with the variance inflation factor are used to test for linear independence. A highly unique and reversible mapping between the applied load component set and the measured bridge output set is guaranteed to exist if the maximum variance inflation factor of both sets is less than the literature recommended threshold of five. Data from the calibration of a six-component force balance is used to illustrate the application of the new test to real-world data.
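The variance inflation factor test described above can be reproduced directly: regress each column of the load (or output) set on the remaining columns and compute VIF_j = 1 / (1 - R_j^2); a maximum VIF below the quoted threshold of five indicates the set is effectively linearly independent. A sketch with NumPy (the helper name is illustrative):

```python
import numpy as np

def variance_inflation_factors(X):
    """VIF of each column of X, from the R^2 of regressing that
    column on all remaining columns (plus an intercept)."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    vifs = []
    for j in range(k):
        y = X[:, j]
        A = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        ss_res = float(resid @ resid)
        ss_tot = float(((y - y.mean()) ** 2).sum())
        r2 = 1.0 - ss_res / ss_tot
        vifs.append(1.0 / (1.0 - r2) if r2 < 1.0 else float("inf"))
    return vifs

rng = np.random.default_rng(1)
independent = rng.normal(size=(100, 3))   # well-conditioned column set
collinear = np.column_stack(              # fourth column is an exact
    [independent, independent[:, 0] + independent[:, 1]])  # linear combo
```

For the random set, every VIF is close to one; adding the exactly dependent fourth column drives the maximum VIF far above the threshold.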
Fast computing global structural balance in signed networks based on memetic algorithm
NASA Astrophysics Data System (ADS)
Sun, Yixiang; Du, Haifeng; Gong, Maoguo; Ma, Lijia; Wang, Shanfeng
2014-12-01
Structural balance is a large area of study in signed networks, and it is intrinsically a global property of the whole network. Computing global structural balance in signed networks, which has attracted some attention in recent years, means measuring how unbalanced a signed network is, and it is an NP-hard problem. Many approaches have been developed to compute global balance; however, the results obtained by them are partial and unsatisfactory. In this study, the computation of global structural balance is solved as an optimization problem by using a Memetic Algorithm. The optimization algorithm, named Meme-SB, is proposed to optimize an evaluation function, the energy function, which is used to compute a distance to exact balance. Our proposed algorithm combines a Genetic Algorithm and a greedy strategy as the local search procedure. Experiments on social and biological networks show the excellent effectiveness and efficiency of the proposed method.
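The energy function and the greedy local-search half of such a memetic scheme can be sketched for the simplest case of a two-group node partition; the energy counts frustrated edges, and its minimum over partitions is the distance to exact balance. (Meme-SB's actual energy definition and search operators may differ; this is a minimal illustration.)

```python
def energy(edges, side):
    """Number of frustrated edges for a two-group partition 'side':
    a positive edge across groups, or a negative edge within one."""
    bad = 0
    for u, v, sign in edges:
        same = side[u] == side[v]
        if (sign > 0) != same:
            bad += 1
    return bad

def greedy_improve(edges, side):
    """Greedy local search of the kind used inside a memetic loop:
    keep flipping any node whose flip lowers the energy until no
    single flip helps."""
    improved = True
    while improved:
        improved = False
        for node in list(side):
            before = energy(edges, side)
            side[node] ^= 1
            if energy(edges, side) < before:
                improved = True
            else:
                side[node] ^= 1   # revert the unhelpful flip
    return side

# A balanced triangle (one positive, two negative edges): the search
# finds a partition with zero frustrated edges.
edges = [(0, 1, +1), (1, 2, -1), (0, 2, -1)]
side = greedy_improve(edges, {0: 0, 1: 0, 2: 0})
```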
Assessment of New Load Schedules for the Machine Calibration of a Force Balance
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Gisler, R.; Kew, R.
2015-01-01
New load schedules for the machine calibration of a six-component force balance are currently being developed and evaluated at the NASA Ames Balance Calibration Laboratory. One of the proposed load schedules is discussed in the paper. It has a total of 2082 points that are distributed across 16 load series. Several criteria were applied to define the load schedule. It was decided, for example, to specify the calibration load set in force balance format as this approach greatly simplifies the definition of the lower and upper bounds of the load schedule. In addition, all loads are assumed to be applied in a calibration machine by using the one-factor-at-a-time approach. At first, all single-component loads are applied in six load series. Then, three two-component load series are applied. They consist of the load pairs (N1, N2), (S1, S2), and (RM, AF). Afterwards, four three-component load series are applied. They consist of the combinations (N1, N2, AF), (S1, S2, AF), (N1, N2, RM), and (S1, S2, RM). In the next step, one four-component load series is applied. It is the load combination (N1, N2, S1, S2). Finally, two five-component load series are applied. They are the load combination (N1, N2, S1, S2, AF) and (N1, N2, S1, S2, RM). The maximum difference between loads of two subsequent data points of the load schedule is limited to 33 % of capacity. This constraint helps avoid unwanted load "jumps" in the load schedule that can have a negative impact on the performance of a calibration machine. Only loadings of the single- and two-component load series are loaded to 100 % of capacity. This approach was selected because it keeps the total number of calibration points to a reasonable limit while still allowing for the application of some of the more complex load combinations. Data from two of NASA's force balances is used to illustrate important characteristics of the proposed 2082-point calibration load schedule.
NASA Technical Reports Server (NTRS)
Sohn, Andrew; Biswas, Rupak; Simon, Horst D.
1996-01-01
The computational requirements for an adaptive solution of unsteady problems change as the simulation progresses. This causes workload imbalance among processors on a parallel machine which, in turn, requires significant data movement at runtime. We present a new dynamic load-balancing framework, called JOVE, that balances the workload across all processors with a global view. Whenever the computational mesh is adapted, JOVE is activated to eliminate the load imbalance. JOVE has been implemented on an IBM SP2 distributed-memory machine in MPI for portability. Experimental results for two model meshes demonstrate that mesh adaption with load balancing gives more than a sixfold improvement over one without load balancing. We also show that JOVE gives a 24-fold speedup on 64 processors compared to sequential execution.
LAKE MICHIGAN MASS BALANCE: ATRAZINE MODELING AND LOADS
The Lake Michigan Mass Balance Study measured PCBs, mercury, trans-nonachlor, and atrazine in rivers, the atmosphere, sediments, lake water, and the food chain. A mathematical model will predict what effect reducing pollution will have on the lake, and its large fish (lake trout ...
A Load Balanced Domain Decomposition Method Using Wavelet Analysis
Jameson, L; Johnson, J; Hesthaven, J
2001-05-31
Wavelet Analysis provides an orthogonal basis set which is localized in both the physical space and the Fourier transform space. We present here a domain decomposition method that uses wavelet analysis to maintain roughly uniform error throughout the computation domain while keeping the computational work balanced in a parallel computing environment.
Balancing the Load: How to Engage Counselors in School Improvement
ERIC Educational Resources Information Center
Mallory, Barbara J.; Jackson, Mary H.
2007-01-01
Principals cannot lead the school improvement process alone. They must enlist the help of others in the school community. School counselors, whose role is often viewed as peripheral and isolated from teaching and learning, can help principals, teachers, students, and parents balance the duties and responsibilities involved in continuous student…
Dynamic load balancing data centric storage for wireless sensor networks.
Song, Seokil; Bok, Kyoungsoo; Kwak, Yun Sik; Goo, Bongeun; Kwak, Youngsik; Ko, Daesik
2010-01-01
In this paper, a new data centric storage scheme that dynamically adapts to changes in the work load is proposed. The proposed data centric storage distributes the load of hot spot areas to neighboring sensor nodes by using a multilevel grid technique. The proposed method is also able to use existing routing protocols such as GPSR (Greedy Perimeter Stateless Routing) with small changes. Through simulation, the proposed method is shown to enhance the lifetime of sensor networks compared with a state-of-the-art data centric storage scheme. We implement the proposed method on an operating system for sensor networks and evaluate its performance using a simulation tool. PMID:22163472
Using Multithreading for the Automatic Load Balancing of 2D Adaptive Finite Element Meshes
NASA Technical Reports Server (NTRS)
Heber, Gerd; Biswas, Rupak; Thulasiraman, Parimala; Gao, Guang R.; Bailey, David H. (Technical Monitor)
1998-01-01
In this paper, we present a multi-threaded approach for the automatic load balancing of adaptive finite element (FE) meshes. The platform of our choice is the EARTH multi-threaded system, which offers sufficient capabilities to tackle this problem. We implement the adaption phase of FE applications on triangular meshes, and exploit the EARTH token mechanism to automatically balance the resulting irregular and highly nonuniform workload. We discuss the results of our experiments on EARTH-SP2, an implementation of EARTH on the IBM SP2, with different load balancing strategies that are built into the runtime system.
Using Multi-threading for the Automatic Load Balancing of 2D Adaptive Finite Element Meshes
NASA Technical Reports Server (NTRS)
Heber, Gerd; Biswas, Rupak; Thulasiraman, Parimala; Gao, Guang R.; Saini, Subhash (Technical Monitor)
1998-01-01
In this paper, we present a multi-threaded approach for the automatic load balancing of adaptive finite element (FE) meshes. The platform of our choice is the EARTH multi-threaded system, which offers sufficient capabilities to tackle this problem. We implement the adaption phase of FE applications on triangular meshes, and exploit the EARTH token mechanism to automatically balance the resulting irregular and highly nonuniform workload. We discuss the results of our experiments on EARTH-SP2, an implementation of EARTH on the IBM SP2, with different load balancing strategies that are built into the runtime system.
Ebtehaj, Isa; Bonakdari, Hossein
2014-01-01
The existence of sediments in wastewater greatly affects the performance of sewer and wastewater transmission systems. Increased sedimentation in wastewater collection systems causes problems such as reduced transmission capacity and early combined sewer overflow. This article reviews the performance of the genetic algorithm (GA) and the imperialist competitive algorithm (ICA) in minimizing the target function (the mean square error of the observed and predicted Froude numbers). To study the impact of bed load transport parameters, six different models based on four non-dimensional groups are presented. Moreover, the roulette wheel selection method is used to select the parents. For the selected model, the ICA, with root mean square error (RMSE) = 0.007 and mean absolute percentage error (MAPE) = 3.5%, shows better results than the GA (RMSE = 0.007, MAPE = 5.6%); the ICA outperforms the GA in all six models. The results of these two algorithms were also compared with a multi-layer perceptron and existing equations. PMID:25429460
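Roulette wheel parent selection, mentioned in the abstract, draws each individual with probability proportional to its fitness; a minimal sketch:

```python
import random

def roulette_wheel_select(population, fitnesses):
    """Pick one parent: spin a pointer over a wheel whose slices are
    proportional to each individual's fitness."""
    total = sum(fitnesses)
    r = random.uniform(0.0, total)
    acc = 0.0
    for individual, fitness in zip(population, fitnesses):
        acc += fitness
        if r <= acc:
            return individual
    return population[-1]   # guard against floating-point round-off

# Individual "b" is three times fitter, so it should be picked about
# three times as often as "a" over many draws.
random.seed(0)
picks = [roulette_wheel_select(["a", "b"], [1.0, 3.0])
         for _ in range(10_000)]
freq_b = picks.count("b") / len(picks)
```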
Venugopal, S.; Naik, V.K.
1991-10-01
A block based, automatic partitioning and scheduling methodology is presented for sparse matrix factorization on distributed memory systems. Using experimental results, this technique is analyzed for communication and load imbalance overhead. To study the performance effects, these overheads were compared with those obtained from a straightforward 'wrap mapped' column assignment scheme. All experimental results were obtained using test sparse matrices from the Harwell-Boeing data set. The results show that there is a communication and load balance tradeoff. The block based method results in lower communication cost whereas the wrap mapped scheme gives better load balance.
Effects of partitioning and scheduling sparse matrix factorization on communication and load balance
NASA Technical Reports Server (NTRS)
Venugopal, Sesh; Naik, Vijay K.
1991-01-01
A block based, automatic partitioning and scheduling methodology is presented for sparse matrix factorization on distributed memory systems. Using experimental results, this technique is analyzed for communication and load imbalance overhead. To study the performance effects, these overheads were compared with those obtained from a straightforward 'wrap mapped' column assignment scheme. All experimental results were obtained using test sparse matrices from the Harwell-Boeing data set. The results show that there is a communication and load balance tradeoff. The block based method results in lower communication cost whereas the wrap mapped scheme gives better load balance.
NASA Astrophysics Data System (ADS)
Furuichi, M.; Nishiura, D.
2015-12-01
Fully Lagrangian methods such as Smoothed Particle Hydrodynamics (SPH) and the Discrete Element Method (DEM) have been widely used to solve continuum and particle motions in the computational geodynamics field. These mesh-free methods are suitable for problems with complex geometries and boundaries. In addition, their Lagrangian nature allows non-diffusive advection, useful for tracking history-dependent properties (e.g. rheology) of the material. These potential advantages over mesh-based methods offer effective numerical applications to geophysical flow and tectonic processes, for example, tsunamis with free surfaces and floating bodies, magma intrusion with fracture of rock, and shear-zone pattern generation in granular deformation. In order to investigate such geodynamical problems with particle-based methods, millions to billions of particles are required for a realistic simulation. Parallel computing is therefore important for handling such huge computational cost. An efficient parallel implementation of SPH and DEM methods is, however, known to be difficult, especially on distributed-memory architectures. Lagrangian methods inherently suffer from workload imbalance when parallelized over domains fixed in space, because particles move around and workloads change during the simulation. Dynamic load balancing is therefore a key technique for performing large-scale SPH and DEM simulations. In this work, we present a parallel implementation technique for SPH and DEM utilizing dynamic load balancing algorithms, targeting high-resolution simulations over large domains on massively parallel supercomputer systems. Our method treats the imbalance in execution time across MPI processes as the nonlinear term of the parallel domain decomposition and minimizes it with a Newton-like iteration method. In order to perform flexible domain decomposition in space, the slice-grid algorithm is used. Numerical tests show that our
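The idea of adjusting slice boundaries from measured execution times can be sketched in one dimension (an illustrative proportional relaxation step, not the authors' Newton-like iteration):

```python
def rebalance_slices(boundaries, times):
    """One relaxation step of a 1-D slice-grid load balancer.

    boundaries: sorted cut positions [x0, x1, ..., xP] over the domain.
    times: measured execution time of each of the P slices.
    New slice widths are made proportional to width/time, i.e. to the
    estimated processing speed of each region, so slow slices shrink."""
    widths = [b - a for a, b in zip(boundaries, boundaries[1:])]
    speeds = [w / t for w, t in zip(widths, times)]  # domain per second
    total_speed = sum(speeds)
    span = boundaries[-1] - boundaries[0]
    new = [boundaries[0]]
    for s in speeds:
        new.append(new[-1] + span * s / total_speed)
    new[-1] = boundaries[-1]  # pin the outer boundary exactly
    return new
```

Applied repeatedly as particles migrate, each step moves work away from overloaded processes while exchanging data only with next neighbours, as in the slice-grid scheme the abstract describes.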
Thulasidasan, Sunil; Kasiviswanathan, Shiva; Eidenbenz, Stephan; Romero, Philip
2010-01-01
We re-examine the problem of load balancing in conservatively synchronized parallel, discrete-event simulations executed on high-performance computing clusters, focusing on simulations where computational and messaging load tend to be spatially clustered. Such domains are frequently characterized by the presence of geographic 'hot-spots' - regions that generate significantly more simulation events than others. Examples of such domains include simulation of urban regions, transportation networks and networks where interaction between entities is often constrained by physical proximity. Noting that in conservatively synchronized parallel simulations, the speed of execution of the simulation is determined by the slowest (i.e., most heavily loaded) simulation process, we study different partitioning strategies in achieving equitable processor-load distribution in domains with spatially clustered load. In particular, we study the effectiveness of partitioning via spatial scattering to achieve optimal load balance. In this partitioning technique, nearby entities are explicitly assigned to different processors, thereby scattering the load across the cluster. This is motivated by two observations, namely, (i) since load is spatially clustered, spatial scattering should, intuitively, spread the load across the compute cluster, and (ii) in parallel simulations, equitable distribution of CPU load is a greater determinant of execution speed than message passing overhead. Through large-scale simulation experiments - both of abstracted and real simulation models - we observe that scatter partitioning, even with its greatly increased messaging overhead, significantly outperforms more conventional spatial partitioning techniques that seek to reduce messaging overhead. Further, even if hot-spots change over the course of the simulation, if the underlying feature of spatial clustering is retained, load continues to be balanced with spatial scattering leading us to the observation that
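Scatter partitioning's core rule, striding through entities in spatial order so that neighbours land on different processors, can be illustrated with a small sketch (the sorting key and round-robin rule are illustrative, not the paper's exact scheme):

```python
def scatter_partition(entities, coords, nprocs):
    """Assign spatially nearby entities to different processors by
    walking entities in spatial order and dealing them out round-robin.

    entities: list of ids; coords: matching (x, y) positions."""
    order = sorted(range(len(entities)), key=lambda i: coords[i])
    assignment = {}
    for rank, idx in enumerate(order):
        assignment[entities[idx]] = rank % nprocs  # scatter the cluster
    return assignment
```

A hot-spot, being a contiguous region in space, is thereby spread across all processors, at the cost of extra inter-processor messaging between neighbours.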
Dynamic Load Balancing for Grid Partitioning on a SP-2 Multiprocessor: A Framework
NASA Technical Reports Server (NTRS)
Sohn, Andrew; Simon, Horst; Lasinski, T. A. (Technical Monitor)
1994-01-01
Computational requirements of full scale computational fluid dynamics change as computation progresses on a parallel machine. The change in computational intensity causes workload imbalance of processors, which in turn requires a large amount of data movement at runtime. If parallel CFD is to be successful on a parallel or massively parallel machine, balancing of the runtime load is indispensable. Here a framework is presented for dynamic load balancing for CFD applications, called Jove. One processor is designated as the decision maker, Jove, while the others are assigned to computational fluid dynamics. Processors running CFD send flags to Jove in a predetermined number of iterations to initiate load balancing. Jove starts working on load balancing while other processors continue working with the current data and load distribution. Jove goes through several steps to decide if the new data should be taken, including preliminary evaluation, partitioning, processor reassignment, cost evaluation, and decision. Jove running on a single IBM SP2 node has been completely implemented. Preliminary experimental results show that the Jove approach to dynamic load balancing can be effective for full scale grid partitioning on the target machine IBM SP2.
Coupling Algorithms for Calculating Sensitivities of Population Balances
Man, P. L. W.; Kraft, M.; Norris, J. R.
2008-09-01
We introduce a new class of stochastic algorithms for calculating parametric derivatives of the solution of the space-homogeneous Smoluchowski coagulation equation. Currently, it is very difficult to produce low-variance estimates of these derivatives in reasonable amounts of computational time through the use of stochastic methods. These new algorithms consider a central difference estimator of the parametric derivative, which is calculated by evaluating the coagulation equation at two different parameter values simultaneously, achieving variance reduction by maximising the covariance between them. The two different coupling strategies ('Single' and 'Double') have been compared to the case when there is no coupling ('Independent'). Both coupling algorithms converge, and the Double coupling is the most 'efficient' algorithm. For the numerical example chosen, we obtain a factor of about 100 in efficiency in the best case (small system evolution time and small parameter perturbation).
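The variance-reduction idea, evaluating the stochastic system at both parameter values with correlated randomness so the noise in the central difference largely cancels, can be sketched with common random numbers (an illustrative stand-in for the paper's Single/Double coupling strategies; the seeding scheme is an assumption):

```python
import random

def coupled_central_difference(simulate, p, h, seed, n_runs=1000):
    """Central-difference estimate of d E[simulate(p)] / dp.

    The evaluations at p+h and p-h share a random stream per run, so
    their noise is correlated and cancels in the difference."""
    diffs = []
    for run in range(n_runs):
        rng_plus = random.Random(seed * 100003 + run)   # same seed:
        rng_minus = random.Random(seed * 100003 + run)  # coupled runs
        diffs.append((simulate(p + h, rng_plus)
                      - simulate(p - h, rng_minus)) / (2 * h))
    return sum(diffs) / n_runs
```

With independent streams the estimator's variance grows like 1/h²; with coupling the shared noise subtracts out, which is the covariance-maximisation effect the abstract describes.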
Sivakumar, B.; Bhalaji, N.; Sivakumar, D.
2014-01-01
In mobile ad hoc networks connectivity is always an issue of concern. Due to dynamism in the behavior of mobile nodes, efficiency shall be achieved only with the assumption of good network infrastructure. Presence of critical links results in deterioration which should be detected in advance to retain the prevailing communication setup. This paper discusses a short survey on the specialized algorithms and protocols related to energy efficient load balancing for critical link detection in the recent literature. This paper also suggests a machine learning based hybrid power-aware approach for handling critical nodes via load balancing. PMID:24790546
The work/exchange model: A generalized approach to dynamic load balancing
Wikstrom, M.C.
1991-12-20
A crucial concern in software development is reducing program execution time. Parallel processing is often used to meet this goal. However, parallel processing efforts can lead to many pitfalls and problems. One such problem is to distribute the workload among processors in such a way that minimum execution time is obtained. The common approach is to use a load balancer to distribute equal or nearly equal quantities of workload on each processor. Unfortunately, this approach relies on a naive definition of load imbalance and often fails to achieve the desired goal. A more sophisticated definition should account for the effects of additional factors including communication delay costs, network contention, and architectural issues. Consideration of additional factors led us to the realization that optimal load distribution does not always result from equal load distribution. In this dissertation, we tackle the difficult problem of defining load imbalance. This is accomplished through the development of a parallel program model called the Generalized Work/Exchange Model. Associated with the model are equations for a restricted set of deterministically balanced programs that characterize idle time, elapsed time, and potential speedup. With the aid of the model, several common myths about load imbalance are exposed. A useful application called a load balancer enhancer is also presented, which is applicable to the more general, quasi-static load-unbalanced program.
Accelerating Multiple Compound Comparison Using LINGO-Based Load-Balancing Strategies on Multi-GPUs
Lin, Chun-Yuan; Wang, Chung-Hung; Hung, Che-Lun; Lin, Yu-Shiang
2015-01-01
Compound comparison is an important task for the computational chemistry. By the comparison results, potential inhibitors can be found and then used for the pharmacy experiments. The time complexity of a pairwise compound comparison is O(n2), where n is the maximal length of compounds. In general, the length of compounds is tens to hundreds, and the computation time is small. However, more and more compounds have been synthesized and extracted now, even more than tens of millions. Therefore, it still will be time-consuming when comparing with a large amount of compounds (seen as a multiple compound comparison problem, abbreviated to MCC). The intrinsic time complexity of MCC problem is O(k2n2) with k compounds of maximal length n. In this paper, we propose a GPU-based algorithm for MCC problem, called CUDA-MCC, on single- and multi-GPUs. Four LINGO-based load-balancing strategies are considered in CUDA-MCC in order to accelerate the computation speed among thread blocks on GPUs. CUDA-MCC was implemented by C+OpenMP+CUDA. CUDA-MCC achieved 45 times and 391 times faster than its CPU version on a single NVIDIA Tesla K20m GPU card and a dual-NVIDIA Tesla K20m GPU card, respectively, under the experimental results. PMID:26491652
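A LINGO profile is the multiset of length-q substrings of a compound's SMILES string; a minimal pairwise comparison (ignoring the digit and charge normalisation steps real LINGO applies) could look like:

```python
from collections import Counter

def lingo_tanimoto(smiles_a, smiles_b, q=4):
    """Tanimoto similarity over LINGOs: the multiset of all length-q
    substrings of a SMILES string (simplified sketch)."""
    la = Counter(smiles_a[i:i + q] for i in range(len(smiles_a) - q + 1))
    lb = Counter(smiles_b[i:i + q] for i in range(len(smiles_b) - q + 1))
    common = sum((la & lb).values())  # multiset intersection
    total = sum((la | lb).values())   # multiset union
    return common / total if total else 0.0
```

Because each pairwise score touches O(n) substrings of each compound, the full MCC problem over k compounds costs O(k²n²), which is the term the GPU load-balancing strategies attack.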
STAR load balancing and tiered-storage infrastructure strategy for ultimate db access
NASA Astrophysics Data System (ADS)
Arkhipkin, D.; Lauret, J.; Betts, W.; Didenko, L.; Van Buren, G.
2011-12-01
In recent years, the STAR experiment's database demands have grown in accord not only with simple facility growth, but also with a growing physics program. In addition to the accumulated metadata from a decade of operations, refinements to detector calibrations force user analysis to access database information post data production. Users may access any year's data at any point in time, causing a near random access of the metadata queried, contrary to time-organized production cycles. Moreover, complex online event selection algorithms created a query scarcity ("sparsity") scenario for offline production further impacting performance. Fundamental changes in our hardware approach were hence necessary to improve query speed. Initial strategic improvements were focused on developing fault-tolerant, load-balanced access to a multi-slave infrastructure. Beyond that, we explored, tested and quantified the benefits of introducing a Tiered storage architecture composed of conventional drives, solid-state disks, and memory-resident databases as well as leveraging the use of smaller database services fitting in memory. The results of our extensive testing in real life usage are presented.
NASA Technical Reports Server (NTRS)
Krasteva, Denitza T.
1998-01-01
Multidisciplinary design optimization (MDO) for large-scale engineering problems poses many challenges (e.g., the design of an efficient concurrent paradigm for global optimization based on disciplinary analyses, expensive computations over vast data sets, etc.). This work focuses on the application of distributed schemes for massively parallel architectures to MDO problems, as a tool for reducing computation time and solving larger problems. The specific problem considered here is configuration optimization of a high speed civil transport (HSCT), and the efficient parallelization of the embedded paradigm for reasonable design space identification. Two distributed dynamic load balancing techniques (random polling and global round robin with message combining) and two necessary termination detection schemes (global task count and token passing) were implemented and evaluated in terms of effectiveness and scalability to large problem sizes and a thousand processors. The effect of certain parameters on execution time was also inspected. Empirical results demonstrated stable performance and effectiveness for all schemes, and the parametric study showed that the selected algorithmic parameters have a negligible effect on performance.
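A toy, sequential simulation of the random-polling idea (the real schemes run asynchronously over message passing, with separate termination detection) might look like:

```python
import random

def random_polling(tasks_per_proc, rng):
    """Toy simulation of random-polling dynamic load balancing: an
    idle processor polls a random peer and steals half of its
    remaining tasks. Returns the tasks in execution order."""
    queues = [list(q) for q in tasks_per_proc]
    done = []
    while any(queues):
        for me, q in enumerate(queues):
            if q:
                done.append(q.pop())        # execute one local task
            else:                           # idle: poll a random victim
                victim = rng.randrange(len(queues))
                steal = len(queues[victim]) // 2
                for _ in range(steal):
                    q.append(queues[victim].pop())
    return done
```

Global round robin differs only in how the victim is chosen (a shared counter instead of a random draw); either way, every task is executed exactly once.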
Load-balancing techniques for a parallel electromagnetic particle-in-cell code
PLIMPTON,STEVEN J.; SEIDEL,DAVID B.; PASIK,MICHAEL F.; COATS,REBECCA S.
2000-01-01
QUICKSILVER is a 3-d electromagnetic particle-in-cell simulation code developed and used at Sandia to model relativistic charged particle transport. It models the time-response of electromagnetic fields and low-density-plasmas in a self-consistent manner: the fields push the plasma particles and the plasma current modifies the fields. Through an LDRD project a new parallel version of QUICKSILVER was created to enable large-scale plasma simulations to be run on massively-parallel distributed-memory supercomputers with thousands of processors, such as the Intel Tflops and DEC CPlant machines at Sandia. The new parallel code implements nearly all the features of the original serial QUICKSILVER and can be run on any platform which supports the message-passing interface (MPI) standard as well as on single-processor workstations. This report describes basic strategies useful for parallelizing and load-balancing particle-in-cell codes, outlines the parallel algorithms used in this implementation, and provides a summary of the modifications made to QUICKSILVER. It also highlights a series of benchmark simulations which have been run with the new code that illustrate its performance and parallel efficiency. These calculations have up to a billion grid cells and particles and were run on thousands of processors. This report also serves as a user manual for people wishing to run parallel QUICKSILVER.
The Balanced Cross-Layer Design Routing Algorithm in Wireless Sensor Networks Using Fuzzy Logic.
Li, Ning; Martínez, José-Fernán; Hernández Díaz, Vicente
2015-01-01
Recently, the cross-layer design for the wireless sensor network communication protocol has become more and more important and popular. Considering the disadvantages of the traditional cross-layer routing algorithms, in this paper we propose a new fuzzy logic-based routing algorithm, named the Balanced Cross-layer Fuzzy Logic (BCFL) routing algorithm. In BCFL, we use the cross-layer parameters' dispersion as the fuzzy logic inference system inputs. Moreover, we give each cross-layer parameter a dynamic weight according to the value of its dispersion. To obtain a balanced solution, a parameter whose dispersion is large will have a small weight, and vice versa. In order to compare it with the traditional cross-layer routing algorithms, BCFL is evaluated through extensive simulations. The simulation results show that the new routing algorithm can handle the multiple constraints without increasing the complexity of the algorithm and can achieve the most balanced performance in selecting the next-hop relay node. Moreover, the Balanced Cross-layer Fuzzy Logic routing algorithm can adapt to dynamic changes of the network conditions and topology effectively. PMID:26266412
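The inverse-dispersion weighting rule can be sketched as follows (a plain statistical sketch of the idea; BCFL feeds these weights into a fuzzy inference system rather than using them directly):

```python
def dispersion_weights(samples_by_param):
    """Dynamic per-parameter weights, inversely related to dispersion:
    a parameter whose observed values are spread widely gets a small
    weight, and a stable parameter gets a large one."""
    weights = {}
    for name, values in samples_by_param.items():
        mean = sum(values) / len(values)
        var = sum((v - mean) ** 2 for v in values) / len(values)
        weights[name] = 1.0 / (1.0 + var ** 0.5)  # inverse dispersion
    total = sum(weights.values())
    return {k: w / total for k, w in weights.items()}  # normalise to 1
```

This is what "balanced" means in the abstract: no single highly variable cross-layer metric can dominate the next-hop decision.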
An evaluation of inside surface heat balance models for cooling load calculations
Liesen, R.J.; Pedersen, C.O.
1997-12-31
The heat balance method is a fundamental procedure that can be used for a specified control volume to describe building physics. With a better understanding of building physics and the cost-effectiveness of computers, these types of procedures are accessible to all practicing engineers. The heat balance method describes the processes using the three fundamental modes of heat transfer: conduction, convection, and radiation. The control volumes naturally divide the building processes into an outside balance, an inside balance, an air balance, and conduction through the building elements. This allows the building heat balance to be solved in a number of fundamental ways. This paper looks at the general formulation of the inside surface heat balance from the conduction through the building elements to the radiant exchange and convection to the air in the zone. Development of many radiant exchange models is shown; these models range from the exact solutions using uniform radiosity networks and exact view factors to mean radiant temperature (MRT) and area-weighted view factors. These radiant exchange models are directly compared to each other for a simple zone with varying aspect ratios. The radiant exchange models are then compared to determine their effect on the cooling load. Finally, other parameters that affect the inside surface heat balance are investigated to determine their sensitivity to the cooling load.
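The inside-surface balance couples the three heat transfer modes at one node; a steady-state, linearised sketch (an MRT-style radiant model with assumed coefficients, far simpler than the radiosity-network models the paper compares) is:

```python
def inside_surface_temperature(t_air, t_mrt, t_outside_node,
                               h_conv, h_rad, u_cond):
    """Solve the inside-surface heat balance for surface temperature:
    conduction in = convection out + radiation out, i.e.
    u_cond*(t_outside_node - t_s)
        = h_conv*(t_s - t_air) + h_rad*(t_s - t_mrt)."""
    return (u_cond * t_outside_node + h_conv * t_air + h_rad * t_mrt) \
        / (u_cond + h_conv + h_rad)
```

Swapping the radiant term (exact view factors vs. MRT vs. area-weighted factors) changes t_mrt and h_rad, and through them the convective gain to the zone air, which is how the choice of radiant model propagates into the cooling load.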
A massively parallel adaptive finite element method with dynamic load balancing
Devine, K.D.; Flaherty, J.E.; Wheat, S.R.; Maccabe, A.B.
1993-05-01
We construct massively parallel, adaptive finite element methods for the solution of hyperbolic conservation laws in one and two dimensions. Spatial discretization is performed by a discontinuous Galerkin finite element method using a basis of piecewise Legendre polynomials. Temporal discretization utilizes a Runge-Kutta method. Dissipative fluxes and projection limiting prevent oscillations near solution discontinuities. The resulting method is of high order and may be parallelized efficiently on MIMD computers. We demonstrate parallel efficiency through computations on a 1024-processor nCUBE/2 hypercube. We also present results using adaptive p-refinement to reduce the computational cost of the method. We describe tiling, a dynamic, element-based data migration system. Tiling dynamically maintains global load balance in the adaptive method by overlapping neighborhoods of processors, where each neighborhood performs local load balancing. We demonstrate the effectiveness of the dynamic load balancing with adaptive p-refinement examples.
A model for resource-aware load balancing on heterogeneous clusters.
Devine, Karen Dragon; Flaherty, Joseph E.; Teresco, James Douglas; Gervasio, Luis G.; Faik, Jamal
2005-05-01
We address the problem of partitioning and dynamic load balancing on clusters with heterogeneous hardware resources. We propose DRUM, a model that encapsulates hardware resources and their interconnection topology. DRUM provides monitoring facilities for dynamic evaluation of communication, memory, and processing capabilities. Heterogeneity is quantified by merging the information from the monitors to produce a scalar number called 'power.' This power allows DRUM to be used easily by existing load-balancing procedures such as those in the Zoltan Toolkit while placing minimal burden on application programmers. We demonstrate the use of DRUM to guide load balancing in the adaptive solution of a Laplace equation on a heterogeneous cluster. We observed a significant reduction in execution time compared to traditional methods.
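Collapsing heterogeneous monitor readings into one scalar and allocating work in proportion to it can be sketched as follows (the weights and the linear combination are illustrative assumptions; DRUM derives its power values from a fuller resource model):

```python
def node_power(cpu_rate, mem_avail, comm_bandwidth,
               w_cpu=0.6, w_mem=0.2, w_comm=0.2):
    """Merge a node's monitored capabilities into one scalar 'power'."""
    return w_cpu * cpu_rate + w_mem * mem_avail + w_comm * comm_bandwidth

def share_of_load(powers):
    """Each node receives a fraction of work proportional to its power."""
    total = sum(powers)
    return [p / total for p in powers]
```

The appeal of a single scalar is exactly what the abstract notes: any existing partitioner that accepts per-processor target weights (e.g. in the Zoltan Toolkit) can consume it unchanged.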
Game and Balance Multicast Architecture Algorithms for Sensor Grid
Fan, Qingfeng; Wu, Qiongli; Magoulés, Frèdèric; Xiong, Naixue; Vasilakos, Athanasios V.; He, Yanxiang
2009-01-01
We propose a scheme to attain shorter multicast delay and higher efficiency in the data transfer of a sensor grid. Our scheme, in one cluster, seeks the central node and calculates the space and data weight vectors. Then we try to find a new vector composed as a linear combination of the two old ones. We use the equal correlation coefficient between the new and old vectors to find the point of game and balance of the space and data factors, build a simple binary equation, seek the linear parameters, and generate a least-weight path tree. We handled the issue in a quantitative way instead of a qualitative way. Based on this idea, we considered the scheme from both the space and data factors, built the mathematical model, set up the game and balance relationship, and finally resolved the linear indexes, according to which we improved the transmission efficiency of the sensor grid. Extended simulation results indicate that our scheme attains lower average multicast delay and fewer links used compared with other well-known existing schemes. PMID:22399992
A Framework for Load Balancing of Tensor Contraction Expressions via Dynamic Task Partitioning
Lai, Pai-Wei; Stock, Kevin; Rajbhandari, Samyam; Krishnamoorthy, Sriram; Sadayappan, Ponnuswamy
2013-11-17
In this paper, we introduce the Dynamic Load-balanced Tensor Contractions (DLTC), a domain-specific library for efficient task-parallel execution of tensor contraction expressions, a class of computation encountered in quantum chemistry and physics. Our framework decomposes each contraction into smaller units of tasks, represented by an abstraction referred to as iterators. We exploit an extra level of parallelism by having tasks across independent contractions executed concurrently through a dynamic load balancing runtime. We demonstrate the improved performance, scalability, and flexibility for the computation of tensor contraction expressions on parallel computers using examples from coupled cluster methods.
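The decomposition into independent output tasks fed to a dynamic scheduler can be illustrated with a toy matrix contraction (a stand-in for DLTC's iterator abstraction, using a thread pool in place of its runtime):

```python
from concurrent.futures import ThreadPoolExecutor
import itertools

def contraction_tasks(rows, cols, tile=2):
    """Enumerate independent output tiles of C[i,j] += A[i,k]*B[k,j]."""
    yield from itertools.product(range(0, rows, tile),
                                 range(0, cols, tile))

def run_contraction(A, B, tile=2, workers=4):
    n, m, p = len(A), len(B), len(B[0])
    C = [[0.0] * p for _ in range(n)]

    def do_tile(task):
        i0, j0 = task  # each tile writes a disjoint block of C
        for i in range(i0, min(i0 + tile, n)):
            for j in range(j0, min(j0 + tile, p)):
                C[i][j] = sum(A[i][k] * B[k][j] for k in range(m))

    # tiles from one (or several) contractions are balanced dynamically
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(do_tile, contraction_tasks(n, p, tile)))
    return C
```

Because the tiles are independent, tasks from different contractions can be interleaved in the same pool, which is the extra level of parallelism the abstract exploits.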
Prediction Interval Development for Wind-Tunnel Balance Check-Loading
NASA Technical Reports Server (NTRS)
Landman, Drew; Toro, Kenneth G.; Commo, Sean A.; Lynn, Keith C.
2014-01-01
Results from the Facility Analysis Verification and Operational Reliability project revealed a critical gap in capability in ground-based aeronautics research applications. Without a standardized process for check-loading the wind-tunnel balance or the model system, the quality of the aerodynamic force data collected varied significantly between facilities. A prediction interval is required in order to confirm a check-loading. The prediction interval provides an expected upper and lower bound on the balance load prediction at a given confidence level. A method has been developed which accounts for sources of variability due to calibration and check-load application. The prediction interval method of calculation and a case study demonstrating its use are provided. Validation of the methods is demonstrated for the case study based on the probability of capture of confirmation points.
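The confirmation test can be sketched numerically (a deliberately simplified version: a normal quantile instead of Student's t, with the calibration-model leverage folded into the residual standard deviation, neither of which matches the paper's full method):

```python
from statistics import NormalDist

def prediction_interval(predicted_load, residual_std, n_cal,
                        confidence=0.95):
    """Two-sided prediction interval for one new confirmation load."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    half_width = z * residual_std * (1 + 1 / n_cal) ** 0.5
    return predicted_load - half_width, predicted_load + half_width

def captured(measured, interval):
    """A check-loading is confirmed if the applied load falls inside."""
    lo, hi = interval
    return lo <= measured <= hi
```

Counting how often confirmation points fall inside their intervals, and comparing that rate to the stated confidence level, is the "probability of capture" validation the abstract mentions.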
Madenjian, Charles P.; David, Solomon R.; Pothoven, Steven A.
2012-01-01
We evaluated the performance of the Wisconsin bioenergetics model for lake trout Salvelinus namaycush that were fed ad libitum in laboratory tanks under regimes of low activity and high activity. In addition, we compared model performance under two different model algorithms: (1) balancing the lake trout energy budget on day t based on lake trout energy density on day t and (2) balancing the lake trout energy budget on day t based on lake trout energy density on day t + 1. Results indicated that the model significantly underestimated consumption for both inactive and active lake trout when algorithm 1 was used and that the degree of underestimation was similar for the two activity levels. In contrast, model performance substantially improved when using algorithm 2, as no detectable bias was found in model predictions of consumption for inactive fish and only a slight degree of overestimation was detected for active fish. The energy budget was accurately balanced by using algorithm 2 but not by using algorithm 1. Based on the results of this study, we recommend the use of algorithm 2 to estimate food consumption by fish in the field. Our study results highlight the importance of accurately accounting for changes in fish energy density when balancing the energy budget; furthermore, these results have implications for the science of evaluating fish bioenergetics model performance and for more accurate estimation of food consumption by fish in the field when fish energy density undergoes relatively rapid changes.
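The difference between the two algorithms is only which day's energy density values the growth term; a stripped-down energy budget (ignoring egestion, excretion, and SDA, which real bioenergetics models include) makes the contrast concrete:

```python
def consumption_estimate(weight, energy_density, metabolism, algorithm=2):
    """Daily consumption needed to balance a fish's energy budget:
    consumption = change in stored energy + metabolic cost.

    Algorithm 1 values the day's growth at day-t energy density;
    algorithm 2 values it at day t+1. weight and energy_density are
    lists indexed by day; metabolism has one entry per day."""
    consumption = []
    for t in range(len(weight) - 1):
        growth = weight[t + 1] - weight[t]  # grams gained on day t
        ed = energy_density[t + 1] if algorithm == 2 else energy_density[t]
        consumption.append(growth * ed + metabolism[t])  # joules
    return consumption
```

When energy density is constant the two algorithms agree; they diverge, as the study found, exactly when energy density changes rapidly between days.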
NASA Technical Reports Server (NTRS)
Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.
1989-01-01
Computer vision systems employ a sequence of vision algorithms in which the output of an algorithm is the input of the next algorithm in the sequence. Algorithms that constitute such systems exhibit vastly different computational characteristics, and therefore, require different data decomposition techniques and efficient load balancing techniques for parallel implementation. However, since the input data for a task is produced as the output data of the previous task, this information can be exploited to perform knowledge based data decomposition and load balancing. Presented here are algorithms for a motion estimation system. The motion estimation is based on the point correspondence between the involved images which are a sequence of stereo image pairs. Researchers propose algorithms to obtain point correspondences by matching feature points among stereo image pairs at any two consecutive time instants. Furthermore, the proposed algorithms employ non-iterative procedures, which results in saving considerable amounts of computation time. The system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from consecutive time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters.
Genetic Algorithm Used for Load Shedding Based on Sensitivity to Enhance Voltage Stability
NASA Astrophysics Data System (ADS)
Titare, L. S.; Singh, P.; Arya, L. D.
2014-12-01
This paper presents an algorithm to calculate optimum load shedding with voltage stability consideration, based on the sensitivity of a proximity indicator, using a genetic algorithm (GA). A Schur's-inequality-based proximity indicator of the load flow Jacobian has been selected, which indicates the system state. The load flow Jacobian of the system is obtained using the continuation power flow method. If reactive power and active power rescheduling are exhausted, load shedding is the last line of defense to maintain the operational security of the system. Load buses for load shedding have been selected on the basis of the sensitivity of the proximity indicator. The load bus having the largest sensitivity is selected for load shedding. The proposed algorithm predicts load bus rank and the optimum load to be shed at the load buses. The algorithm accounts for inequality constraints not only in present operating conditions, but also for the predicted next-interval load (with load shedding). The developed algorithm has been implemented on the IEEE 6-bus system. Results have been compared with those obtained using Teaching-Learning-Based Optimization (TLBO), particle swarm optimization (PSO) and its variant.
Design and implementation of web server soft load balancing in small and medium-sized enterprise
NASA Astrophysics Data System (ADS)
Yan, Liu
2011-12-01
With the expansion of business scale, small and medium-sized enterprises have begun to use information platforms to improve their management and competitiveness, and the web server has become the core factor restricting an enterprise's informatization construction. This paper puts forward a suitable design scheme for web server soft load balancing in small and medium-sized enterprises, and proves it effective through experiment.
Portable Parallel Programming for the Dynamic Load Balancing of Unstructured Grid Applications
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Das, Sajal K.; Harvey, Daniel; Oliker, Leonid
1999-01-01
The ability to dynamically adapt an unstructured grid (or mesh) is a powerful tool for solving computational problems with evolving physical features; however, an efficient parallel implementation is rather difficult, particularly from the viewpoint of portability on various multiprocessor platforms. We address this problem by developing PLUM, an automatic and architecture-independent framework for adaptive numerical computations in a message-passing environment. Portability is demonstrated by comparing performance on an SP2, an Origin2000, and a T3E, without any code modifications. We also present a general-purpose load balancer that utilizes symmetric broadcast networks (SBN) as the underlying communication pattern, with the goal of providing a global view of system loads across processors. Experiments on an SP2 and an Origin2000 demonstrate the portability of our approach, which achieves superb load balance at the cost of minimal extra overhead.
Parallelization and load balancing of a comprehensive atmospheric chemistry transport model
NASA Astrophysics Data System (ADS)
Elbern, Hendrik
Chemistry transport models are generally claimed to be well suited for massively parallel processing on distributed memory architectures, since the arithmetic-to-communication ratio is usually high. However, this observation proves insufficient to account for an efficient parallel performance as the complexity of the model increases. Modeling the local state of the atmosphere follows very different branches of the modules' code, and far greater differences in the computational work load, and consequently in the runtime of individual processors, occur during a time step than reported for meteorological models. Variable emissions, changes in actinic fluxes, and all processes associated with cloud modeling are highly variable in time and space, and are identified as inducing large load imbalances which severely affect the parallel efficiency. This is all the more so when the model domain encompasses heterogeneous meteorological or regional regimes, which impinge dissimilarly on simulations of atmospheric chemistry processes. These conditions hold for the EURAD model applied in this study, which covers the European continental scale as its integration domain. Based on a master-worker configuration with a horizontal grid partitioning approach, a method is proposed where the integration domain of each processor is locally adjusted to accommodate load imbalances. This ensures a minimal communication volume and data exchange only with the nearest neighbors. The interior boundary adjustments of the processors are combined with the routine boundary exchange which is required each time step anyway. Two dynamic load balancing schemes were implemented and compared against a conventional equal-area partition and a static load balancing scheme. The methods are devised for massively parallel distributed memory computers of both Single and Multiple Instruction stream, Multiple Data stream (SIMD, MIMD) types. A midsummer episode of highly elevated ozone concentrations
Gammon - A load balancing strategy for local computer systems with multiaccess networks
NASA Technical Reports Server (NTRS)
Baumgartner, Katherine M.; Wah, Benjamin W.
1989-01-01
Consideration is given to an efficient load-balancing strategy, Gammon (global allocation from maximum to minimum in constant time), for distributed computing systems connected by multiaccess local area networks. The broadcast capability of these networks is utilized to implement an identification procedure at the applications level for the maximally and the minimally loaded processors. The search technique has an average overhead which is independent of the number of participating stations. An implementation of Gammon on a network of Sun workstations is described. Its performance is found to be better than that of other known methods.
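The core Gammon operation, moving work from the maximally to the minimally loaded processor, can be sketched as a toy as follows. In the real strategy the two stations are identified at the applications level via the network's broadcast capability; here a plain max/min over a load table stands in for that search.

```python
def gammon_step(loads, unit=1.0):
    """One global-allocation step: move one unit of work from the maximally
    to the minimally loaded station, if they differ by more than a unit."""
    src = max(loads, key=loads.get)
    dst = min(loads, key=loads.get)
    if loads[src] - loads[dst] <= unit:
        return loads, False          # already balanced to within one unit
    moved = dict(loads)
    moved[src] -= unit
    moved[dst] += unit
    return moved, True
```

Repeating the step until it reports no change drives all stations to within one work unit of each other.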
NASA Astrophysics Data System (ADS)
Engelder, Terry; Fischer, Mark P.
1996-05-01
Using the Griffith energy-balance concept to model joint propagation in the brittle crust, two laboratory loading configurations serve as appropriate analogs for in situ conditions: the dead-weight load and the fixed-grips load. The distinction between these loading configurations is based largely on whether or not a loaded boundary moves as a joint grows. During displacement of a loaded boundary, the energy necessary for joint propagation comes from work by the dead weight (i.e., a remote stress). When the loaded boundary remains stationary, as if held by rigid grips, the energy for joint propagation develops upon release of elastic strain energy within the rock mass. These two generic loading configurations serve as models for four common natural loading configurations: a joint-normal load; a thermoelastic load; a fluid load; and an axial load. Each loading configuration triggers a different joint-driving mechanism, each of which is the release of energy through elastic strain and/or work. The four mechanisms for energy release are joint-normal stretching, elastic contraction, poroelastic contraction under either a constant fluid drive or fluid decompression, and axial shortening, respectively. Geological circumstances favoring each of the joint-driving mechanisms are as follows. The release of work under joint-normal stretching occurs whenever layer-parallel extension keeps pace with slow or subcritical joint propagation. Under fixed grips, a substantial crack-normal tensile stress can accumulate by thermoelastic contraction until joint propagation is driven by the release of elastic strain energy. Within the Earth the rate of joint propagation dictates which of these two driving mechanisms operates, with faster propagation driven by release of strain energy. Like a dead-weight load acting to separate the joint walls, pore fluid exerts a traction on the interior of some joints. Joint propagation under fluid loading may be driven by a release of elastic strain
Development of a two wheeled self balancing robot with speech recognition and navigation algorithm
NASA Astrophysics Data System (ADS)
Rahman, Md. Muhaimin; Ashik-E-Rasul, Haq, Nowab. Md. Aminul; Hassan, Mehedi; Hasib, Irfan Mohammad Al; Hassan, K. M. Rafidh
2016-07-01
This paper discusses the modeling, construction, and development of the navigation algorithm of a two wheeled self balancing mobile robot in an enclosure. We discuss the design of the two main controller algorithms, namely PID algorithms, on the robot model. Simulation is performed in the SIMULINK environment. The controller is developed primarily for the self-balancing of the robot and also its positioning. As for navigation in an enclosure, a template matching algorithm is proposed for precise measurement of the robot position. The navigation system needs to be calibrated before the navigation process starts. Almost all of the earlier template matching algorithms that can be found in the open literature can only trace the robot, but the proposed algorithm can also locate the position of other objects in the enclosure, such as furniture, tables, etc. This enables the robot to know the exact location of every stationary object in the enclosure. Moreover, some additional features, such as Speech Recognition and Object Detection, are added. For Object Detection, the single-board computer Raspberry Pi is used. The system is programmed to analyze images captured via the camera, which are then processed through background subtraction, followed by active noise reduction.
NASA Astrophysics Data System (ADS)
Wang, J.; Samms, T.; Meier, C.; Simmons, L.; Miller, D.; Bathke, D.
2005-12-01
Spatial evapotranspiration (ET) is usually estimated by the Surface Energy Balance Algorithm for Land (SEBAL). The average accuracy of the algorithm is 85% on a daily basis and 95% on a seasonal basis. However, the accuracy of the algorithm varies from 67% to 95% for instantaneous ET estimates and, as reported in 18 studies, from 70% to 98% for 1 to 10-day ET estimates. There is a need to understand the sensitivity of the ET calculation with respect to the algorithm's variables and equations. With an increased understanding, information can be developed to improve the algorithm and to better identify the key variables and equations. A Modified Surface Energy Balance Algorithm for Land (MSEBAL) was developed and validated with data from a pecan orchard and an alfalfa field. The MSEBAL uses ground reflectance and temperature data from ASTER sensors along with humidity, wind speed, and solar radiation data from a local weather station. MSEBAL outputs hourly and daily ET at 90 m by 90 m resolution. A sensitivity analysis was conducted for MSEBAL on the ET calculation. In order to observe the sensitivity of the calculation to a particular variable, the value of that variable was changed while holding the magnitudes of the other variables constant. The key variables and equations to which the ET calculation is most sensitive were determined in this study. http://weather.nmsu.edu/pecans/SEBALFolder/San%20Francisco%20AGU%20meeting/ASensitivityAnalysisonMSE
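The one-at-a-time perturbation scheme described above can be sketched generically. This is a minimal illustration only: the toy energy-balance model and the 5% perturbation size are invented for the example and are not taken from MSEBAL.

```python
def sensitivity(model, base, delta=0.05):
    """One-at-a-time sensitivity: perturb each input by a fraction delta
    while holding the others constant, and record the relative change
    in the model output."""
    ref = model(**base)
    out = {}
    for name, value in base.items():
        perturbed = dict(base)
        perturbed[name] = value * (1.0 + delta)
        out[name] = (model(**perturbed) - ref) / ref
    return out
```

Applied to a toy residual-energy model `rn - g - h`, the result directly ranks which input the output is most sensitive to.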
Method of up-front load balancing for local memory parallel processors
NASA Technical Reports Server (NTRS)
Baffes, Paul Thomas (Inventor)
1990-01-01
In a parallel processing computer system with multiple processing units and shared memory, a method is disclosed for uniformly balancing the aggregate computational load in, and utilizing minimal memory by, a network having identical computations to be executed at each connection therein. Read-only and read-write memory are subdivided into a plurality of process sets, which function like artificial processing units. Said plurality of process sets is iteratively merged and reduced to the number of processing units without exceeding the balanced load. Said merger is based upon the value of a partition threshold, which is a measure of the memory utilization. The turnaround time and memory savings of the instant method are functions of the number of processing units available and the number of partitions into which the memory is subdivided. Typical results of the preferred embodiment yielded memory savings of sixty to seventy-five percent.
Valiant Load-Balancing: Building Networks That Can Support All Traffic Matrices
NASA Astrophysics Data System (ADS)
Zhang-Shen, Rui
This paper is a brief survey on how Valiant load-balancing (VLB) can be used to build networks that can efficiently and reliably support all traffic matrices. We discuss how to extend VLB to networks with heterogeneous capacities, how to protect against failures in a VLB network, and how to interconnect two VLB networks. For the readers' reference, included also is a list of work that uses VLB in various aspects of networking.
Zemková, E; Štefániková, G; Muyor, J M
2016-08-01
This study investigates test-retest reliability and diagnostic accuracy of the load release balance test under four varied conditions. Young, early and late middle-aged physically active and sedentary subjects performed the test over 2 testing sessions spaced 1 week apart while standing on either (1) a stable or (2) an unstable surface with (3) eyes open (EO) and (4) eyes closed (EC), respectively. Results identified that test-retest reliability of parameters of the load release balance test was good to excellent, with high values of ICC (0.78-0.92) and low SEM (7.1%-10.7%). The peak and the time to peak posterior center of pressure (CoP) displacement were significantly lower in physically active as compared to sedentary young adults (21.6% and 21.0%) and early middle-aged adults (22.0% and 20.9%) while standing on a foam surface with EO, and in late middle-aged adults on both unstable (25.6% and 24.5%) and stable support surfaces with EO (20.4% and 20.0%). The area under the ROC curve >0.80 for these variables indicates good discriminatory accuracy. Thus, these variables of the load release balance test measured under unstable conditions have the ability to differentiate between groups of physically active and sedentary adults as early as from 19 years of age. PMID:27203382
A Beacon Transmission Power Control Algorithm Based on Wireless Channel Load Forecasting in VANETs
Mo, Yuanfu; Yu, Dexin; Song, Jun; Zheng, Kun; Guo, Yajuan
2015-01-01
In a vehicular ad hoc network (VANET), the periodic exchange of single-hop status information broadcasts (beacon frames) produces channel loading, which causes channel congestion and induces information conflict problems. To guarantee fairness in beacon transmissions from each node and maximum network connectivity, adjustment of the beacon transmission power is an effective method for reducing and preventing channel congestion. In this study, the primary factors that influence wireless channel loading are selected to construct the KF-BCLF, a channel load forecasting algorithm that is based on a recursive Kalman filter and employs a multiple regression equation. By pre-adjusting the transmission power based on the forecasted channel load, the channel load was kept within a predefined range; therefore, channel congestion was prevented. Based on this method, the CLF-BTPC, a transmission power control algorithm, is proposed. To verify the KF-BCLF algorithm, a traffic survey method that involved the collection of floating car data along a major traffic road in Changchun City was employed. By comparing this forecast with the measured channel loads, the proposed KF-BCLF algorithm was proven to be effective. In addition, the CLF-BTPC algorithm is verified by simulating a section of eight-lane highway and a signal-controlled urban intersection. The results of the two verification processes indicate that this distributed CLF-BTPC algorithm can effectively control channel load, prevent channel congestion, and enhance the stability and robustness of wireless beacon transmission in a vehicular network. PMID:26571042
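The recursive Kalman-filter core of a scalar load forecaster can be sketched as below. This is a generic one-dimensional filter with a random-walk state model, not the paper's KF-BCLF (which additionally feeds a multiple regression equation), and the noise parameters are invented for illustration.

```python
class ScalarKalman:
    """Minimal 1-D recursive Kalman filter for tracking a slowly varying
    channel load; q is process noise, r is measurement noise."""
    def __init__(self, x0=0.0, p0=1.0, q=1e-3, r=0.1):
        self.x, self.p, self.q, self.r = x0, p0, q, r

    def update(self, z):
        # predict under a random-walk model, then correct with measurement z
        self.p += self.q
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x
```

Each call to `update` returns the current load estimate, which a power controller could then compare against the predefined channel-load range.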
Solar Load Voltage Tracking for Water Pumping: An Algorithm
NASA Astrophysics Data System (ADS)
Kappali, M.; Udayakumar, R. Y.
2014-07-01
Maximum power must be harnessed from a solar photovoltaic (PV) panel to minimize the effective cost of solar energy. This is accomplished by maximum power point tracking (MPPT). There are different methods to realise MPPT. This paper proposes a simple algorithm to implement the MPPT-lv method in a closed-loop environment for a centrifugal pump driven by a brushed PMDC motor. Simulation testing of the algorithm is done, and the results are found to be encouraging and supportive of the proposed MPPT-lv method.
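One common way to realise MPPT is hill climbing (perturb and observe). The sketch below illustrates that general idea only; it is not the paper's MPPT-lv algorithm, and the toy panel curve, starting voltage, and step size are all invented.

```python
def perturb_and_observe(power_at, v0=12.0, dv=0.2, steps=200):
    """Hill-climbing (perturb-and-observe) tracker: nudge the operating
    voltage and keep moving in the direction that increases PV power."""
    v, direction = v0, 1.0
    p_prev = power_at(v)
    for _ in range(steps):
        v += direction * dv
        p = power_at(v)
        if p < p_prev:
            direction = -direction   # overshot the peak; reverse direction
        p_prev = p
    return v
```

On a single-peaked power-voltage curve the operating point climbs to the maximum power point and then oscillates within one step of it.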
PLUM: Parallel Load Balancing for Unstructured Adaptive Meshes. Degree awarded by Colorado Univ.
NASA Technical Reports Server (NTRS)
Oliker, Leonid
1998-01-01
Dynamic mesh adaption on unstructured grids is a powerful tool for computing large-scale problems that require grid modifications to efficiently resolve solution features. By locally refining and coarsening the mesh to capture physical phenomena of interest, such procedures make standard computational methods more cost effective. Unfortunately, an efficient parallel implementation of these adaptive methods is rather difficult to achieve, primarily due to the load imbalance created by the dynamically-changing nonuniform grid. This requires significant communication at runtime, leading to idle processors and adversely affecting the total execution time. Nonetheless, it is generally thought that unstructured adaptive-grid techniques will constitute a significant fraction of future high-performance supercomputing. Various dynamic load balancing methods have been reported to date; however, most of them either lack a global view of loads across processors or do not apply their techniques to realistic large-scale applications.
Load identification approach based on basis pursuit denoising algorithm
NASA Astrophysics Data System (ADS)
Ginsberg, D.; Ruby, M.; Fritzen, C. P.
2015-07-01
The information about external loads is of great interest in many fields of structural analysis, such as structural health monitoring (SHM) systems or the assessment of damage after extreme events. However, in most cases it is not possible to measure the external forces directly, so they need to be reconstructed. Load reconstruction refers to the problem of estimating an input to a dynamic system when the system output and the impulse response functions are known. Generally, this leads to a so-called ill-posed inverse problem, which involves solving an underdetermined linear system of equations. For most practical applications it can be assumed that the applied loads are not arbitrarily distributed in time and space; at least some specific characteristics of the external excitation are known a priori. In this contribution, this knowledge is used to develop a more suitable force reconstruction method, which allows identifying the time history and the force location simultaneously while employing significantly fewer sensors than other reconstruction approaches. The properties of the external force are used to transform the ill-posed problem into a sparse recovery task. The sparse solution is acquired by solving a minimization problem known as basis pursuit denoising (BPDN). The possibility of reconstructing loads based on noisy structural measurement signals is demonstrated by considering two frequently occurring loading conditions: harmonic excitation and impact events, separately and combined. First a simulation study of a simple plate structure is carried out, and thereafter an experimental investigation of a real beam is performed.
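The BPDN objective, minimizing ||Ax - b||^2/2 + lambda*||x||_1, can be attacked with simple proximal iterations. The sketch below is a textbook ISTA solver on a toy dense system, not the paper's solver; the step size and regularization weight are chosen only for illustration.

```python
def ista(A, b, lam=0.1, step=0.1, iters=500):
    """Iterative shrinkage-thresholding (ISTA): a basic solver for the
    basis pursuit denoising objective min ||Ax-b||^2/2 + lam*||x||_1."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    soft = lambda v, t: max(abs(v) - t, 0.0) * (1.0 if v > 0 else -1.0)
    for _ in range(iters):
        # residual r = Ax - b, gradient g = A^T r, then shrink
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [soft(x[j] - step * g[j], step * lam) for j in range(n)]
    return x
```

The soft-thresholding step is what drives small coefficients exactly to zero, yielding the sparse solution that encodes the force location.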
A massively parallel adaptive finite element method with dynamic load balancing
Devine, K.D.; Flaherty, J.E.; Wheat, S.R.; Maccabe, A.B.
1993-12-31
The authors construct massively parallel adaptive finite element methods for the solution of hyperbolic conservation laws. Spatial discretization is performed by a discontinuous Galerkin finite element method using a basis of piecewise Legendre polynomials. Temporal discretization utilizes a Runge-Kutta method. Dissipative fluxes and projection limiting prevent oscillations near solution discontinuities. The resulting method is of high order and may be parallelized efficiently on MIMD computers. They demonstrate parallel efficiency through computations on a 1024-processor nCUBE/2 hypercube. They present results using adaptive p-refinement to reduce the computational cost of the method, and tiling, a dynamic, element-based data migration system that maintains global load balance of the adaptive method by overlapping neighborhoods of processors that each perform local balancing.
Latency Hiding in Dynamic Partitioning and Load Balancing of Grid Computing Applications
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak
2001-01-01
The Information Power Grid (IPG) concept developed by NASA is aimed at providing a metacomputing platform for large-scale distributed computations, hiding the intricacies of a highly heterogeneous environment while maintaining adequate security. In this paper, we propose a latency-tolerant partitioning scheme that dynamically balances processor workloads on the IPG, and minimizes data movement and runtime communication. By simulating an unsteady adaptive mesh application on a wide area network, we study the performance of our load balancer under the Globus environment. The number of IPG nodes, the number of processors per node, and the interconnect speeds are parameterized to derive conditions under which the IPG would be suitable for parallel distributed processing of such applications. Experimental results demonstrate that effective solutions are achieved when the IPG nodes are connected by a high-speed asynchronous interconnection network.
A Novel Control algorithm based DSTATCOM for Load Compensation
NASA Astrophysics Data System (ADS)
R, Sreejith; Pindoriya, Naran M.; Srinivasan, Babji
2015-11-01
The Distribution Static Compensator (DSTATCOM) has been used as a custom power device for voltage regulation and load compensation in the distribution system. Controlling the switching angle has been the biggest challenge in DSTATCOM. To date, the Proportional Integral (PI) controller is widely used in practice for load compensation due to its simplicity. However, the PI controller fails to perform satisfactorily under parameter variations, nonlinearities, etc., making it very challenging to arrive at the best/optimal tuning values for different operating conditions. Fuzzy logic and neural network based controllers require extensive training and perform well only under limited perturbations. Model predictive control (MPC) is a powerful control strategy, used in the petrochemical industry, whose application has spread to different fields. MPC can handle various constraints, incorporate system nonlinearities, and utilize multivariate/univariate model information to provide an optimal control strategy. Though it finds extensive application in chemical engineering, its utility in power systems is limited by the high computational effort, which is incompatible with the high sampling frequency of these systems. In this paper, we propose a DSTATCOM based on Finite Control Set Model Predictive Control (FCS-MPC), with Instantaneous Symmetrical Component Theory (ISCT) based reference current extraction, for load compensation and Unity Power Factor (UPF) action in current control mode. The proposed controller's performance is evaluated for a 3-phase, 3-wire, 415 V, 50 Hz distribution system in MATLAB Simulink, which demonstrates its applicability in real-life situations.
Agent based modeling of "crowdinforming" as a means of load balancing at emergency departments.
Neighbour, Ryan; Oppenheimer, Luis; Mukhi, Shamir N; Friesen, Marcia R; McLeod, Robert D
2010-01-01
This work extends ongoing development of a framework for modeling the spread of contact-transmission infectious diseases. The framework is built upon Agent Based Modeling (ABM), with emphasis on urban scale modelling integrated with institutional models of hospital emergency departments. The method presented here includes ABM modeling an outbreak of influenza-like illness (ILI) with concomitant surges at hospital emergency departments, and illustrates the preliminary modeling of 'crowdinforming' as an intervention. 'Crowdinforming', a component of 'crowdsourcing', is characterized as the dissemination of collected and processed information back to the 'crowd' via public access. The objective of the simulation is to allow for effective policy evaluation to better inform the public of expected wait times as part of their decision making process in attending an emergency department or clinic. In effect, this is a means of providing additional decision support garnered from a simulation, prior to real world implementation. The conjecture is that more optimal service delivery can be achieved under balanced patient loads, compared to situations where some emergency departments are overextended while others are underutilized. Load balancing optimization is a common notion in many operations, and the simulation illustrates that 'crowdinforming' is a potential tool when used as a process control parameter to balance the load at emergency departments as well as serving as an effective means to direct patients during an ILI outbreak with temporary clinics deployed. The information provided in the 'crowdinforming' model is readily available in a local context, although it requires thoughtful consideration in its interpretation. The extension to a wider dissemination of information via a web service is readily achievable and presents no technical obstacles, although political obstacles may be present. The 'crowdinforming' simulation is not limited to arrivals of patients at
A new evolutionary algorithm with structure mutation for the maximum balanced biclique problem.
Yuan, Bo; Li, Bin; Chen, Huanhuan; Yao, Xin
2015-05-01
The maximum balanced biclique problem (MBBP), an NP-hard combinatorial optimization problem, has been attracting more attention in recent years. Existing node-deletion-based algorithms usually fail to find high-quality solutions due to their easy stagnation in local optima, especially when the scale of the problem grows large. In this paper, a new algorithm for the MBBP, evolutionary algorithm with structure mutation (EA/SM), is proposed. In the EA/SM framework, local search complemented with a repair-assisted restart process is adopted. A new mutation operator, SM, is proposed to enhance the exploration during the local search process. The SM can change the structure of solutions dynamically while keeping their size (fitness) and the feasibility unchanged. It implements a kind of large mutation in the structure space of MBBP to help the algorithm escape from local optima. An MBBP-specific local search operator is designed to improve the quality of solutions efficiently; besides, a new repair-assisted restart process is introduced, in which the Marchiori's heuristic repair is modified to repair every new solution reinitialized by an estimation of distribution algorithm (EDA)-like process. The proposed algorithm is evaluated on a large set of benchmark graphs with various scales and densities. Experimental results show that: 1) EA/SM produces significantly better results than the state-of-the-art heuristic algorithms; 2) it also outperforms a repair-based EDA and a repair-based genetic algorithm on all benchmark graphs; and 3) the advantages of EA/SM are mainly due to the introduction of the new SM operator and the new repair-assisted restart process. PMID:25137737
Dynamic Harmony Search with Polynomial Mutation Algorithm for Valve-Point Economic Load Dispatch
Karthikeyan, M.; Sree Ranga Raja, T.
2015-01-01
The economic load dispatch (ELD) problem is an important issue in the operation and control of modern power systems. The ELD problem is complex and nonlinear, with equality and inequality constraints, which makes it hard to solve efficiently. This paper presents a new modification of the harmony search (HS) algorithm, named the dynamic harmony search with polynomial mutation (DHSPM) algorithm, to solve the ELD problem. In the DHSPM algorithm the key parameters of the HS algorithm, the harmony memory considering rate (HMCR) and the pitch adjusting rate (PAR), are changed dynamically, so there is no need to predefine these parameters. Additionally, polynomial mutation is inserted in the updating step of the HS algorithm to favor exploration and exploitation of the search space. The DHSPM algorithm is tested with three power system cases consisting of 3, 13, and 40 thermal units. The computational results show that the DHSPM algorithm is more effective in finding better solutions than other computational intelligence based methods. PMID:26491710
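A basic harmony search loop with dynamically varied HMCR and PAR can be sketched as follows. This is a generic illustration of the idea only: the parameter schedules and step sizes are invented, polynomial mutation is omitted, and the toy cost function stands in for the valve-point ELD objective.

```python
import random

def harmony_search(cost, bounds, hms=10, iters=2000, seed=1):
    """Harmony search with time-varying HMCR and PAR (a simplified version
    of the dynamic-parameter idea; not the DHSPM algorithm itself)."""
    rng = random.Random(seed)
    mem = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    for t in range(iters):
        hmcr = 0.7 + 0.25 * t / iters      # memory consideration rate grows
        par = 0.5 * (1.0 - t / iters)      # pitch adjustment rate shrinks
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:
                v = mem[rng.randrange(hms)][d]        # draw from memory
                if rng.random() < par:
                    v += rng.uniform(-1, 1) * 0.05 * (hi - lo)  # pitch adjust
            else:
                v = rng.uniform(lo, hi)               # random consideration
            new.append(min(max(v, lo), hi))
        worst = max(range(hms), key=lambda i: cost(mem[i]))
        if cost(new) < cost(mem[worst]):
            mem[worst] = new                          # replace worst harmony
    return min(mem, key=cost)
```

On a smooth test function the best harmony converges toward the minimum as PAR shrinks and the memory concentrates around good solutions.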
Smart sensing to drive real-time loads scheduling algorithm in a domotic architecture
NASA Astrophysics Data System (ADS)
Santamaria, Amilcare Francesco; Raimondo, Pierfrancesco; De Rango, Floriano; Vaccaro, Andrea
2014-05-01
Nowadays, reducing power consumption, with its associated costs and environmental sustainability problems, is a very important concern. Automatic load control based on power consumption and use cycle represents the optimal solution to cost containment. The purpose of these systems is to modulate the electricity demand, avoiding uncoordinated operation of the loads, by using intelligent techniques to manage them based on real-time scheduling algorithms. The goal is to coordinate a set of electrical loads to optimize energy costs and consumption based on the stipulated contract terms. The proposed algorithm uses two new main notions: priority-driven loads and smart-scheduling loads. Priority-driven loads can be turned off (put in standby) according to a priority policy established by the user if the consumption exceeds a defined threshold; smart-scheduling loads, on the contrary, are scheduled so as not to interrupt their Life Cycle (LC), safeguarding the devices' functions and allowing the user to freely use the devices without the risk of exceeding the power threshold. The algorithm, using these two notions and taking into account user requirements, manages load activation and deactivation, allowing the completion of their operation cycles without exceeding the consumption threshold, in an off-peak time range according to the electricity fare. This logic is inspired by industrial lean manufacturing, whose focus is to minimize any kind of power waste while optimizing the available resources.
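The priority-driven part of such a scheduler can be sketched as below. This is a toy illustration: the data layout, appliance names, and kW figures are invented, and the real algorithm additionally reschedules smart loads over time rather than merely exempting them.

```python
def enforce_threshold(loads, threshold):
    """Put priority-driven loads in standby (lowest priority number first)
    until total power drops under the contract threshold; smart-scheduling
    loads (priority None) are never interrupted mid-cycle."""
    total = sum(power for power, _ in loads.values())
    standby = []
    sheddable = sorted((name for name, (_, pr) in loads.items() if pr is not None),
                       key=lambda name: loads[name][1])
    for name in sheddable:
        if total <= threshold:
            break
        total -= loads[name][0]
        standby.append(name)
    return standby, total
```

For example, with an oven (2.0 kW, priority 1), a heater (1.5 kW, priority 2), and a fridge (0.3 kW, smart-scheduled), a 2.0 kW threshold forces only the oven into standby.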
An analytical algorithm for 3D magnetic field mapping of a watt balance magnet
NASA Astrophysics Data System (ADS)
Fu, Zhuang; Zhang, Zhonghua; Li, Zhengkun; Zhao, Wei; Han, Bing; Lu, Yunfeng; Li, Shisong
2016-04-01
A yoke-based permanent magnet, which has been employed in many watt balances at national metrology institutes, is supposed to generate a strong and uniform magnetic field in the radial direction in an air gap. However, in reality the fringe effect due to the finite height of the air gap introduces an undesired vertical magnetic component into the air gap, which should either be measured or modeled for some optimizations of the watt balance. A recent publication, i.e. Li et al (2015 Metrologia 52 445), presented a full field mapping method, which in theory supplies useful information for profile characterization and misalignment analysis. This article supplements Li et al (2015 Metrologia 52 445) by developing a different analytical algorithm to represent the 3D magnetic field of a watt balance magnet based on only one measurement of the radial magnetic flux density along the vertical direction, B_r(z). The new algorithm is based on the electromagnetic nature of the magnet and achieves a much better accuracy.
Enhanced exchange algorithm without detailed balance condition for replica exchange method
NASA Astrophysics Data System (ADS)
Kondo, Hiroko X.; Taiji, Makoto
2013-06-01
The replica exchange method (REM) is a powerful tool for the conformational sampling of biomolecules. In this study, we propose an enhanced exchange algorithm for REM not meeting the detailed balance condition (DBC), but satisfying the balance condition in all considered exchanges between two replicas. Breaking the DBC can minimize the rejection rate and make an exchange process rejection-free as the number of replicas increases. To enhance the efficiency of REM, all possible pairs—not only the nearest neighbor—were considered in the exchange process. The test simulations of the alanine dipeptide confirmed the correctness of our method. The average traveling distance of each replica in the temperature distribution was also increased in proportion to an increase in the exchange rate. Furthermore, we applied our algorithm to the conformational sampling of the 10-residue miniprotein, chignolin, with an implicit solvent model. The results showed a faster convergence in the calculation of its free energy landscape, compared to that achieved using the normal exchange method of adjacent pairs. This algorithm can also be applied to the conventional near neighbor method and is expected to reduce the required number of replicas.
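For context, the conventional replica-exchange acceptance rule that does satisfy the detailed balance condition, and which the proposed rejection-minimizing scheme improves upon, is the standard Metropolis probability for swapping two replicas (a textbook formula, not the paper's new rule):

```python
import math

def rem_exchange_prob(beta_i, energy_i, beta_j, energy_j):
    """Metropolis acceptance probability for exchanging replicas i and j
    in temperature replica exchange: min(1, exp((b_i - b_j)(E_i - E_j))).
    This is the detailed-balance baseline the enhanced algorithm replaces.
    """
    delta = (beta_i - beta_j) * (energy_i - energy_j)
    return min(1.0, math.exp(delta))
```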
NASA Astrophysics Data System (ADS)
Ghani Abro, Abdul; Mohamad-Saleh, Junita
2014-10-01
The prime motive of economic load dispatch (ELD) is to optimize the production cost of electrical power generation through appropriate division of load demand among online generating units. Bio-inspired optimization algorithms have outperformed classical techniques for optimizing the production cost. The probability-selection artificial bee colony (PS-ABC) algorithm is a recently proposed variant of the ABC optimization algorithm. PS-ABC generates optimal solutions using three different mutation equations simultaneously. The results show improved performance of PS-ABC over the ABC algorithm. Nevertheless, all the mutation equations of PS-ABC are excessively self-reinforced and, hence, PS-ABC is prone to premature convergence. Therefore, this research work replaces the mutation equations and improves the scout-bee stage of PS-ABC to enhance the algorithm's performance. The proposed algorithm has been compared with many ABC variants and numerous other optimization algorithms on benchmark functions and ELD test cases. The adapted ELD test cases comprise transmission losses, multiple-fuel effects, valve-point effects and toxic gas emission constraints. The results reveal that, among the compared algorithms, the proposed algorithm has the best capability to yield the optimal solution for the problem.
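For reference, the canonical ABC mutation that PS-ABC's three equations are variants of is v_j = x_j + phi (x_j - x_k,j), with phi drawn uniformly from [-1, 1]. A minimal sketch (the function name and calling convention are assumptions):

```python
def abc_mutate(food, neighbor, j, phi):
    """Canonical ABC mutation: perturb dimension j of a food source toward
    or away from a randomly chosen neighbor, scaled by phi in [-1, 1]."""
    v = list(food)
    v[j] = food[j] + phi * (food[j] - neighbor[j])
    return v
```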
A load balancing bufferless deflection router for network-on-chip
NASA Astrophysics Data System (ADS)
Xiaofeng, Zhou; Zhangming, Zhu; Duan, Zhou
2016-07-01
The bufferless router has emerged as an interesting option for cost-efficient network-on-chip (NoC) design. However, the bufferless router works well only under low network load, because deflection occurs more easily as the injection rate increases. In this paper, we propose a load balancing bufferless deflection router (LBBDR) for NoC that relieves the effect of deflection in bufferless NoCs. The proposed LBBDR employs a balance toggle identifier in the source router to control the initial routing direction, X or Y, for a flit in the network. Based on this mechanism, the flit is subsequently routed according to XY or YX routing. When two or more flits contend for the same desired output port, a priority policy called nearer-first is used to resolve output port allocation contention. Simulation results show that the proposed LBBDR improves routing performance over the reported bufferless routing in flit deflection rate, average packet latency and throughput by up to 13%, 10% and 6%, respectively. Its layout area and power consumption are 12% and 7% less, respectively, than those of the reported schemes. Project supported by the National Natural Science Foundation of China (Nos. 61474087, 61322405, 61376039).
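The balance toggle idea, alternating each injected flit between XY and YX dimension-ordered routing on a mesh, can be sketched as follows. This shows only the dimension-order logic; the LBBDR arbitration and deflection handling are not reproduced, and all names are assumptions.

```python
def _step(a, b):
    """Unit step from a toward b: +1, -1, or 0."""
    return (a < b) - (a > b)

def next_hop(cur, dst, use_xy):
    """One hop of dimension-ordered mesh routing: XY resolves the X
    dimension first, YX the Y dimension first."""
    cx, cy = cur
    dx, dy = dst
    if use_xy:
        if cx != dx:
            return (cx + _step(cx, dx), cy)
        return (cx, cy + _step(cy, dy))
    if cy != dy:
        return (cx, cy + _step(cy, dy))
    return (cx + _step(cx, dx), cy)

class SourceToggle:
    """Balance toggle identifier: alternates the initial XY/YX choice per
    injected flit so traffic is spread over both dimension orders."""
    def __init__(self):
        self.use_xy = True
    def inject(self):
        choice = self.use_xy
        self.use_xy = not self.use_xy
        return choice
```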
Senay, Gabriel B.
2008-01-01
The main objective of this study is to present an improved modeling technique called Vegetation ET (VegET) that integrates commonly used water balance algorithms with a remotely sensed Land Surface Phenology (LSP) parameter to conduct operational vegetation water balance modeling of rainfed systems at the LSP's spatial scale using readily available global data sets. Evaluation of the VegET model was conducted using flux tower data and a two-year simulation for the conterminous US. The VegET model is capable of estimating the actual evapotranspiration (ETa) of rainfed crops and other vegetation types at the spatial resolution of the LSP on a daily basis, replacing the need to estimate crop- and region-specific crop coefficients.
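A minimal one-day bucket water balance in the spirit of VegET, where ETa is potential ET scaled by an LSP-derived coefficient and limited by available soil water, might look like this (a generic sketch; the actual VegET formulation includes additional terms such as runoff and soil-stress functions):

```python
def daily_eta(soil_water, capacity, rain, pet, kcp):
    """One day of a simple bucket water balance.

    soil_water, capacity, rain: mm; pet: potential ET (mm/day);
    kcp: LSP-derived coefficient replacing a crop-specific coefficient.
    Returns (eta, updated soil water).
    """
    demand = pet * kcp                      # LSP-scaled atmospheric demand
    eta = min(demand, soil_water + rain)    # limited by available water
    soil_water = min(capacity, soil_water + rain - eta)
    return eta, soil_water
```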
van Loosdregt, Inge A E W; Argento, Giulia; Driessen-Mol, Anita; Oomens, Cees W J; Baaijens, Frank P T
2014-06-27
Preclinical studies of tissue-engineered heart valves (TEHVs) showed retraction of the heart valve leaflets as the major mechanism of functional failure. This retraction is caused by both passive and active cell stress and passive matrix stress. Cell-mediated retraction induces leaflet shortening that may be counteracted by the hemodynamic loading of the leaflets during diastole. To gain insight into this stress balance, the amount and duration of stress generation in engineered heart valve tissue and the stress imposed by physiological hemodynamic loading are quantified via an experimental and a computational approach, respectively. Stress generation by cells was measured using a previously described in vitro model system mimicking the culture process of TEHVs. The stress imposed on a valve leaflet by the blood pressure during diastole was determined using finite element modeling. Results show that for both pulmonary and systemic pressure, the stress imposed on the TEHV leaflets is comparable to the stress generated in the leaflets. As the stresses are of similar magnitude, it is likely that the imposed stress cannot counteract the generated stress, particularly since hemodynamic loading is imposed only during diastole. This study provides a rational explanation for the retraction found in preclinical studies of TEHVs and represents an important step towards understanding the retraction process in TEHVs through a combined experimental and computational approach. PMID:24268314
Berg, Jonathan Charles; Halse, Chris; Crowther, Ashley; Barlas, Thanasis; Wilson, David Gerald; Berg, Dale E.; Resor, Brian Ray
2010-06-01
Prior work on active aerodynamic load control (AALC) of wind turbine blades has demonstrated that appropriate use of this technology has the potential to yield significant reductions in blade loads, leading to a decrease in the cost of wind energy. While the general concept of AALC is usually discussed in the context of multiple sensors and active control devices (such as flaps) distributed over the length of the blade, most work to date has been limited to a single control device per blade with very basic Proportional-Derivative controllers, due to limitations in the aeroservoelastic codes used to perform turbine simulations. This work utilizes a new aeroservoelastic code developed at Delft University of Technology to model the NREL/Upwind 5 MW wind turbine and investigate the relative advantage of multiple-device AALC. System identification techniques are used to identify the frequencies and shapes of turbine vibration modes, and these are used with modern control techniques to develop both Single-Input Single-Output (SISO) and Multiple-Input Multiple-Output (MIMO) LQR flap controllers. Comparison of simulation results with these controllers shows that the MIMO controller does yield some improvement over the SISO controller in fatigue load reduction, but additional improvement is possible with further refinement. In addition, a preliminary investigation shows that AALC has the potential to reduce off-axis gearbox loads, leading to reduced gearbox bearing fatigue damage and improved lifetimes.
A New Load Residual Threshold Definition for the Evaluation of Wind Tunnel Strain-Gage Balance Data
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Volden, T.
2016-01-01
A new definition of a threshold for the detection of load residual outliers in wind tunnel strain-gage balance data was developed. The new threshold is defined as the product of the inverse of the absolute value of the primary gage sensitivity and an empirical limit of the electrical outputs of a strain-gage. The empirical limit of the outputs is 2.5 microV/V for balance calibration or check load residuals. A reduced limit of 0.5 microV/V is recommended for the evaluation of differences between repeat load points because, by design, the calculation of these differences removes errors in the residuals that are associated with the regression analysis of the data itself. The definition of the new threshold and different methods for the determination of the primary gage sensitivity are discussed. In addition, calibration data of a six-component force balance and a five-component semi-span balance are used to illustrate the application of the proposed new threshold definition to different types of strain-gage balances. During the discussion of the force balance example it is also explained how the estimated maximum expected output of a balance gage can be used to better understand the results of applying the new threshold definition.
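The threshold definition itself, the product of the inverse of the absolute primary gage sensitivity and the empirical output limit, is a one-line computation (units: sensitivity in microV/V per unit load, limit in microV/V, so the threshold is in load units; the example values are illustrative, not from the report):

```python
def residual_threshold(primary_gage_sensitivity, output_limit_uVV=2.5):
    """Load residual outlier threshold = output limit / |sensitivity|.
    Use output_limit_uVV=2.5 for calibration or check load residuals and
    0.5 for differences between repeat load points."""
    return output_limit_uVV / abs(primary_gage_sensitivity)
```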
PARALLEL IMPLEMENTATION OF THE TOPAZ OPACITY CODE: ISSUES IN LOAD-BALANCING
Sonnad, V; Iglesias, C A
2008-05-12
The TOPAZ opacity code explicitly includes configuration term structure in the calculation of bound-bound radiative transitions. This approach involves myriad spectral lines and requires the large computational capabilities of parallel processing computers. It is important, however, to make use of these resources efficiently. For example, an increase in the number of processors should yield a comparable reduction in computational time. This proportional 'speedup' indicates that very large problems can be addressed with massively parallel computers. Opacity codes can readily take advantage of parallel architecture since many intermediate calculations are independent. On the other hand, since the different tasks entail significantly disparate computational effort, load-balancing issues emerge so that parallel efficiency does not occur naturally. Several schemes to distribute the labor among processors are discussed.
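One generic static scheme of the kind discussed for distributing disparate-cost tasks among processors is the longest-processing-time (LPT) greedy rule, sketched below (an illustrative baseline, not necessarily the scheme TOPAZ adopts):

```python
import heapq

def lpt_assign(task_costs, n_procs):
    """Longest-processing-time greedy load balancing: assign each task,
    heaviest first, to the currently least-loaded processor.
    Returns a list of task-cost lists, one per processor."""
    heap = [(0.0, p) for p in range(n_procs)]   # (current load, proc id)
    heapq.heapify(heap)
    assignment = [[] for _ in range(n_procs)]
    for cost in sorted(task_costs, reverse=True):
        load, p = heapq.heappop(heap)
        assignment[p].append(cost)
        heapq.heappush(heap, (load + cost, p))
    return assignment
```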
Meta-heuristic algorithm to solve two-sided assembly line balancing problems
NASA Astrophysics Data System (ADS)
Wirawan, A. D.; Maruf, A.
2016-02-01
A two-sided assembly line is a set of sequential workstations where task operations can be performed on two sides of the line. This type of line is commonly used for the assembly of large-sized products: cars, buses, and trucks. This paper proposes a decoding algorithm with Teaching-Learning Based Optimization (TLBO), a recently developed nature-inspired search method, to solve the two-sided assembly line balancing problem (TALBP). The algorithm aims to minimize the number of mated workstations for a given cycle time without violating the synchronization constraints. The correlation between the input parameters and the emergence point of the objective function value is tested using scenarios generated by design of experiments. A two-sided assembly line operated by a multinational manufacturing company in Indonesia is considered as the object of this paper. The results of the proposed algorithm show a reduction in workstations and indicate a negative correlation between the emergence point of the objective function value and the population size used.
Li, Bai; Gong, Li-gang; Yang, Wen-lun
2014-01-01
Unmanned combat aerial vehicles (UCAVs) have been of great interest to military organizations throughout the world due to their outstanding capabilities to operate in dangerous or hazardous environments. UCAV path planning aims to obtain an optimal flight route with the threats and constraints in the combat field well considered. In this work, a novel artificial bee colony (ABC) algorithm improved by a balance-evolution strategy (BES) is applied in this optimization scheme. In this new algorithm, convergence information during the iteration is fully utilized to manipulate the exploration/exploitation accuracy and to pursue a balance between local exploitation and global exploration capabilities. Simulation results confirm that BE-ABC algorithm is more competent for the UCAV path planning scheme than the conventional ABC algorithm and two other state-of-the-art modified ABC algorithms. PMID:24790555
Drill string splined resilient tubular telescopic joint for balanced load drilling of deep holes
Garrett, W.R.
1981-08-04
A drill string splined resilient tubular telescopic joint for balanced load deep well drilling comprises a double acting damper having a very low spring rate upon both extension and contraction from the zero deflection condition. Preferably the spring means itself is a double acting compression spring means wherein the same spring means is compressed whether the joint is extended or contracted. The damper has a like low spring rate over a considerable range of deflection, both upon extension and contraction of the joint, but a gradually then rapidly increased spring rate upon approaching the travel limits in each direction. Stacks of spring rings are employed for the spring means, the rings being either shaped elastomer-metal sandwiches or, preferably, roller belleville springs. The spline and spring means are disposed in an annular chamber formed by mandrel and barrel members constituting the telescopic joint. The spring rings make only such line contact with one of the telescoping members as is required for guidance therefrom, and no contact with the other member. The chamber containing the spring means, and also containing the spline means, is filled with lubricant, the chamber being sealed with a pressure seal at its lower end and an inverted floating seal at its upper end. Magnetic and electrical means are provided to check for the presence and condition of the lubricant. To increase load capacity the spring means is made of a number of components acting in parallel.
NASA Astrophysics Data System (ADS)
Alfredsen, K. T.; Killingtveit, A.
2011-12-01
About 99% of the total energy production in Norway comes from hydropower, and the total production of about 120 TWh makes Norway Europe's largest hydropower producer. Most hydropower systems in Norway are high-head plants with mountain storage reservoirs and tunnels transporting water from the reservoirs to the power plants. In total, Norwegian reservoirs contribute around 50% of the total energy storage capacity in Europe. Current strategies to reduce greenhouse gas emissions from energy production involve an increased focus on renewable energy sources, e.g. the European Union's 20-20-20 goal, under which renewable sources should supply 20% of total energy production by 2020. To meet this goal, new renewable energy installations must be developed on a large scale in the coming years, and wind power is the main focus for new developments. Hydropower can contribute directly to increased renewable energy through new development or extensions to existing systems, but perhaps even more important is the potential to use hydropower systems with storage for load balancing in a system with an increased share of non-storable renewables. Even if new storage technologies are under development, hydro storage is the only technology available on a large scale and the most economically feasible alternative. In this respect the Norwegian system has a high potential, both through direct use of existing reservoirs and through increased development of pumped-storage plants that use surplus wind energy to pump water and then generate during periods with low wind input. Through cables to Europe, Norwegian hydropower could also provide balancing power for the North European market. Increased peaking and more variable operation of the current hydropower system will present a number of technical and environmental challenges that need to be identified and mitigated. A more variable production will lead to fluctuating flow in receiving rivers and reservoirs, and it will also
Physics Based Model for Cryogenic Chilldown and Loading. Part I: Algorithm
NASA Technical Reports Server (NTRS)
Luchinsky, Dmitry G.; Smelyanskiy, Vadim N.; Brown, Barbara
2014-01-01
We report progress in the development of a physics-based model for cryogenic chilldown and loading. The chilldown and loading is modeled as fully separated non-equilibrium two-phase flow of cryogenic fluid thermally coupled to the pipe walls. The solution closely follows the nearly-implicit and semi-implicit algorithms developed by Idaho National Laboratory for autonomous control of thermal-hydraulic systems. Special attention is paid to the treatment of instabilities. The model is applied to the analysis of chilldown in the rapid loading system developed at NASA Kennedy Space Center. The nontrivial characteristic feature of the analyzed chilldown regime is its active control by dump valves. The numerical predictions are in reasonable agreement with the experimental time traces. The obtained results pave the way to the development of autonomous loading operations on the ground and in space.
Multicycle Optimization of Advanced Gas-Cooled Reactor Loading Patterns Using Genetic Algorithms
Ziver, A. Kemal; Carter, Jonathan N.; Pain, Christopher C.; Oliveira, Cassiano R.E. de; Goddard, Antony J. H.; Overton, Richard S.
2003-02-15
A genetic algorithm (GA)-based optimizer (GAOPT) has been developed for in-core fuel management of advanced gas-cooled reactors (AGRs) at HINKLEY B and HARTLEPOOL, which employ on-load and off-load refueling, respectively. The optimizer has been linked to the reactor analysis code PANTHER for the automated evaluation of loading patterns in a two-dimensional geometry, which is collapsed from the three-dimensional reactor model. GAOPT uses a directed stochastic (Monte Carlo) algorithm to generate initial population members, within predetermined constraints, for use in GAs, which apply the standard genetic operators: selection by tournament, crossover, and mutation. The GAOPT is able to generate and optimize loading patterns for successive reactor cycles (multicycle) within acceptable CPU times even on single-processor systems. The algorithm allows radial shuffling of fuel assemblies in a multicycle refueling optimization, which is constructed to aid long-term core management planning decisions. This paper presents the application of the GA-based optimization to two AGR stations, which apply different in-core management operational rules. Results obtained from the testing of GAOPT are discussed.
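The standard genetic operators named in the abstract, tournament selection and mutation on a permutation-encoded loading pattern, can be sketched as follows (generic operators; the actual GAOPT encoding and constraints are not reproduced):

```python
import random

def tournament(population, fitness, k=2, rng=random):
    """Tournament selection: sample k candidates and keep the fittest
    (here, the one with the lowest cost)."""
    return min(rng.sample(population, k), key=fitness)

def swap_mutate(pattern, rng=random):
    """Mutation for a loading pattern encoded as a permutation of fuel
    assembly positions: swap two randomly chosen positions (a radial
    shuffle of two assemblies)."""
    p = list(pattern)
    i, j = rng.sample(range(len(p)), 2)
    p[i], p[j] = p[j], p[i]
    return p
```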
Soft tissue balancing in varus total knee arthroplasty: an algorithmic approach.
Verdonk, Peter C M; Pernin, Jerome; Pinaroli, Alban; Ait Si Selmi, Tarik; Neyret, Philippe
2009-06-01
We present an algorithmic release approach to the varus knee, including a novel pie crust release technique of the superficial MCL, in 359 total knee arthroplasty patients and report the clinical and radiological outcome. Medio-lateral stability was evaluated as normal in 97% of group 0 (deep MCL), 95% of group 1 (pie crust superficial MCL) and 83% of group 2 (distal superficial MCL). The mean preoperative hip-knee angle was 174.0, 172.1, and 169.5 degrees and was corrected postoperatively to 179.1, 179.2, and 177.6 degrees for groups 0, 1, and 2, respectively. A satisfactory correction in the coronal plane, falling within the 180 +/- 3 degrees interval, was achieved in 82.9% of all-comers. An algorithmic release approach can be beneficial for soft tissue balancing. In all patients, the deep medial collateral ligament should be released and osteophytes removed. The novel pie crust technique of the superficial MCL is safe, efficient and reliable, provided a medial release of 6-8 mm or less is required. Release of the superficial MCL on the distal tibia is advocated in severe varus knees. Preoperative coronal alignment is an important predictor for the release technique, but should be combined with other parameters such as reducibility of the deformity and the obtained gap asymmetry. PMID:19290507
Simultaneous optimization of the cavity heat load and trip rates in linacs using a genetic algorithm
NASA Astrophysics Data System (ADS)
Terzić, Balša; Hofler, Alicia S.; Reeves, Cody J.; Khan, Sabbir A.; Krafft, Geoffrey A.; Benesch, Jay; Freyberger, Arne; Ranjan, Desh
2014-10-01
In this paper, a genetic algorithm-based optimization is used to simultaneously minimize two competing objectives guiding the operation of the Jefferson Lab's Continuous Electron Beam Accelerator Facility linacs: cavity heat load and radio frequency cavity trip rates. The results represent a significant improvement to the standard linac energy management tool and thereby could lead to a more efficient Continuous Electron Beam Accelerator Facility configuration. This study also serves as a proof of principle of how a genetic algorithm can be used for optimizing other linac-based machines.
Fawley, William M.
2002-03-25
We discuss the underlying reasoning behind and the details of the numerical algorithm used in the GINGER free-electron laser(FEL) simulation code to load the initial shot noise microbunching on the electron beam. In particular, we point out that there are some additional subtleties which must be followed for multi-dimensional codes which are not necessary for one-dimensional formulations. Moreover, requiring that the higher harmonics of the microbunching also be properly initialized with the correct statistics leads to additional complexities. We present some numerical results including the predicted incoherent, spontaneous emission as tests of the shot noise algorithm's correctness.
NASA Astrophysics Data System (ADS)
Fawley, William M.
2002-07-01
We discuss the underlying reasoning behind and the details of the numerical algorithm used in the GINGER free-electron laser simulation code to load the initial shot noise microbunching on the electron beam. In particular, we point out that there are some additional subtleties which must be followed for multidimensional codes which are not necessary for one-dimensional formulations. Moreover, requiring that the higher harmonics of the microbunching also be properly initialized with the correct statistics leads to additional complexities. We present some numerical results including the predicted incoherent, spontaneous emission as tests of the shot noise algorithm's correctness.
NASA Astrophysics Data System (ADS)
Harteveld, Casper
At many occasions we are asked to achieve a “balance” in our lives: when it comes, for example, to work and food. Balancing is crucial in game design as well as many have pointed out. In games with a meaningful purpose, however, balancing is remarkably different. It involves the balancing of three different worlds, the worlds of Reality, Meaning, and Play. From the experience of designing Levee Patroller, I observed that different types of tensions can come into existence that require balancing. It is possible to conceive of within-worlds dilemmas, between-worlds dilemmas, and trilemmas. The first, the within-world dilemmas, only take place within one of the worlds. We can think, for example, of a user interface problem which just relates to the world of Play. The second, the between-worlds dilemmas, have to do with a tension in which two worlds are predominantly involved. Choosing between a cartoon or a realistic style concerns, for instance, a tension between Reality and Play. Finally, the trilemmas are those in which all three worlds play an important role. For each of the types of tensions, I will give in this level a concrete example from the development of Levee Patroller. Although these examples come from just one game, I think the examples can be exemplary for other game development projects as they may represent stereotypical tensions. Therefore, to achieve harmony in any of these forthcoming games, it is worthwhile to study the struggles we had to deal with.
NASA Astrophysics Data System (ADS)
Kizilkaya, Elif A.; Gupta, Surendra M.
2005-11-01
In this paper, we compare the impact of different disassembly line balancing (DLB) algorithms on the performance of our recently introduced Dynamic Kanban System for Disassembly Line (DKSDL), which accommodates the uncertainties associated with disassembly and remanufacturing processing. We consider a case study to illustrate the impact of various DLB algorithms on the DKSDL. The approach to the solution, scenario settings, results, and discussion of the results are included.
Access Load Balancing with Analogy to Thermal Diffusion for Dynamic P2P File-Sharing Environments
NASA Astrophysics Data System (ADS)
Takaoka, Masanori; Uchida, Masato; Ohnishi, Kei; Oie, Yuji
In this paper, we propose a file replication method to achieve load balancing in terms of write access to storage devices ("write storage access load balancing" for short) in unstructured peer-to-peer (P2P) file-sharing networks in which the popularity trend of queried files varies dynamically. The proposed method uses the write storage access ratio as a load balance index in order to stabilize dynamic P2P file-sharing environments adaptively. In the proposed method, each peer autonomously controls its file replication ratio, defined as the probability of creating a replica of a file, in order to make write storage access loads uniform, in a manner similar to thermal diffusion phenomena. Theoretical analysis shows that the behavior of the proposed method indeed has an analogy to a thermal diffusion equation. In addition, simulation results reveal that the proposed method is able to realize write storage access load balancing in dynamic P2P file-sharing environments.
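A schematic of the diffusion analogy: a peer whose write-access load exceeds its neighbors' average raises its replication probability, pushing replicas toward less-loaded peers much as heat flows down a temperature gradient. The control law below is an illustrative assumption, not the paper's exact equation.

```python
def replication_ratio(my_load, neighbor_loads, alpha=0.5):
    """Diffusion-style replication probability: proportional to how far
    this peer's write-access load sits above its neighbors' average,
    clamped to [0, 1] so it can be used directly as a probability."""
    avg = sum(neighbor_loads) / len(neighbor_loads)
    ratio = alpha * (my_load - avg) / max(avg, 1e-9)
    return max(0.0, min(1.0, ratio))
```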
Drill string splined resilient tubular telescopic joint for balanced load drilling of deep holes
Garrett, W.R.
1984-03-06
A drill string splined resilient tubular telescopic joint for balanced load deep well drilling comprises a double acting damper having a very low spring rate upon both extension and contraction from the zero deflection condition. Stacks of spring rings are employed for the spring means, the rings being either shaped elastomer-metal sandwiches or, preferably, roller Belleville springs. The spline and spring means are disposed in an annular chamber formed by mandrel and barrel members constituting the telescopic joint. The chamber containing the spring means, and also containing the spline means, is filled with lubricant, the chamber being sealed with a pressure seal at its lower end and an inverted floating seal at its upper end. A prototype includes a bellows seal instead of the floating seal at the upper end of the tool, and a bellows in the side of the lubricant chamber provides volume compensation. A second lubricant chamber is provided below the pressure seal, the lower end of the second chamber being closed by a bellows seal and a further bellows in the side of the second chamber providing volume compensation. Modifications provide hydraulic jars.
NASA Astrophysics Data System (ADS)
Pitakaso, Rapeepan; Sethanan, Kanchana
2016-02-01
This article proposes the differential evolution algorithm (DE) and the modified differential evolution algorithm (DE-C) to solve the simple assembly line balancing problem type 1 (SALBP-1) and its extension in which the maximum number of machine types in a workstation is considered (SALBP-1M). The proposed algorithms are tested and compared with existing effective heuristics on various sets of test instances from the literature. The computational results show that the proposed heuristics are among the best of the compared methods.
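Both DE and DE-C build on the canonical DE/rand/1 mutation, v = x_r1 + F (x_r2 - x_r3), over real-valued vectors; a minimal sketch (the decoding of such vectors into workstation assignments is not shown):

```python
def de_mutant(x_r1, x_r2, x_r3, F=0.8):
    """Canonical DE/rand/1 mutation: v = x_r1 + F * (x_r2 - x_r3),
    applied component-wise to three distinct population members."""
    return [a + F * (b - c) for a, b, c in zip(x_r1, x_r2, x_r3)]
```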
Cain, Stephen M; McGinnis, Ryan S; Davidson, Steven P; Vitali, Rachel V; Perkins, Noel C; McLean, Scott G
2016-01-01
We utilize an array of wireless inertial measurement units (IMUs) to measure the movements of subjects (n=30) traversing an outdoor balance beam (zigzag and sloping) as quickly as possible both with and without load (20.5kg). Our objectives are: (1) to use IMU array data to calculate metrics that quantify performance (speed and stability) and (2) to investigate the effects of load on performance. We hypothesize that added load significantly decreases subject speed yet results in increased stability of subject movements. We propose and evaluate five performance metrics: (1) time to cross beam (less time=more speed), (2) percentage of total time spent in double support (more double support time=more stable), (3) stride duration (longer stride duration=more stable), (4) ratio of sacrum M-L to A-P acceleration (lower ratio=less lateral balance corrections=more stable), and (5) M-L torso range of motion (smaller range of motion=less balance corrections=more stable). We find that the total time to cross the beam increases with load (t=4.85, p<0.001). Stability metrics also change significantly with load, all indicating increased stability. In particular, double support time increases (t=6.04, p<0.001), stride duration increases (t=3.436, p=0.002), the ratio of sacrum acceleration RMS decreases (t=-5.56, p<0.001), and the M-L torso lean range of motion decreases (t=-2.82, p=0.009). Overall, the IMU array successfully measures subject movement and gait parameters that reveal the trade-off between speed and stability in this highly dynamic balance task. PMID:26669954
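Metric (4) above, the ratio of sacrum M-L to A-P acceleration RMS, is straightforward to compute from IMU acceleration samples (a sketch with assumed function and argument names):

```python
import math

def ml_ap_rms_ratio(ml_accel, ap_accel):
    """Ratio of sacrum medial-lateral (M-L) to anterior-posterior (A-P)
    acceleration RMS; a lower ratio means fewer lateral balance
    corrections, i.e. greater stability."""
    def rms(samples):
        return math.sqrt(sum(x * x for x in samples) / len(samples))
    return rms(ml_accel) / rms(ap_accel)
```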
Parallel load balancing strategy for Volume-of-Fluid methods on 3-D unstructured meshes
NASA Astrophysics Data System (ADS)
Jofre, Lluís; Borrell, Ricard; Lehmkuhl, Oriol; Oliva, Assensi
2015-02-01
Volume-of-Fluid (VOF) is one of the methods of choice to reproduce the interface motion in the simulation of multi-fluid flows. One of its main strengths is its accuracy in capturing sharp interface geometries, although requiring for it a number of geometric calculations. Under these circumstances, achieving parallel performance on current supercomputers is a must. The main obstacle for the parallelization is that the computing costs are concentrated only in the discrete elements that lie on the interface between fluids. Consequently, if the interface is not homogeneously distributed throughout the domain, standard domain decomposition (DD) strategies lead to imbalanced workload distributions. In this paper, we present a new parallelization strategy for general unstructured VOF solvers, based on a dynamic load balancing process complementary to the underlying DD. Its parallel efficiency has been analyzed and compared to the DD one using up to 1024 CPU-cores on an Intel SandyBridge based supercomputer. The results obtained on the solution of several artificially generated test cases show a speedup of up to ∼12× with respect to the standard DD, depending on the interface size, the initial distribution and the number of parallel processes engaged. Moreover, the new parallelization strategy presented is of general purpose, therefore, it could be used to parallelize any VOF solver without requiring changes on the coupled flow solver. Finally, note that although designed for the VOF method, our approach could be easily adapted to other interface-capturing methods, such as the Level-Set, which may present similar workload imbalances.
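The workload imbalance that motivates the dynamic balancing step can be quantified as the maximum over the mean of per-rank interface-cell counts (a common definition, assumed here rather than taken from the paper):

```python
def parallel_imbalance(cells_per_rank):
    """Workload imbalance of interface cells across MPI ranks: max / mean.
    1.0 means perfectly balanced; a VOF interface clustered on a few ranks
    gives a large value, motivating dynamic load balancing."""
    avg = sum(cells_per_rank) / len(cells_per_rank)
    return max(cells_per_rank) / avg
```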
NASA Astrophysics Data System (ADS)
Esin, S. B.; Trifonov, N. N.; Sukhorukov, Yu. G.; Yurchenko, A. Yu.; Grigor'eva, E. B.; Snegin, I. P.; Zhivykh, D. A.; Medvedkin, A. V.; Ryabich, V. A.
2015-09-01
More than 30 power units of thermal power stations based on the nondeaerating heat balance diagram successfully operate in the former Soviet Union. Most of them are power units with a power of 300 MW, equipped with HTGZ and LMZ turbines. They operate according to a variable electric load curve characterized by deep reductions during night minimums. Further extension of the power unit adjustment range makes it possible to follow the dispatch load curve and obtain profit for the electric power plant. The objective of this research is to carry out computational and experimental studies of the operating regimes of the regeneration system of steam-turbine plants within the extended adjustment range and under the conditions when the constraints on the regeneration system and its equipment are removed. Constraints in the heat balance diagram that reduce the power unit efficiency when extending the adjustment range have been considered. Test results are presented for the nondeaerating heat balance diagram with the HTGZ turbine. Turbine-driven and electric feed pump operation was studied at a power unit load of 120-300 MW. The reliability of feed pump operation is confirmed by a stable vibratory condition, the absence of cavitation noise and of vibration at the frequency that characterizes the cavitation condition, and the maintenance of oil temperature after the bearings within normal limits. The cavitation performance of the pumps in the studied range of their operation has been determined. Technical solutions are proposed for providing profitable and stable operation of regeneration systems when extending the range of adjustment of power unit load, including a nondeaerating scheme with high-pressure preheater (HPP) condensate discharge to the mixer. Such a regeneration system has been developed and studied on an operating power unit fitted with a deaeratorless thermal circuit that removes the HPP heating-steam condensate to the mixer.
Simulation of Propellant Loading System Senior Design Implement in Computer Algorithm
NASA Technical Reports Server (NTRS)
Bandyopadhyay, Alak
2010-01-01
Propellant loading from the storage tank to the external tank is one of the most important and time-consuming pre-launch ground operations for the launch vehicle. The propellant loading system is a complex integrated system involving many physical components, such as the storage tank filled with cryogenic fluid at a very low temperature, the long pipeline connecting the storage tank with the external tank, the external tank along with the flare stack, and vent systems for releasing the excess fuel. Some of the parameters most useful for design purposes are predictions of the pre-chill time, loading time, amount of fuel lost, maximum pressure rise, etc. The physics involved in the mathematical modeling is quite complex: the process is unsteady, there is phase change as some of the fuel changes from liquid to gas, and there is conjugate heat transfer in the pipe walls as well as between the solid and fluid regions. The simulation is therefore tedious and time consuming. Overall, this is a complex system, and the objective of the work is to involve students in the parametric study and optimization of numerical modeling toward the design of such a system. The students first have to become familiar with the physical process, the related mathematics, and the numerical algorithm. The work involves exploring (i) improved algorithms to make the transient simulation computationally efficient (reduced CPU time) and (ii) parametric studies to evaluate design parameters by changing the operational conditions.
Finding the ILL Load Balance: Quality and Quantity in the 1990's.
ERIC Educational Resources Information Center
Harer, John B.; Robbins, Rachel
1997-01-01
The desire for load leveling in interlibrary loans arises from a concern for fairness most often expressed by net lending institutions, libraries that lend more than they borrow. This article examines interlibrary loans; strategies for load leveling; and "best lending partner," a total quality management model for load leveling. (PEN)
A remote sensing surface energy balance algorithm for land (SEBAL). Part 2: Validation
NASA Astrophysics Data System (ADS)
Bastiaanssen, W. G. M.; Pelgrum, H.; Wang, J.; Ma, Y.; Moreno, J. F.; Roerink, G. J.; van der Wal, T.
1998-12-01
The surface fluxes obtained with the Surface Energy Balance Algorithm for Land (SEBAL), using remote sensing information and limited input data from the field, were validated with data available from the large-scale field experiments EFEDA (Spain), HAPEX-Sahel (Niger) and HEIFE (China). In 85% of the cases where field-scale surface flux ratios were compared with SEBAL-based surface flux ratios, the differences were within the range of instrumental inaccuracies. Without any calibration procedure, the root mean square error of the evaporative fraction Λ (latent heat flux/net available radiation) for footprints of a few hundred metres varied from Λ RMSE=0.10 to 0.20. Aggregation of several footprints to a length scale of a few kilometres reduced the overall error to five percent. Fluxes measured by aircraft during EFEDA were used to study the correctness of remotely sensed watershed fluxes (1 000 000 ha): the overall difference in evaporative fraction was negligible. For the Sahelian landscape in Niger, observed differences were larger (15%), which could be attributed to the rapid moisture depletion of the coarse-textured soils between the moment of image acquisition (18 September 1992) and the moment of in situ flux analysis (17 September 1992). For HEIFE, the average difference between SEBAL-estimated and ground-verified surface fluxes was 23 W m -2, which, considering that surface fluxes were not used for calibration, is encouraging. SEBAL estimates of evaporation from the subsealevel Qattara Depression in Egypt (2 000 000 ha) were consistent with the numerically predicted discharge from the groundwater system. In Egypt's Nile Delta, the evaporation from a distributed field-scale water balance model for a 700 000 ha irrigated agricultural region led to a difference of 5% with daily evaporative fluxes obtained from SEBAL. It is concluded that, for all study areas in arid zones, the errors average out if a larger number of pixels is considered. Part 1 of this paper
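The evaporative fraction Λ used throughout the validation is latent heat flux divided by net available energy, and footprint agreement is summarized as an RMSE. A small illustrative calculation with invented flux values:

```python
import math

def evaporative_fraction(le, rn, g):
    """Lambda = latent heat flux LE / net available energy (Rn - G)."""
    return le / (rn - g)

def rmse(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

# Invented footprint fluxes (LE, Rn, G) in W/m^2 and SEBAL-derived fractions.
field = [evaporative_fraction(le, rn, g)
         for le, rn, g in [(300, 500, 50), (200, 450, 40), (120, 400, 60)]]
sebal = [0.72, 0.44, 0.30]
err = rmse(field, sebal)   # on the order of the 0.10-0.20 reported range or below
```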
Soewono, C. N.; Takaki, N.
2012-07-01
In this work, a genetic algorithm was proposed to solve the fuel loading pattern optimization problem in a thorium-fueled heavy water reactor. The objectives of the optimization were to maximize the conversion ratio and minimize the power peaking factor. These objectives were simultaneously optimized using a non-dominated Pareto-based population ranking method. Members of the non-dominated population were assigned selection probabilities based on their rankings in a manner similar to Baker's single-criterion ranking selection procedure. A selected non-dominated member was bred through a simple mutation or one-point crossover process to produce a new member. The genetic algorithm program was developed in FORTRAN 90, while the neutronic calculation and analysis were done with the COREBN code, a module for core burn-up calculation in SRAC. (authors)
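Non-dominated (Pareto) ranking over the two objectives, maximizing conversion ratio while minimizing power peaking factor, can be sketched as follows; the candidate loading patterns are invented for illustration:

```python
def dominates(a, b):
    """a dominates b if its conversion ratio is no lower AND its power
    peaking factor is no higher, with at least one strict improvement."""
    (cr_a, ppf_a), (cr_b, ppf_b) = a, b
    return cr_a >= cr_b and ppf_a <= ppf_b and (cr_a > cr_b or ppf_a < ppf_b)

def pareto_front(pop):
    """Rank-1 members: those dominated by no other member."""
    return [p for p in pop if not any(dominates(q, p) for q in pop if q is not p)]

# Invented (conversion ratio, power peaking factor) pairs for candidate
# loading patterns; higher CR and lower PPF are both better.
population = [(0.95, 1.8), (0.90, 1.4), (0.97, 2.1), (0.88, 1.9), (0.92, 1.4)]
front = pareto_front(population)
```

In a full GA the rank-1 set would then receive the highest selection probabilities, lower ranks progressively less.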
NASA Astrophysics Data System (ADS)
Tang, Qiuhua; Li, Zixiang; Zhang, Liping; Floudas, C. A.; Cao, Xiaojun
2015-09-01
Due to the NP-hardness of the two-sided assembly line balancing (TALB) problem, the multiple constraints arising in real applications are less studied, especially when one task is involved in several constraints. In this paper, an effective hybrid algorithm is proposed to address the TALB problem with multiple constraints (TALB-MC). Considering the discrete attribute of TALB-MC and the continuous attribute of the standard teaching-learning-based optimization (TLBO) algorithm, the random-keys method is employed in the task permutation representation to bridge the gap between them. Subsequently, a special mechanism for handling multiple constraints is developed. In this mechanism, the direction constraint of each task is ensured by a direction check and adjustment. The zoning constraints and the synchronism constraints are satisfied by teasing out the hidden correlations among constraints. The positional constraint is allowed to be violated to some extent during decoding and is penalized in the cost function. Finally, with the TLBO seeking the global optimum, a variable neighborhood search (VNS) is hybridized to extend the local search space. The experimental results show that the proposed hybrid algorithm outperforms the late acceptance hill-climbing algorithm (LAHC) for TALB-MC in most cases, especially for large-size problems with multiple constraints, and demonstrates a good balance between exploration and exploitation. This research thus proposes an effective and efficient algorithm for solving the TALB-MC problem by hybridizing TLBO and VNS.
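The random-keys trick that bridges TLBO's continuous search space and the discrete task sequence is simply "sort the keys, read off the indices". A minimal sketch:

```python
import random

def decode_random_keys(keys):
    """Sort the continuous keys; the rank of each key gives the position of
    its task in the assembly sequence."""
    return [task for _, task in sorted((k, i) for i, k in enumerate(keys))]

random.seed(42)
keys = [random.random() for _ in range(6)]   # a point in TLBO's continuous space
sequence = decode_random_keys(keys)          # the corresponding task permutation
```

Because any real-valued vector decodes to a valid permutation, the continuous TLBO update rules can be applied unchanged.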
DESIGN NOTE: A low interaction two-axis wind tunnel force balance designed for large off-axis loads
NASA Astrophysics Data System (ADS)
Ostafichuk, Peter M.; Green, Sheldon I.
2002-10-01
A novel two-axis wind tunnel force balance using air bushings for off-axis load compensation has been developed. The design offers a compact, robust, and versatile option for precisely measuring horizontal force components irrespective of vertical and moment loads. Two independent stages of cylindrical bushings support large moments and vertical force; there is low interaction due to the minimal friction along the horizontal measurement axes. The current design measures drag and side forces up to 70 N and can safely operate in the presence of vertical loads as large as 2200 N and moment loads up to 425, 750, and 425 N m in roll, pitch, and yaw, respectively. Eleven drag axis calibration trials were conducted with a variety of applied vertical forces and pitching moments. The individual linear calibration slopes for the trials agreed to within 0.18% and the largest residual from all calibrations was 0.38% of full scale. As the residuals were found to obey a normal distribution, with 99% certainty the expected drag resolution of the device is better than 0.30% of full scale, independent of off-axis loads.
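The calibration procedure described, fitting a linear response per axis and expressing residuals as a percentage of full scale, can be sketched as below; the force/output pairs are hypothetical, not the paper's data:

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for a one-axis calibration."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx, my - (sxy / sxx) * mx

# Hypothetical drag-axis calibration: applied force (N) vs. sensor output (V).
force = [0.0, 17.5, 35.0, 52.5, 70.0]
volts = [0.01, 0.505, 1.002, 1.498, 2.0]
slope, intercept = linear_fit(force, volts)
full_scale_out = slope * 70.0                     # predicted output at 70 N
residuals_pct = [abs(v - (slope * f + intercept)) / full_scale_out * 100
                 for f, v in zip(force, volts)]   # percent of full scale
```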
NASA Astrophysics Data System (ADS)
Williams, Mark R.; King, Kevin W.; Macrae, Merrin L.; Ford, William; Van Esbroeck, Chris; Brunke, Richard I.; English, Michael C.; Schiff, Sherry L.
2015-11-01
Accurate estimates of annual nutrient loads are required to evaluate trends in water quality following changes in land use or management and to calibrate and validate water quality models. While much emphasis has been placed on understanding the uncertainty of nutrient load estimates in large, naturally drained watersheds, few studies have focused on tile-drained fields and small tile-drained headwater watersheds. The objective of this study was to quantify uncertainty in annual dissolved reactive phosphorus (DRP) and nitrate-nitrogen (NO3-N) load estimates from four tile-drained fields and two small tile-drained headwater watersheds in Ohio, USA, and Ontario, Canada. High temporal resolution datasets of discharge (10-30 min) and nutrient concentration (2 h to 1 d) were collected over a 1-2 year period at each site and used to calculate a reference nutrient load. Monte Carlo simulations were used to subsample the measured data to assess the effects of sample frequency, calculation algorithm, and compositing strategy on the uncertainty of load estimates. Results showed that uncertainty in annual DRP and NO3-N load estimates was influenced by both the sampling interval and the load estimation algorithm. Uncertainty in annual nutrient load estimates increased with increasing sampling interval for all of the load estimation algorithms tested. Continuous discharge measurements and linear interpolation of nutrient concentrations yielded the least amount of uncertainty, but still tended to underestimate the reference load. Compositing strategies generally improved the precision of load estimates compared to discrete grab samples; however, they often reduced the accuracy. Based on the results of this study, we recommend that nutrient concentration be measured every 13-26 h for DRP and every 2.7-17.5 d for NO3-N in tile-drained fields and small tile-drained headwater watersheds to accurately (±10%) estimate annual loads.
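The best-performing estimator in the study, continuous discharge with linearly interpolated concentrations, amounts to integrating Q(t)·C(t) over time. A toy sketch with invented units and values:

```python
def total_load(sample_t, sample_c, q_t, q_v):
    """Load = sum of Q(t) * C(t) * dt over uniform discharge intervals,
    with concentration C linearly interpolated between grab samples."""
    def interp(t):
        if t <= sample_t[0]:
            return sample_c[0]
        if t >= sample_t[-1]:
            return sample_c[-1]
        for (t0, c0), (t1, c1) in zip(zip(sample_t, sample_c),
                                      zip(sample_t[1:], sample_c[1:])):
            if t0 <= t <= t1:
                return c0 + (c1 - c0) * (t - t0) / (t1 - t0)
    dt = q_t[1] - q_t[0]   # uniform discharge timestep
    return sum(q * interp(t) * dt for t, q in zip(q_t, q_v))

# Invented units: t in hours, Q in m^3/h, C in g/m^3, so the load is in grams.
# Concentration sampled 12-hourly, discharge recorded hourly at 50 m^3/h.
load_g = total_load([0, 12, 24], [0.10, 0.40, 0.15],
                    list(range(25)), [50] * 25)
```

Subsampling the concentration series less often and re-running this estimator is exactly the kind of Monte Carlo experiment the study performs.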
Estimating sediment loads in an intra-Apennine catchment: balance between modeling and monitoring
NASA Astrophysics Data System (ADS)
Pelacani, Samanta; Cassi, Paola; Borselli, Lorenzo
2010-05-01
In this study we compare the results of a soil erosion model applied at the watershed scale with the suspended sediment measured in a stream network affected by motorway construction. A sediment delivery model is applied at the watershed scale; the evaluation of sediment delivery is related to a flux connectivity index that describes the internal linkages between runoff and sediment sources in the upper parts of catchments and the receiving sinks. An analysis of fine suspended sediment transport and storage was conducted for a stream inlet of the Bilancino reservoir, a principal water supply of the city of Florence. The suspended sediment was collected from a section of river defined as a closed system, using time-integrating suspended sediment samplers. The sediment deposited within the sampling traps was recovered after storm events and provided information on the overall contribution of the potential sediment sources. Hillslope gross erosion was assessed by a USLE-type approach. A soil survey at 1:25.000 scale and a soil database were created to calculate, for each soil unit, the erodibility coefficient K using a new algorithm (Salvador Sanchis et al. 2007). The erosivity coefficient R was obtained by applying geostatistical methods that take into account elevation and valley morphology. Furthermore, we evaluate a sediment delivery ratio (SDR) for the entire watershed. This factor is used to correct the output of the USLE-type model. The innovative approach consists of an SDR factor that is variable in space and time because it is related to the flux connectivity index IC (Borselli et al. 2008), based on the distribution of land use and topographic features. The aim of this study is to understand how well the model simulates the real processes at work in the watershed and subsequently to calibrate the model with the results obtained from the monitoring of suspended sediment in the streams. From first results, it appears that human activities, namely highway construction, have resulted in
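A USLE-type gross-erosion estimate corrected by a sediment delivery ratio, the structure the authors describe, reduces to a product of factors. The factor values below are placeholders, not those of the study area:

```python
def usle_gross_erosion(r, k, ls, c, p):
    """USLE-type hillslope gross erosion A = R * K * LS * C * P."""
    return r * k * ls * c * p

def sediment_yield(a_gross, sdr):
    """Sediment reaching the stream: gross erosion scaled by a sediment
    delivery ratio (0 <= SDR <= 1), here a single lumped value rather than
    the spatially variable, connectivity-based SDR of the paper."""
    return a_gross * sdr

# Placeholder factors: erosivity R, erodibility K, slope length/steepness LS,
# cover C, support practice P.
a = usle_gross_erosion(r=1200, k=0.03, ls=1.6, c=0.2, p=1.0)
delivered = sediment_yield(a, sdr=0.35)
```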
NASA Astrophysics Data System (ADS)
Muraleedharan, Rajani
2011-06-01
The future of metering networks requires the adaptation of different sensor technologies while reducing energy exploitation. In this paper, a routing protocol with the ability to adapt and communicate reliably over varied IEEE standards is proposed. Due to sensors' resource constraints, such as memory, energy, and processing power, an algorithm that balances resources without compromising performance is preferred. The proposed A-PEARL protocol is tested under harsh simulated scenarios such as sensor failure and fading conditions. The inherent features of the A-PEARL protocol, such as data aggregation, fusion, and channel hopping, enable minimal resource consumption and secure communication.
NASA Astrophysics Data System (ADS)
Tufford, D. L.; Samadi, S.; Carbone, G. J.
2013-12-01
Recent studies have highlighted the potential challenges in US southeastern watersheds from climate variability. There may be shifts in water balance due to the complexity of the flow-generation processes that determine how water is partitioned in these landscapes. The main objective of this study was to capture the feedback relationships among the water balance components using the Soil & Water Assessment Tool (SWAT) watershed-scale streamflow model linked with the Sequential Uncertainty Fitting (SUFI-2) and Particle Swarm Optimization (PSO) parameter uncertainty algorithms in the Waccamaw River watershed, a low-gradient forested watershed on the Coastal Plain of the southeastern United States. Streamflow water balance uncertainty analysis suggested close correspondence of the model with the physical behavior and system dynamics during different hydroclimatological periods in the 2003-2007 calibration interval. The SUFI-2 water balance analysis revealed that surface runoff, ground water, and lateral flow contributed 22.2%, 3.9% and 0.4% of the total water yield during the simulation period, while the PSO analysis indicated contributions of 16.7%, 13.2% and 0.3%, respectively. Both uncertainty methods found that 71.1% of the total rainfall was lost to evapotranspiration during the simulation interval. The total water yields using both algorithms were overpredicted by up to 14.0% of the annual rainfall inputs during the dry period (2007), which was related to the extra contribution of shallow aquifer flow to the river system. Both algorithms also indicated that surface flow and ground water runoff dominated the water balance in October and December, respectively, over the prediction interval. Moreover, evaluating parameter uncertainty and error indicated that the distribution of prediction uncertainty was smallest in the wet year (2006) and greatest toward the end of the dry period, particularly within alluvial riparian floodplains. Water balance estimation with uncertainty quantification can
NASA Technical Reports Server (NTRS)
Woods, Claudia M.; Brewe, David E.
1988-01-01
A numerical solution to a theoretical model of vapor cavitation in a dynamically loaded journal bearing is developed utilizing a multigrid iteration technique. The method is compared with a noniterative approach in terms of computational time and accuracy. The computational model is based on the Elrod algorithm, a control volume approach to the Reynolds equation which mimics the Jakobsson-Floberg and Olsson cavitation theory. Besides accounting for a moving cavitation boundary and conservation of mass at the boundary, it also conserves mass within the cavitated region via a smeared mass or striated flow extending to both surfaces in the film gap. The mixed nature of the equations (parabolic in the full film zone and hyperbolic in the cavitated zone) coupled with the dynamic aspects of the problem create interesting difficulties for the present solution approach. Emphasis is placed on the methods found to eliminate solution instabilities. Excellent results are obtained for both accuracy and reduction of computational time.
NASA Astrophysics Data System (ADS)
Koloch, Grzegorz; Kaminski, Bogumil
2010-10-01
In this paper we examine a modification of the classical Vehicle Routing Problem (VRP) in which the shapes of transported cargo are accounted for. This problem, known as the three-dimensional VRP with loading constraints (3D-VRP), is appropriate when transported commodities are not perfectly divisible but have fixed and heterogeneous dimensions. Restrictions on allowable cargo positionings are also considered. These restrictions are derived from business practice, and they extend the baseline 3D-VRP formulation considered by Koloch and Kaminski (2010). In particular, we investigate how the additional restrictions influence the relative performance of two proposed optimization algorithms: the nested and the joint one. The performance of both methods is compared on artificial problems and on a large-scale real-life case study.
NASA Technical Reports Server (NTRS)
Woods, C. M.; Brewe, D. E.
1989-01-01
A numerical solution to a theoretical model of vapor cavitation in a dynamically loaded journal bearing is developed utilizing a multigrid iteration technique. The method is compared with a noniterative approach in terms of computational time and accuracy. The computational model is based on the Elrod algorithm, a control volume approach to the Reynolds equation which mimics the Jakobsson-Floberg and Olsson cavitation theory. Besides accounting for a moving cavitation boundary and conservation of mass at the boundary, it also conserves mass within the cavitated region via a smeared mass or striated flow extending to both surfaces in the film gap. The mixed nature of the equations (parabolic in the full film zone and hyperbolic in the cavitated zone) coupled with the dynamic aspects of the problem create interesting difficulties for the present solution approach. Emphasis is placed on the methods found to eliminate solution instabilities. Excellent results are obtained for both accuracy and reduction of computational time.
Multi-Objective Optimization of Heat Load and Run Time for CEBAF Linacs Using Genetic Algorithms
NASA Astrophysics Data System (ADS)
Reeves, Cody; Terzic, Balsa; Hofler, Alicia
2014-09-01
The Continuous Electron Beam Accelerator Facility (CEBAF) consists of two linear accelerators (Linacs) connected by arcs. Within each Linac, there are 200 niobium cavities that use superconducting radio frequency (SRF) to accelerate electrons. The gradients for the cavities are selected to optimize two competing objectives: heat load (the energy required to cool the cavities) and trip rate (how often the beam turns off within an hour). This results in a multidimensional, multi-objective, nonlinear system of equations that is not readily solved by analytical methods. This study improved a genetic algorithm (GA), which applies the concept of natural selection. The primary focus was making this GA more efficient to allow for more cost-effective solutions in the same amount of computation time. Two methods used were constraining the maximum value of the objectives and utilizing previously simulated solutions as the initial generation. A third method of interest involved refining the GA by combining the two objectives into a single weighted-sum objective, which collapses the set of optimal solutions into a single point. By combining these methods, the GA can be made 128 times as effective, reducing computation time from 30 min to 12 sec. This is crucial because, when a cavity must be turned off, a new solution needs to be computed quickly. This work is of particular interest since it provides an efficient algorithm that can be easily adapted to any Linac facility.
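The weighted-sum refinement mentioned above collapses the two objectives into one scalar cost. A sketch with invented heat-load and trip-rate candidates and assumed normalization scales:

```python
def weighted_sum(heat_load, trip_rate, w, heat_scale=3000.0, trip_scale=10.0):
    """Scalarized cost: w trades normalized heat load against normalized
    trip rate (0 <= w <= 1). The scales are assumptions, not CEBAF values."""
    return w * heat_load / heat_scale + (1 - w) * trip_rate / trip_scale

# Invented candidate gradient settings: (heat load in W, trips per hour).
candidates = [(2500.0, 6.0), (2800.0, 2.0), (3100.0, 0.5)]
best_even = min(candidates, key=lambda c: weighted_sum(*c, w=0.5))
best_heat = min(candidates, key=lambda c: weighted_sum(*c, w=1.0))
```

Sweeping w from 0 to 1 traces out points of the Pareto front one scalar optimization at a time, which is why the set of optimal solutions collapses to a single point for each weight.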
NASA Astrophysics Data System (ADS)
Kumar, Vijay M.; Murthy, ANN; Chandrashekara, K.
2012-05-01
The production planning problem of a flexible manufacturing system (FMS) concerns decisions that have to be made before an FMS begins to produce parts according to a given production plan during an upcoming planning horizon. The main aspect of production planning is the machine loading problem, in which a subset of jobs to be manufactured is selected and their operations are assigned to the relevant machines. Such problems are not only combinatorial optimization problems but also NP-hard, making it difficult to obtain satisfactory solutions using traditional optimization techniques. In this paper, an attempt has been made to address the machine loading problem with the objectives of minimizing system unbalance and maximizing throughput simultaneously, while satisfying the system constraints related to available machining time and tool slots, using a meta-hybrid heuristic technique based on genetic algorithm and particle swarm optimization. The results reported in this paper demonstrate the model's efficiency and examine the performance of the system with respect to measures such as throughput and system utilization.
Zimmermann, Frauke; Schwenninger, Christoph; Nolten, Ulrich; Firmbach, Franz Peter; Elfring, Robert; Radermacher, Klaus
2012-08-01
Preservation and recovery of the mechanical leg axis as well as good rotational alignment of the prosthesis components and well-balanced ligaments are essential for the longevity of total knee arthroplasty (TKA). In the framework of the OrthoMIT project, the genALIGN system, a new navigated implantation approach based on intra-operative force-torque measurements, has been developed. With this system, optical or magnetic position tracking as well as any fixation of invasive rigid bodies are no longer necessary. For the alignment of the femoral component along the mechanical axis, a sensor-integrated instrument measures the torques resulting from the deviation between the instrument's axis and the mechanical axis under manually applied axial compression load. When both axes are coaxial, the resulting torques equal zero, and the tool axis can be fixed with respect to the bone. For ligament balancing and rotational alignment of the femoral component, the genALIGN system comprises a sensor-integrated tibial trial inlay measuring the amplitude and application points of the forces transferred between femur and tibia. Hereby, the impact of ligament tensions on knee joint loads can be determined over the whole range of motion. First studies with the genALIGN system, including a comparison with an imageless navigation system, show the feasibility of the concept. PMID:22868781
Chen, Yousu; Huang, Zhenyu; Rice, Mark J.
2012-12-27
Contingency analysis studies are necessary to assess the impact of possible power system component failures. The results of contingency analysis are used to ensure grid reliability and, in power market operation, to test the feasibility of market solutions. Currently, these studies are performed in real time based on the current operating conditions of the grid with a pre-selected contingency list, which might overlook some critical contingencies caused by variable system status. To have a complete picture of a power grid, more contingencies need to be studied to improve grid reliability. High-performance computing techniques hold the promise of being able to perform the analysis for more contingency cases within a much shorter time frame. This paper evaluates the performance of counter-based dynamic load-balancing schemes for a massive contingency analysis program on 10,000+ cores. One million N-2 contingency analysis cases with a Western Electricity Coordinating Council power grid model have been used to demonstrate the performance. Speedups of 3964 on 4096 cores and 7877 on 10,240 cores were obtained. This paper reports the performance of the load-balancing scheme with a single counter and with two counters, describes disk I/O issues, and discusses other potential techniques for further improving the performance.
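A counter-based dynamic scheme of this kind can be sketched with a shared, lock-protected counter: an idle worker grabs the next contingency case index, so faster workers naturally process more cases. A toy thread-based sketch (the paper's implementation targets MPI-scale machines):

```python
import threading

def run_contingencies(n_cases, n_workers, analyze):
    """Counter-based dynamic scheduling: each worker atomically takes the
    next undispatched case index until every case has been analyzed."""
    counter = {"next": 0}
    lock = threading.Lock()
    results = [None] * n_cases

    def worker():
        while True:
            with lock:                      # the single shared counter
                case = counter["next"]
                if case >= n_cases:
                    return
                counter["next"] += 1
            results[case] = analyze(case)   # uneven per-case cost is fine

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# Stand-in "analysis": square the case index.
out = run_contingencies(100, 8, analyze=lambda case: case * case)
```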
NASA Astrophysics Data System (ADS)
Xie, H.; Hendrickx, J.; Kurc, S.; Small, E.
2002-12-01
Evapotranspiration (ET) is one of the most important components of the water balance, but also one of the most difficult to measure. Field techniques such as soil water balances and Bowen ratio or eddy covariance techniques are local, ranging from point to field scale. SEBAL (Surface Energy Balance Algorithm for Land) is an image-processing model that calculates ET and other energy exchanges at the earth's surface. SEBAL uses satellite image data (TM/ETM+, MODIS, AVHRR, ASTER, and so on) measuring visible, near-infrared, and thermal infrared radiation. SEBAL algorithms predict a complete radiation and energy balance for the surface along with fluxes of sensible heat and aerodynamic surface roughness (Bastiaanssen et al., 1998; Allen et al., 2001). We are constructing a GIS-based database that includes spatially distributed estimates of ET from remotely sensed data at a resolution of about 30 m. The SEBAL code will be optimized for this region via comparison with surface-based observations of ET, reference ET (from wind speed, solar radiation, humidity, air temperature, and rainfall records), surface temperature, albedo, and so on. The observed data are collected at a series of towers in the middle Rio Grande Basin. The satellite image provides the instantaneous ET (ET_inst) only; therefore, estimating 24-hour ET (ET_24) requires some assumptions. Two of these assumptions will be evaluated for the study area: (1) that the instantaneous evaporative fraction (EF) is equal to the 24-hour averaged value, and (2) that the instantaneous ETrF (analogous to a 'crop coefficient', equal to instantaneous ET divided by instantaneous reference ET) is equal to the 24-hour averaged value. Seasonal ET will be estimated by expanding the 24-hour ET proportionally to a reference ET derived from weather data. References: Bastiaanssen, W.G.M., M. Menenti, R.A. Feddes, and A.A.M. Holtslag, 1998, A remote sensing surface energy balance algorithm for land (SEBAL): 1
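Assumption (1), a constant evaporative fraction over the day, turns an instantaneous EF into a 24-hour ET estimate. A minimal sketch with invented daily energy totals:

```python
def et24_from_ef(ef_inst, rn24, g24, lam=2.45e6):
    """ET_24 (mm/day) from the instantaneous evaporative fraction applied to
    daily available energy Rn24 - G24 (J/m^2/day), assuming EF is constant
    over the day. lam is the latent heat of vaporization (J/kg);
    1 kg/m^2 of evaporated water equals 1 mm depth."""
    return ef_inst * (rn24 - g24) / lam

# Invented daily energy totals for a semi-arid site.
rn24 = 14.0e6   # net radiation, J/m^2/day
g24 = 1.0e6     # soil heat flux, J/m^2/day
et_mm = et24_from_ef(ef_inst=0.65, rn24=rn24, g24=g24)   # a few mm/day
```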
NASA Astrophysics Data System (ADS)
Chen, M.; Senay, G. B.; Verdin, J. P.; Rowland, J.
2014-12-01
Current regional to global and daily to annual evapotranspiration (ET) estimation mainly relies on surface energy balance (SEB) ET models or statistical empirical methods driven by remote sensing data and various meteorology databases. However, these ET models face challenging issues: large uncertainties from inevitable input errors, poorly defined parameters, and inadequate model structures. The eddy covariance measurements of water, energy, and carbon fluxes at globally available FLUXNET tower sites provide a feasible opportunity to assess ET modelling uncertainties. In this study, we focused on an uncertainty analysis of the operational Simplified Surface Energy Balance (SSEBop) algorithm for ET estimation at multiple Ameriflux tower sites with diverse land cover characteristics and climatic conditions. The input land surface temperature (LST) data of the algorithm were adopted from the 8-day composite 1-km Moderate Resolution Imaging Spectroradiometer (MODIS) land surface temperature product. The other input data were taken from the Ameriflux database. Results of the statistical analysis indicated that uncertainties or random errors from the input variables and parameters of SSEBop led to daily and seasonal ET estimates with relative errors around 20% across multiple flux tower sites distributed across different biomes. This uncertainty of SSEBop lies in the 20-30% error range of similar SEB-based ET algorithms, such as the Surface Energy Balance System and the Surface Energy Balance Algorithm for Land. The R2 between daily and seasonal ET estimates by SSEBop and eddy covariance ET measurements at multiple Ameriflux tower sites exceeds 0.7, and even reaches 0.9 for croplands, grasslands, and forests, suggesting that the systematic error or bias of SSEBop is acceptable. In summary, the uncertainty assessment verifies that SSEBop is a reliable method for wide-area ET calculation and is especially useful for detecting drought years and relative drought severity for agricultural production.
Remote sensing based energy balance algorithms for mapping ET: Current status and future challenges
Technology Transfer Automated Retrieval System (TEKTRAN)
Evapotranspiration (ET) is an essential component of the water balance and a major consumptive use of irrigation water and precipitation on crop land. Remote sensing based agrometeorological models are presently most suited for estimating crop water use at both field and regional scales. Numerous ET...
Algorithm for Bottom Charge based on Load-duration Curve of Plug-in Hybrid Electric Vehicles
NASA Astrophysics Data System (ADS)
Takagi, Masaaki; Iwafune, Yumiko; Yamamoto, Hiromi; Yamaji, Kenji; Okano, Kunihiko; Hiwatari, Ryouji; Ikeya, Tomohiko
In the transport sector, the Plug-in Hybrid Electric Vehicle (PHEV) is being developed as an environmentally friendly vehicle. A PHEV is a kind of hybrid electric vehicle that can be charged from the power grid. Therefore, when analyzing the CO2 emission reduction effect of PHEVs, we need to count the emissions from the power sector. In addition, the emissions from the power sector are greatly influenced by the charge pattern, i.e., the timing of charging. For example, we can realize load leveling by bottom charge, that is, charging late at night. If nuclear power plants were introduced thanks to load leveling, we could expect substantial CO2 reductions. This study proposes an algorithm for bottom charge based on the load-duration curve of charging. By adjusting the amplitude of the charging power, we can bring the shape of the curve close to that of the ideal bottom charge. We evaluated the algorithm using an optimal generation planning model. The evaluation index is the difference between a Target case, in which PHEVs ideally charge to raise the bottom demand, and a Proposal case, in which PHEVs charge using the proposed algorithm. The annual CO2 emissions of the Target case and the Proposal case are 20.0% and 17.5% less than those of the Reference case, respectively. The ratio of the reduction effect of the Proposal case to that of the Target case is thus 87.5%. These results show that the proposed algorithm is effective in raising the bottom of the daily load curve.
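Ideal bottom charging is essentially valley filling: put each increment of charging energy into the hour whose combined load is currently lowest, subject to a per-hour power cap. A discretized sketch with invented demand figures (not the paper's load-duration-curve algorithm itself, but the Target-case idea it approximates):

```python
def bottom_charge(base_load, energy, max_power, step=0.1):
    """Valley filling: allocate each charging increment to the hour with the
    lowest combined (base + already scheduled) load, capped per hour."""
    n = len(base_load)
    increments = int(round(energy / step))
    cap = int(round(max_power / step))
    units = [0] * n                       # charging increments per hour
    for _ in range(increments):
        i = min((h for h in range(n) if units[h] < cap),
                key=lambda h: base_load[h] + units[h] * step)
        units[i] += 1
    return [u * step for u in units]

# Invented hourly base demand (GW) with a night-time valley; 3 GWh of PHEV
# charging, at most 1 GW of aggregate charging power in any hour.
base = [30, 28, 26, 25, 25, 27, 32, 38]
plan = bottom_charge(base, energy=3.0, max_power=1.0)
# plan -> [0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0]: only the valley is filled.
```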
NASA Astrophysics Data System (ADS)
Dowdell, David C.; Matthews, G. Peter; Wells, Ian
Two globally averaged mass balance models have been developed to investigate the sensitivity and future level of atmospheric chlorine and bromine as a result of the emission of 14 chloro- and 3 bromo-carbons. The models use production, growth, lifetime and concentration data for each of the halocarbons and divide the production into one of eight uses, these being aerosol propellants, cleaning agents, blowing agents in open and closed cell foams, non-hermetic and hermetic refrigeration, fire retardants and a residual "other" category. Each use category has an associated emission profile which is built into the models to take into account the proportion of halocarbon retained in equipment for a characteristic period of time before its release. Under the Montreal Protocol 3 requirements, a peak chlorine loading of 3.8 ppb is attained in 1994, which does not reduce to 2.0 ppb (the approximate level of atmospheric chlorine when the ozone hole formed) until 2053. The peak bromine loading is 22 ppt, also in 1994, which decays to 12 ppt by the end of next century. The models have been used to (i) compare the effectiveness of Montreal Protocols 1, 2 and 3 in removing chlorine from the atmosphere, (ii) assess the influence of the delayed emission assumptions used in these models compared to immediate emission assumptions used in previous models, (iii) assess the relative effect on the chlorine loading of a tightening of the Montreal Protocol 3 restrictions, and (iv) calculate the influence of chlorine and bromine chemistry as well as the faster phase out of man-made methyl bromide on the bromine loading.
Increasing the precision and accuracy of top-loading balances: application of experimental design.
Bzik, T J; Henderson, P B; Hobbs, J P
1998-01-01
The traditional method of estimating the weight of multiple objects is to obtain the weight of each object individually. We demonstrate that the precision and accuracy of these estimates can be improved by using a weighing scheme in which multiple objects are simultaneously on the balance. The resulting system of linear equations is solved to yield the weight estimates for the objects. Precision and accuracy improvements can be made by using a weighing scheme without requiring any more weighings than the number of objects when a total of at least six objects are to be weighed. It is also necessary that multiple objects can be weighed with about the same precision as that obtained with a single object, and the scale bias remains relatively constant over the set of weighings. Simulated and empirical examples are given for a system of eight objects in which up to five objects can be weighed simultaneously. A modified Plackett-Burman weighing scheme yields a 25% improvement in precision over the traditional method and implicitly removes the scale bias from seven of the eight objects. Applications of this novel use of experimental design techniques are shown to have potential commercial importance for quality control methods that rely on the mass change rate of an object. PMID:21644600
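The multi-object scheme reduces to solving an overdetermined linear system: each weighing measures the total weight of a known subset of objects, and least squares recovers the individual weights. A minimal sketch; the 5x4 design matrix and weights below are illustrative, not the modified Plackett-Burman design used in the study.

```python
import numpy as np

# Each row of X is one weighing; a 1 means that object is on the balance.
true_w = np.array([5.0, 3.0, 7.0, 2.0])
X = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [1, 0, 0, 1],
              [1, 0, 1, 0]], dtype=float)

readings = X @ true_w  # ideal (noise-free) balance readings

# Solve the linear system for the individual object weights.
est, *_ = np.linalg.lstsq(X, readings, rcond=None)
```

With noisy readings, the same least-squares solve averages each object over several weighings, which is the source of the precision gain the abstract reports.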
NASA Technical Reports Server (NTRS)
Brooks, D. R.; Harrison, E. F.; Minnis, P.; Suttles, J. T.; Kandel, R. S.
1986-01-01
A brief description is given of how temporal and spatial variability in the earth's radiative behavior influences the goals of satellite radiation monitoring systems and how some previous systems have addressed the existing problems. Then, results of some simulations of radiation budget monitoring missions are presented. These studies led to the design of the Earth Radiation Budget Experiment (ERBE). A description is given of the temporal and spatial averaging algorithms developed for the ERBE data analysis. These algorithms are intended primarily to produce monthly averages of the net radiant exitance on regional, zonal, and global scales and to provide insight into the regional diurnal variability of radiative parameters such as albedo and long-wave radiant exitance. The algorithms are applied to scanner and nonscanner data for up to three satellites. Modeling of daily shortwave albedo and radiant exitance with satellite sampling that is insufficient to fully account for changing meteorology is discussed in detail. Studies performed during the ERBE mission and software design are reviewed. These studies provide quantitative estimates of the effects of temporally sparse and biased sampling on inferred diurnal and regional radiative parameters. Other topics covered include long-wave diurnal modeling, extraction of a regional monthly net clear-sky radiation budget, the statistical significance of observed diurnal variability, quality control of the analysis, and proposals for validating the results of ERBE time and space averaging.
NASA Astrophysics Data System (ADS)
Bhattarai, Nishan
The flow of water and energy fluxes at the Earth's surface and within the climate system is difficult to quantify. Recent advances in remote sensing technologies have provided scientists with a useful means to improve characterization of these complex processes. However, many challenges remain that limit our ability to optimize remote sensing data in determining evapotranspiration (ET) and energy fluxes. For example, periodic cloud cover limits the operational use of remotely sensed data from passive sensors in monitoring seasonal fluxes. Additionally, there are many remote sensing-based single-source surface energy balance (SEB) models, but no clear guidance on which one to use in a particular application. Two widely used models---surface energy balance algorithm for land (SEBAL) and mapping ET at high resolution with internalized calibration (METRIC)---require substantial human intervention, which limits their applicability in broad-scale studies. This dissertation addressed some of these challenges by proposing novel ways to optimize available resources within the SEB-based ET modeling framework. A simple regression-based Landsat-Moderate Resolution Imaging Spectroradiometer (MODIS) fusion model was developed to integrate Landsat spatial and MODIS temporal characteristics in calculating ET. The fusion model produced reliable estimates of seasonal ET at moderate spatial resolution while mitigating the impact that cloud cover can have on image availability. The dissertation also evaluated five commonly used remote sensing-based single-source SEB models and found the surface energy balance system (SEBS) may be the best overall model for use in humid subtropical climates. The study also determined that model accuracy varies with land cover type; for example, all models worked well for wet marsh conditions, but the SEBAL and simplified surface energy balance index (S-SEBI) models worked better than the alternatives for grass cover. A new automated approach based on
Lebel, R Marc; Menon, Ravi S; Bowen, Chris V
2006-03-01
Magnetic resonance microscopy using magnetically labeled cells is an emerging discipline offering the potential for non-destructive studies targeting numerous cellular events in medical research. The present work develops a technique to quantify superparamagnetic iron-oxide (SPIO) loaded cells using fully balanced steady state free precession (b-SSFP) imaging. An analytic model based on phase cancellation was derived for a single particle and extended to predict mono-exponential decay versus echo time in the presence of multiple randomly distributed particles. Numerical models verified phase incoherence as the dominant contrast mechanism and evaluated the model using a full range of tissue decay rates, repetition times, and flip angles. Numerical simulations indicated a relaxation rate enhancement (ΔR2b = 0.412·γ·LMD) proportional to LMD, the local magnetic dose (the additional sample magnetization due to the SPIO particles), a quantity related to the concentration of contrast agent. A phantom model of SPIO loaded cells showed excellent agreement with simulations, demonstrated comparable sensitivity to gradient-echo ΔR2* enhancements, and 14 times the sensitivity of spin-echo ΔR2 measurements. We believe this model can be used to facilitate the generation of quantitative maps of targeted cell populations. PMID:16450353
Effects of nutrient loading on the carbon balance of coastal wetland sediments
Morris, J.T.; Bradley, P.M.
1999-01-01
Results of a 12-yr study in an oligotrophic South Carolina salt marsh demonstrate that soil respiration increased by 795 g C m-2 yr-1 and that carbon inventories decreased in sediments fertilized with nitrogen and phosphorus. Fertilized plots became net sources of carbon to the atmosphere, and sediment respiration continues in these plots at an accelerated pace. After 12 yr of treatment, soil macroorganic matter in the top 5 cm of sediment was 475 g C m-2 lower in fertilized plots than in controls, which is equivalent to a constant loss rate of 40 g C m-2 yr-1. It is not known whether soil carbon in fertilized plots has reached a new equilibrium or continues to decline. The increase in soil respiration in the fertilized plots was far greater than the loss of sediment organic matter, which indicates that the increase in soil respiration was largely due to an increase in primary production. Sediment respiration in laboratory incubations also demonstrated positive effects of nutrients. Thus, the results indicate that increased nutrient loading of oligotrophic wetlands can lead to an increased rate of sediment carbon turnover and a net loss of carbon from sediments.
Francois, Marianne M; Carlson, Neil N
2010-01-01
Understanding the complex interaction of droplet dynamics with mass transfer and chemical reactions is of fundamental importance in liquid-liquid extraction. High-fidelity numerical simulation of droplet dynamics with interfacial mass transfer is particularly challenging because the position of the interface between the fluids and the interface physics need to be predicted as part of the solution of the flow equations. In addition, the discontinuity in fluid density, viscosity and species concentration at the interface presents additional numerical challenges. In this work, we extend our balanced-force volume-tracking algorithm for modeling the surface tension force (Francois et al., 2006) and propose a global embedded interface formulation to model the interfacial conditions of an interface in thermodynamic equilibrium. To validate our formulation, we perform simulations of pure diffusion problems in one and two dimensions. Then we present two- and three-dimensional simulations of a single droplet rising by buoyancy with mass transfer.
NASA Astrophysics Data System (ADS)
Gao, Z. Q.; Liu, C. S.; Gao, W.; Chang, N. B.
2010-07-01
Evapotranspiration (ET) may be used as an ecological indicator to address ecosystem complexity. The accurate measurement of ET is of great significance for studying environmental sustainability, global climate change, and biodiversity. Remote sensing technologies are capable of monitoring both energy and water fluxes on the surface of the Earth. With this advancement, existing models, such as SEBAL, S_SEBI and SEBS, enable us to estimate regional ET at limited temporal and spatial scales. This paper extends the existing modeling efforts with the inclusion of new components for ET estimation at varying temporal and spatial scales under complex terrain. Following a coupled remote sensing and surface energy balance approach, this study emphasizes the structure and function of the Surface Energy Balance with Topography Algorithm (SEBTA). With the aid of elevation and landscape information, such as slope and aspect parameters derived from the digital elevation model (DEM), and the vegetation cover derived from satellite images, the SEBTA can fully account for the dynamic impacts of complex terrain and changing land cover, in concert with varying kinetic parameters (i.e., roughness and zero-plane displacement) over time. In addition, the dry and wet pixels can be recognized automatically and dynamically during image processing, making the SEBTA more sensitive in deriving the sensible heat flux for ET estimation. To demonstrate its application potential, the SEBTA was applied to produce robust estimates of 24-h solar radiation over time, leading to smooth simulation of ET across seasons in northern China, where the regional climate and vegetation cover in different seasons complicate the ET calculations. The SEBTA was validated against measured data at the ground level: the consistency index reached 0.92 and the correlation coefficient was 0.87.
NASA Astrophysics Data System (ADS)
Gao, Z. Q.; Liu, C. S.; Gao, W.; Chang, N.-B.
2011-01-01
Evapotranspiration (ET) may be used as an ecological indicator to address ecosystem complexity. The accurate measurement of ET is of great significance for studying environmental sustainability, global climate change, and biodiversity. Remote sensing technologies are capable of monitoring both energy and water fluxes on the surface of the Earth. With this advancement, existing models, such as SEBAL, S_SEBI and SEBS, enable us to estimate regional ET with limited temporal and spatial coverage in the study areas. This paper extends the existing modeling efforts with the inclusion of new components for ET estimation at different temporal and spatial scales under heterogeneous terrain with varying elevations, slopes and aspects. Following a coupled remote sensing and surface energy balance approach, this study emphasizes the structure and function of the Surface Energy Balance with Topography Algorithm (SEBTA). With the aid of elevation and landscape information, such as slope and aspect parameters derived from the digital elevation model (DEM), and the vegetation cover derived from satellite images, the SEBTA can account for the dynamic impacts of heterogeneous terrain and changing land cover with varying kinetic parameters (i.e., roughness and zero-plane displacement). In addition, the dry and wet pixels can be recognized automatically and dynamically during image processing, making the SEBTA more sensitive in deriving the sensible heat flux for ET estimation. To demonstrate its application potential, the SEBTA was applied to produce robust estimates of 24-h solar radiation over time, leading to smooth simulation of ET across seasons in northern China, where the regional climate and vegetation cover in different seasons complicate the ET calculations. The SEBTA was validated against measured data at the ground level: the consistency index reached 0.92 and the correlation coefficient was 0.87.
Analysis of Sting Balance Calibration Data Using Optimized Regression Models
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Bader, Jon B.
2010-01-01
Calibration data of a wind tunnel sting balance were processed using a candidate math model search algorithm that recommends an optimized regression model for the data analysis. During the calibration, the normal force and the moment at the balance moment center were selected as independent calibration variables. The sting balance itself had two moment gages. Therefore, after analyzing the connection between calibration loads and gage outputs, it was decided to choose the difference and the sum of the gage outputs as the two responses that best describe the behavior of the balance. The math model search algorithm was applied to these two responses, and an optimized regression model was obtained for each. Classical strain gage balance load transformations and the equations of the deflection of a cantilever beam under load are used to show that the search algorithm's two optimized regression models are supported by a theoretical analysis of the relationship between the applied calibration loads and the measured gage outputs. The analysis of the sting balance calibration data set is a rare example of a situation in which the terms of a balance regression model can be derived directly from first principles of physics. In addition, it is interesting to note that the search algorithm recommended the correct regression model term combinations using only a set of statistical quality metrics that were applied to the experimental data during the algorithm's term selection process.
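The choice of the difference and sum of the gage outputs as responses can be illustrated with a toy linear model: if both gages respond to the normal force with the same sign and to the moment with opposite signs, the sum isolates the force and the difference isolates the moment. The coefficients below are synthetic and purely illustrative, not calibration data from the report.

```python
import numpy as np

rng = np.random.default_rng(1)
N = rng.uniform(-1, 1, 50)   # applied normal force (normalized, synthetic)
M = rng.uniform(-1, 1, 50)   # applied moment (normalized, synthetic)

# Hypothetical noise-free gage responses: same sign in N, opposite in M.
g1 = 0.8 * N + 0.5 * M
g2 = 0.8 * N - 0.5 * M

X = np.column_stack([N, M])
# Regress each combined response on the applied loads:
# the difference should load only on M, the sum only on N.
coef_diff, *_ = np.linalg.lstsq(X, g1 - g2, rcond=None)
coef_sum, *_ = np.linalg.lstsq(X, g1 + g2, rcond=None)
```

In this idealized setting each combined response depends on a single calibration variable, which is why the sum/difference pair describes the balance behavior so cleanly.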
Vechiato, F M V; Rivas, P M S; Ruginsk, S G; Borges, B C; Elias, L L K; Antunes-Rodrigues, J
2016-02-01
Hydroelectrolytic imbalances, such as saline load (SL), trigger behavioral and neuroendocrine responses, such as thirst, hypophagia, vasopressin (AVP) and oxytocin (OT) release and hypothalamus–pituitary–adrenal (HPA) axis activation. To investigate the participation of the type-1 cannabinoid receptor (CB1R) in these homeostatic mechanisms, male adult Wistar rats were subjected to SL (0.3 M NaCl) for four days. SL induced not only increases in the water intake and plasma levels of AVP, OT and corticosterone, as previously described, but also increases in CB1R expression in the lamina terminalis, which integrates sensory afferents, as well as in the hypothalamus, the main integrative and effector area controlling hydroelectrolytic homeostasis. A more detailed analysis revealed that CB1R-positive terminals are in close apposition with not only axons but also dendrites and secretory granules of magnocellular neurons, particularly vasopressinergic cells. In satiated and euhydrated animals, the intracerebroventricular administration of the CB1R selective agonist ACEA (0.1 μg/5 μL) promoted hyperphagia, but this treatment did not reverse the hyperosmolality-induced hypophagia in the SL group. Furthermore, ACEA pretreatment potentiated water intake in the SL animals during rehydration as well as enhanced the corticosterone release and prevented the increase in AVP and OT secretion induced by SL. The same parameters were not changed by ACEA in the animals whose daily food intake was matched to that of the SL group (Pair-Fed). These data indicate that CB1Rs modulate the hydroelectrolytic balance independently of the food intake during sustained hyperosmolality and hypovolemia. PMID:26497248
NASA Astrophysics Data System (ADS)
Tsuzuki, Satori; Aoki, Takayuki
2016-04-01
Numerical simulation of debris flows involving countless objects is an important topic in fluid dynamics and many engineering applications. Particle-based methods are a promising approach to simulating flows interacting with objects. In this paper, we propose an efficient method to realize large-scale simulations of fluid-structure interaction by combining the SPH (Smoothed Particle Hydrodynamics) method for the fluid with the DEM (Discrete Element Method) for the objects on a multi-GPU system. By applying space-filling curves to the decomposition of the computational domain, we are able to keep the same number of particles in each decomposed domain. In our implementation, several techniques for particle counting and data movement have been introduced. Fragmentation of the memory used for particles occurs during the time integration, and the frequency of de-fragmentation is examined by taking into account the computational load balance and the communication cost between CPU and GPU. A linked-list technique for the particle interactions is introduced to save memory drastically. It is found that sorting the particle data for the neighboring-particle list using the linked-list method greatly improves memory access when performed at a certain interval. The weak and strong scalabilities of an SPH simulation using 111 million particles were measured from 4 GPUs to 512 GPUs for three types of space-filling curves. A large-scale debris flow simulation of a tsunami with 10,368 floating rubbles using 117 million particles was successfully carried out with 256 GPUs on the TSUBAME 2.5 supercomputer at the Tokyo Institute of Technology.
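Equal-count partitioning along a space-filling curve can be sketched as: map each particle's cell coordinates to a curve index, sort particles by that index, and cut the sorted list into equal chunks. The Morton (Z-order) code below is one common curve choice; the 2-D setting, cell size, and names are illustrative, not the paper's implementation.

```python
def morton2d(ix, iy, bits=16):
    """Interleave the bits of integer cell coordinates (ix, iy)
    to get the cell's index along a Z-order curve."""
    code = 0
    for b in range(bits):
        code |= ((ix >> b) & 1) << (2 * b)
        code |= ((iy >> b) & 1) << (2 * b + 1)
    return code

def partition(particles, n_domains, cell=1.0):
    """Sort particles along the curve, then cut into equal-count chunks
    so every domain holds (nearly) the same number of particles."""
    keyed = sorted(particles,
                   key=lambda p: morton2d(int(p[0] / cell), int(p[1] / cell)))
    size, rem = divmod(len(keyed), n_domains)
    out, start = [], 0
    for d in range(n_domains):
        stop = start + size + (1 if d < rem else 0)
        out.append(keyed[start:stop])
        start = stop
    return out
```

Because the curve preserves spatial locality, each equal-count chunk is also spatially compact, which keeps inter-domain communication low while balancing the per-GPU particle load.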
Cho, Jae Heon; Lee, Jong Ho
2015-11-01
Manual calibration is common in rainfall-runoff model applications. However, rainfall-runoff models include several complicated parameters; thus, significant time and effort are required to manually calibrate the parameters individually and repeatedly. Automatic calibration has relative merit regarding time efficiency and objectivity but shortcomings regarding understanding indigenous processes in the basin. In this study, a watershed model calibration framework was developed using an influence coefficient algorithm and genetic algorithm (WMCIG) to automatically calibrate the distributed models. The optimization problem used to minimize the sum of squares of the normalized residuals of the observed and predicted values was solved using a genetic algorithm (GA). The final model parameters were determined from the iteration with the smallest sum of squares of the normalized residuals of all iterations. The WMCIG was applied to a Gomakwoncheon watershed located in an area that presents a total maximum daily load (TMDL) in Korea. The proportion of urbanized area in this watershed is low, and the diffuse pollution loads of nutrients such as phosphorus are greater than the point-source pollution loads because of the concentration of rainfall that occurs during the summer. The pollution discharges from the watershed were estimated for each land-use type, and the seasonal variations of the pollution loads were analyzed. Consecutive flow measurement gauges have not been installed in this area, and it is difficult to survey the flow and water quality in this area during the frequent heavy rainfall that occurs during the wet season. The Hydrological Simulation Program-Fortran (HSPF) model was used to calculate the runoff flow and water quality in this basin. Using the water quality results, a load duration curve was constructed for the basin, the exceedance frequency of the water quality standard was calculated for each hydrologic condition class, and the percent reduction
NASA Technical Reports Server (NTRS)
Lansing, F. L.; Strain, D. M.; Chai, V. W.; Higgins, S.
1979-01-01
The Energy Consumption Computer Program was developed to simulate building heating and cooling loads and to compute thermal and electric energy consumption and cost. This article reports on the new algorithms and modifications made in an effort to widen the areas of application. The program structure was rewritten accordingly to refine and advance the building model and to further reduce the processing time and cost. The program is noted for its very low cost and ease of use compared to other available codes. The accuracy of the computations is not sacrificed, however, since the results are expected to lie within ±10% of actual energy meter readings.
NASA Astrophysics Data System (ADS)
Rainieri, Carlo; Fabbrocino, Giovanni
2015-08-01
In the last few decades large research efforts have been devoted to the development of methods for automated detection of damage and degradation phenomena at an early stage. Modal-based damage detection techniques are well-established methods, whose effectiveness for Level 1 (existence) and Level 2 (location) damage detection is demonstrated by several studies. The indirect estimation of tensile loads in cables and tie-rods is another attractive application of vibration measurements. It provides interesting opportunities for cheap and fast quality checks in the construction phase, as well as for safety evaluations and structural maintenance over the structure lifespan. However, the lack of automated modal identification and tracking procedures has been for long a relevant drawback to the extensive application of the above-mentioned techniques in the engineering practice. An increasing number of field applications of modal-based structural health and performance assessment are appearing after the development of several automated output-only modal identification procedures in the last few years. Nevertheless, additional efforts are still needed to enhance the robustness of automated modal identification algorithms, control the computational efforts and improve the reliability of modal parameter estimates (in particular, damping). This paper deals with an original algorithm for automated output-only modal parameter estimation. Particular emphasis is given to the extensive validation of the algorithm based on simulated and real datasets in view of continuous monitoring applications. The results point out that the algorithm is fairly robust and demonstrate its ability to provide accurate and precise estimates of the modal parameters, including damping ratios. As a result, it has been used to develop systems for vibration-based estimation of tensile loads in cables and tie-rods. Promising results have been achieved for non-destructive testing as well as continuous
ERIC Educational Resources Information Center
Ngu, Bing Hiong; Phan, Huy Phuong
2016-01-01
We examined the use of balance and inverse methods in equation solving. The main difference between the balance and inverse methods lies in the operational line (e.g. +2 on both sides vs -2 becomes +2). Differential element interactivity favours the inverse method because the interaction between elements occurs on both sides of the equation for…
Technology Transfer Automated Retrieval System (TEKTRAN)
An intercomparison of output from two models estimating spatially distributed surface energy fluxes from remotely sensed imagery is conducted. A major difference between the two models is whether the soil and vegetation components of the scene are treated separately (Two-Source Energy Balance; TSEB ...
Robust PD Sway Control of a Lifted Load for a Crane Using a Genetic Algorithm
NASA Astrophysics Data System (ADS)
Kawada, Kazuo; Sogo, Hiroyuki; Yamamoto, Toru; Mada, Yasuhiro
PID control schemes still continue to be widely used in most industrial control systems, mainly because PID controllers have simple control structures and are simple to maintain and tune. However, it is difficult to find a set of suitable control parameters in the case of time-varying and/or nonlinear systems. To address this problem, robust controllers have been proposed. Although choosing a suitable nominal model is important in designing a robust controller, it is usually not easy. In this paper, a new robust PD controller design scheme that utilizes a genetic algorithm is proposed.
Ariyawansa, K.A.; Hudson, D.D.
1989-01-01
We describe a benchmark parallel version of the Van Slyke and Wets algorithm for two-stage stochastic programs and an implementation of that algorithm on the Sequent/Balance. We also report results of a numerical experiment using random test problems and our implementation. These performance results, to the best of our knowledge, are the first available for the Van Slyke and Wets algorithm on a parallel processor. They indicate that the benchmark implementation parallelizes well, and that even with the use of parallel processing, problems with random variables having large numbers of realizations can take prohibitively large amounts of computation for solution. Thus, they demonstrate the need for exploiting both parallelization and approximation for the solution of stochastic programs. 15 refs., 18 tabs.
NASA Astrophysics Data System (ADS)
Manodham, Thavisak; Loyola, Luis; Miki, Tetsuya
IEEE 802.11 wireless LANs (WLANs) have been rapidly deployed in enterprises, public areas, and households. Voice-over-IP (VoIP) and similar applications are now commonly used in mobile devices over wireless networks. Recent works have improved the quality of service (QoS), offering higher data rates to support various kinds of real-time applications. However, besides the need for higher data rates, seamless handoff and load balancing among APs are key issues that must be addressed in order to continue supporting real-time services across wireless LANs and providing fair service to all users. In this paper, we introduce a novel access point (AP) with two transceivers that improves network efficiency by supporting seamless handoff and traffic load balancing in a wireless network. In our proposed scheme, the novel AP uses the second transceiver to scan and find neighboring STAs in the transmission range and then sends the results to neighboring APs, which compare and analyze whether or not the STA should perform a handoff. The initial results from our simulations show that the novel AP module is more effective than the conventional scheme and a related work in terms of providing a handoff process with low latency and sharing traffic load with neighboring APs.
NASA Astrophysics Data System (ADS)
Gharehbaghi, Sadjad; Khatibinia, Mohsen
2015-03-01
A reliable seismic-resistant design of structures is achieved in accordance with the seismic design codes by designing structures under seven or more pairs of earthquake records. Based on the recommendations of seismic design codes, the average time-history responses (ATHR) of structure is required. This paper focuses on the optimal seismic design of reinforced concrete (RC) structures against ten earthquake records using a hybrid of particle swarm optimization algorithm and an intelligent regression model (IRM). In order to reduce the computational time of optimization procedure due to the computational efforts of time-history analyses, IRM is proposed to accurately predict ATHR of structures. The proposed IRM consists of the combination of the subtractive algorithm (SA), K-means clustering approach and wavelet weighted least squares support vector machine (WWLS-SVM). To predict ATHR of structures, first, the input-output samples of structures are classified by SA and K-means clustering approach. Then, WWLS-SVM is trained with few samples and high accuracy for each cluster. 9- and 18-storey RC frames are designed optimally to illustrate the effectiveness and practicality of the proposed IRM. The numerical results demonstrate the efficiency and computational advantages of IRM for optimal design of structures subjected to time-history earthquake loads.
Saito, Masatoshi
2010-08-15
Purpose: This article describes the spectral optimization of dual-energy computed tomography using balanced filters (bf-DECT) to reduce the tube loading and dose by dedicating the acquisition to electron density information, which is essential for treatment planning in radiotherapy. Methods: For the spectral optimization of bf-DECT, the author calculated the beam-hardening error and air kerma required to achieve a desired noise level in an electron density image of a 50-cm-diameter cylindrical water phantom. The calculation enables the selection of beam parameters such as tube voltage, balanced filter material, and filter thickness. Results: The optimal combination of tube voltages was 80 kV/140 kV in conjunction with Tb/Hf and Bi/Mo filter pairs; this combination agrees with that obtained in a previous study [M. Saito, "Spectral optimization for measuring electron density by the dual-energy computed tomography coupled with balanced filter method," Med. Phys. 36, 3631-3642 (2009)], although the thicknesses of the filters that yielded a minimum tube output were slightly different from those obtained in the previous study. The resultant tube loading of a low-energy scan of the present bf-DECT significantly decreased from 57.5 to 4.5 times that of a high-energy scan for conventional DECT. Furthermore, the air kerma of bf-DECT could be reduced to less than that of conventional DECT while obtaining the same figure of merit for the measurement of electron density and effective atomic number. Conclusions: The tube-loading and dose efficiencies of bf-DECT were considerably improved by sacrificing the quality of the noise level in the images of effective atomic number.
Karpievitch, Yuliya V; Almeida, Jonas S
2006-01-01
Background: Matlab, a powerful and productive language that allows for rapid prototyping, modeling and simulation, is widely used in computational biology. Modeling and simulation of large biological systems often require more computational resources than are available on a single computer. Existing distributed computing environments like the Distributed Computing Toolbox, MatlabMPI, Matlab*G and others allow for the remote (and possibly parallel) execution of Matlab commands with varying support for features like an easy-to-use application programming interface, load-balanced utilization of resources, extensibility over the wide area network, and minimal system administration skill requirements. However, all of these environments require some level of access to participating machines to manually distribute the user-defined libraries that the remote call may invoke. Results: mGrid augments the usual process distribution seen in other similar distributed systems by adding facilities for user code distribution. mGrid's client-side interface is an easy-to-use native Matlab toolbox that transparently executes user-defined code on remote machines (i.e. the user is unaware that the code is executing somewhere else). Run-time variables are automatically packed and distributed with the user-defined code, and automated load balancing of remote resources enables smooth concurrent execution. mGrid is an open source environment. Apart from the programming language itself, all other components are also open source, freely available tools: light-weight PHP scripts and the Apache web server. Conclusion: Transparent, load-balanced distribution of user-defined Matlab toolboxes and rapid prototyping of many simple parallel applications can now be done with a single easy-to-use Matlab command. Because mGrid utilizes only Matlab, light-weight PHP scripts and the Apache web server, installation and configuration are very simple. Moreover, the web-based infrastructure of mGrid allows for it
NASA Astrophysics Data System (ADS)
Biswas, Papun; Chakraborti, Debjani
2010-10-01
This paper describes how genetic algorithms (GAs) can be efficiently applied to the fuzzy goal programming (FGP) formulation of optimal power flow problems having multiple objectives. In the proposed approach, the various constraints and relationships of the optimal power flow calculation are described fuzzily. In the model formulation, the membership functions of the defined fuzzy goals are first characterized for measuring the degree of achievement of the aspiration levels specified in the decision-making context. Then, an achievement function is constructed that minimizes the regret for under-deviations from the highest membership value (unity) of the defined membership goals, to the extent possible on the basis of priorities. In the solution process, the GA method is employed on the FGP formulation of the problem to achieve the highest membership value (unity) of the defined membership functions to the extent possible in the decision-making environment. In the GA-based solution search process, conventional roulette-wheel selection, arithmetic crossover, and random mutation are used to reach a satisfactory decision. The developed method has been tested on the IEEE 6-generator, 30-bus system. Numerical results show that this method is promising for handling uncertain constraints in practical power systems.
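The three GA operators the abstract names can be sketched as a minimal real-coded GA; the operator choices (roulette-wheel selection, arithmetic crossover, random mutation) follow the abstract, but the parameter values and fitness interface are illustrative assumptions, and the fitness is assumed non-negative so that roulette-wheel selection is well defined:

```python
import random

def genetic_algorithm(fitness, bounds, pop_size=40, generations=200,
                      crossover_rate=0.8, mutation_rate=0.05):
    """Minimal real-coded GA with the three operators named in the abstract.
    `fitness` is maximized and must return non-negative values."""
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        fits = [fitness(ind) for ind in pop]
        total = sum(fits)

        def select():
            # Roulette-wheel selection: pick with probability ~ fitness.
            r = random.uniform(0.0, total)
            acc = 0.0
            for ind, f in zip(pop, fits):
                acc += f
                if acc >= r:
                    return ind
            return pop[-1]

        children = []
        while len(children) < pop_size:
            p1, p2 = select(), select()
            if random.random() < crossover_rate:
                a = random.random()  # arithmetic crossover: convex combination
                c1 = [a * x + (1 - a) * y for x, y in zip(p1, p2)]
                c2 = [(1 - a) * x + a * y for x, y in zip(p1, p2)]
            else:
                c1, c2 = p1[:], p2[:]
            for c in (c1, c2):
                for i, (lo, hi) in enumerate(bounds):
                    if random.random() < mutation_rate:  # random mutation
                        c[i] = random.uniform(lo, hi)
            children.extend([c1, c2])
        pop = children[:pop_size]
        gen_best = max(pop, key=fitness)
        if fitness(gen_best) > fitness(best):
            best = gen_best
    return best
```

For an FGP problem, `fitness` would be the achievement function built from the membership goals; here any non-negative objective works.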
Lu, Shi Jing; Salleh, Abdul Hakim Mohamed; Mohamad, Mohd Saberi; Deris, Safaai; Omatu, Sigeru; Yoshioka, Michifumi
2014-09-28
Reconstructions of genome-scale metabolic networks from different organisms have become popular in recent years. Metabolic engineering can simulate the reconstruction process to obtain desirable phenotypes. In previous studies, optimization algorithms have been implemented to identify near-optimal sets of knockout genes for improving metabolite production. However, previous works suffered from premature convergence, and the stopping criteria were not clearly defined for each case. Therefore, this study proposes a hybrid of the ant colony optimization algorithm and flux balance analysis (ACOFBA) to predict near-optimal sets of gene knockouts in an effort to maximize growth rates and the production of certain metabolites. Here, we present a case study that uses Baker's yeast, Saccharomyces cerevisiae, as the model organism and targets the rate of vanillin production for optimization. The results of this study are the growth rate of the model organism after gene deletion and a list of knockout genes. The ACOFBA algorithm was found to improve the yield of vanillin in terms of growth rate and production compared with previous algorithms. PMID:25462325
Optimal Load Control via Frequency Measurement and Neighborhood Area Communication
Zhao, CH; Topcu, U; Low, SH
2013-11-01
We propose a decentralized optimal load control scheme that provides contingency reserve in the presence of sudden generation drop. The scheme takes advantage of flexibility of frequency responsive loads and neighborhood area communication to solve an optimal load control problem that balances load and generation while minimizing end-use disutility of participating in load control. Local frequency measurements enable individual loads to estimate the total mismatch between load and generation. Neighborhood area communication helps mitigate effects of inconsistencies in the local estimates due to frequency measurement noise. Case studies show that the proposed scheme can balance load with generation and restore the frequency within seconds of time after a generation drop, even when the loads use a highly simplified power system model in their algorithms. We also investigate tradeoffs between the amount of communication and the performance of the proposed scheme through simulation-based experiments.
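The core idea, local frequency measurement plus neighborhood averaging, can be sketched as follows. The droop model for converting frequency deviation to a mismatch estimate, the Gaussian noise model, and the equal-share curtailment at the end are simplifying assumptions for illustration, not the paper's optimal control law:

```python
import random

def decentralized_load_control(true_mismatch, n_loads, beta, noise_std,
                               neighbors, rounds=50):
    """Sketch: each load turns a noisy local frequency measurement into a
    mismatch estimate, then neighborhood-area averaging drives the noisy
    estimates to consensus before loads curtail their shares."""
    # Local estimate: frequency deviation ~ -mismatch/beta (droop model),
    # corrupted by independent measurement noise at each load.
    est = []
    for _ in range(n_loads):
        freq_dev = -true_mismatch / beta + random.gauss(0.0, noise_std)
        est.append(-beta * freq_dev)
    # Neighborhood-area communication: repeated averaging with neighbors
    # (equal weights incl. self; symmetric graph => average is preserved).
    for _ in range(rounds):
        new = []
        for i in range(n_loads):
            group = [est[i]] + [est[j] for j in neighbors[i]]
            new.append(sum(group) / len(group))
        est = new
    # Each load curtails an equal share of its agreed mismatch estimate.
    return [e / n_loads for e in est]
```

On a ring of loads the averaging step is doubly stochastic, so the estimates converge to the mean of the initial noisy estimates and the total curtailment matches the true mismatch up to averaged noise.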
Li, Bai; Chiong, Raymond; Lin, Mu
2015-02-01
Protein structure prediction is a fundamental issue in the field of computational molecular biology. In this paper, the AB off-lattice model is adopted to transform the original protein structure prediction scheme into a numerical optimization problem. We present a balance-evolution artificial bee colony (BE-ABC) algorithm to address the problem, with the aim of finding the structure for a given protein sequence with the minimal free-energy value. This is achieved through the use of convergence information during the optimization process to adaptively manipulate the search intensity. Besides that, an overall degradation procedure is introduced as part of the BE-ABC algorithm to prevent premature convergence. Comprehensive simulation experiments based on the well-known artificial Fibonacci sequence set and several real sequences from the database of Protein Data Bank have been carried out to compare the performance of BE-ABC against other algorithms. Our numerical results show that the BE-ABC algorithm is able to outperform many state-of-the-art approaches and can be effectively employed for protein structure optimization. PMID:25463349
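The AB off-lattice model the paper adopts has a standard two-dimensional form: a backbone bending term plus a species-dependent Lennard-Jones-like term with coefficients C(AA)=1, C(BB)=0.5, C(AB)=-0.5. A sketch of that energy function (the objective a BE-ABC-style optimizer would minimize; the 2-D variant and these conventional coefficients are assumptions, since the paper may use the 3-D form):

```python
import math

def pair_coeff(a, b):
    # Conventional AB-model coefficients: AA=1.0, BB=0.5, mixed=-0.5.
    if a == b:
        return 1.0 if a == 'A' else 0.5
    return -0.5

def ab_energy(sequence, angles):
    """Energy of a 2-D AB off-lattice conformation.

    `sequence` is a string of 'A'/'B' monomers; `angles` holds the n-2 bend
    angles (radians) between successive unit-length bonds -- these are the
    decision variables the optimizer searches over."""
    n = len(sequence)
    assert len(angles) == n - 2
    # Build coordinates: unit bonds; each bend angle rotates the heading.
    pts, heading = [(0.0, 0.0), (1.0, 0.0)], 0.0
    for th in angles:
        heading += th
        x, y = pts[-1]
        pts.append((x + math.cos(heading), y + math.sin(heading)))
    # Backbone bending term: sum of (1 - cos theta)/4 over interior angles.
    e_bend = sum((1.0 - math.cos(th)) / 4.0 for th in angles)
    # Lennard-Jones-like term over all non-adjacent monomer pairs,
    # using r^-12 = (r^2)^-6 and r^-6 = (r^2)^-3.
    e_lj = 0.0
    for i in range(n - 2):
        for j in range(i + 2, n):
            dx, dy = pts[j][0] - pts[i][0], pts[j][1] - pts[i][1]
            r2 = dx * dx + dy * dy
            c = pair_coeff(sequence[i], sequence[j])
            e_lj += 4.0 * (r2 ** -6 - c * r2 ** -3)
    return e_bend + e_lj
```

Minimizing `ab_energy` over the angle vector recovers the optimization problem the abstract describes.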
Kim, Mijin; Hyun, Seunghun; Kwon, Jung-Hwan
2015-10-01
The accumulation of marine plastic debris is one of the main emerging environmental issues of the twenty-first century. Numerous studies in recent decades have reported the level of plastic particles on beaches and in oceans worldwide. However, it is still unclear how much plastic debris remains in the marine environment because the sampling methods for identifying and quantifying plastics from the environment have not been standardized; moreover, the methods are not guaranteed to find all of the plastics that do remain. The level of identified marine plastic debris may account for only a small portion of the remaining plastics. To perform a quantitative estimation of remaining plastics, a mass balance analysis was performed for high- and low-density PE within the borders of South Korea during 1995-2012. Disposal methods such as incineration, land disposal, and recycling accounted for only approximately 40 % of PE use, whereas 60 % remained unaccounted for. The total unaccounted mass of high- and low-density PE during the evaluation period was 28 million tons. The corresponding contribution to marine plastic debris would be approximately 25,000 tons and 70 g km⁻² of the world oceans, assuming that the fraction entering the marine environment is 0.001 and that the degradation half-life is 50 years in seawater. Because the observed concentrations of plastics worldwide were much lower than the range expected by extrapolation from this mass balance study, it is considered that there probably is still a huge mass of unidentified plastic debris. Further research is therefore needed to fill this gap between the mass balance approximation and the identified marine plastics, including a better estimation of the mass flux to the marine environment. PMID:26153107
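The headline figure can be reproduced with a back-of-envelope calculation. The uniform release schedule over 1995-2012 is an assumption made here to combine the stated numbers (28 million tons unaccounted, ocean fraction 0.001, first-order degradation with a 50-year half-life); the abstract does not specify the release schedule:

```python
import math

def remaining_marine_pe(total_unaccounted_t=28e6, years=range(1995, 2013),
                        ocean_fraction=0.001, half_life_yr=50.0,
                        ref_year=2012):
    """Tons of PE still in the ocean at `ref_year`, assuming the unaccounted
    mass is released uniformly over `years`, a fixed fraction reaches the
    ocean, and the plastic degrades first-order with the given half-life."""
    annual = total_unaccounted_t / len(years) * ocean_fraction
    lam = math.log(2.0) / half_life_yr  # first-order decay constant
    return sum(annual * math.exp(-lam * (ref_year - y)) for y in years)
```

Under these assumptions the sum comes out near 25,000 tons, consistent with the abstract's estimate.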
NASA Astrophysics Data System (ADS)
Bhadha, J. H.; Jawitz, J. W.; Min, J.
2009-12-01
Internal loading is a critical component of the phosphorus (P) budget of aquatic systems, and can control the trophic conditions. While diffusion is generally considered the dominant process controlling internal P load to the water column, advection due to water table fluctuations resulting from episodic flooding and drying cycles can be a significant component of the P budget of depressional wetlands. Within the drainage basin of Lake Okeechobee, Florida, P is exported annually to the lake from impacted isolated wetlands located on beef farming facilities via ditches and canals. The objective of this study was to evaluate the role of diffusive and advective fluxes in relation to the total P loads entering and exiting one of these isolated wetlands. Diffusive fluxes were calculated from depth-variable pore water concentrations measured using multilevel samplers and pore water equilibrators. Advective fluxes were estimated based on groundwater fluctuations calculated within a hydrologic-budget framework. Results from an eleven-month monitoring period (May 2005-March 2006) indicated that the diffusive flux of soluble reactive P (SRP) was 0.42 ± 0.24 mg m⁻² d⁻¹ and occurred for 230 days out of 335. In comparison, the advective flux occurred over a shorter duration of just 21 days, yet generated a greater flux controlled by the concentrations of shallow pore water and the velocity of the ground water moving upwards into the wetland water column. The highest advective flux of SRP was estimated at 27.4 mg m⁻² d⁻¹. Based on these fluxes, the corresponding P load to the wetland via internal modes was estimated at 5.2 kg and 0.93 kg from diffusion and advection, respectively, representing a significant fraction of the total P load entering the wetland water column. Plant colonization during dry periods in P-enriched soils is also a significant mechanism for P release from the soil at the time of flooding; however, this component of the wetland P budget was not evaluated as
NASA Astrophysics Data System (ADS)
Rabbani, Masoud; Montazeri, Mona; Farrokhi-Asl, Hamed; Rafiei, Hamed
2016-07-01
Mixed-model assembly lines are increasingly accepted in many industrial environments to meet the growing trend of greater product variability, diversification of customer demands, and shorter life cycles. In this research, a new mathematical model is presented that considers the balancing of a mixed-model U-line and human-related issues simultaneously. The objective function consists of two separate components. The first part of the objective function is related to the balancing problem; its objectives are minimizing the cycle time, minimizing the number of workstations, and maximizing line efficiency. The second part is related to human issues and consists of hiring cost, firing cost, training cost, and salary. To solve the presented model, two well-known multi-objective evolutionary algorithms, namely the non-dominated sorting genetic algorithm and multi-objective particle swarm optimization, have been used. A simple solution representation is provided in this paper to encode the solutions. Finally, the computational results are compared and analyzed.
Parallel algorithms for dynamically partitioning unstructured grids
Diniz, P.; Plimpton, S.; Hendrickson, B.; Leland, R.
1994-10-01
Grid partitioning is the method of choice for decomposing a wide variety of computational problems into naturally parallel pieces. In problems where the computational load on the grid or the grid itself changes as the simulation progresses, the ability to repartition dynamically and in parallel is attractive for achieving higher performance. We describe three algorithms suitable for parallel dynamic load-balancing which attempt to partition unstructured grids so that computational load is balanced and communication is minimized. The execution time of the algorithms and the quality of the partitions they generate are compared to results from serial partitioners for two large grids. The integration of the algorithms into a parallel particle simulation is also briefly discussed.
Swank, M; Romanowski, J R; Korbee, L L; Bignozzi, S
2007-10-01
Total knee arthroplasty (TKA) remains one of the most successful procedures in orthopaedic surgery. Complications certainly exist and are often related to failure of knee ligament balance. This asymmetry subsequently leads to component mal-alignment and loosening, often secondary to deviation of the lower extremity mechanical axis. Understanding knee mechanics is essential, and recent technological advances have begun to minimize postoperative problems. A tensioning device that respects the native patellofemoral anatomy as well as the natural ligamentous strains has been developed. The surgical integration of computer-assisted navigation has allowed for enhanced accuracy and subsequently better results. The purpose of the current paper is to discuss the evolution of an improved ligament tensioning device, in the setting of classic mechanical guidance versus computer assistance, and its postoperative impact on total knee outcomes in terms of manipulation rates and two-year radiographic alignment data. Based on a single surgeon series, mechanically guided arthroplasties resulted in a 16 per cent manipulation rate. Computer assistance with spacer blocks decreased the manipulation rate to 14 per cent, while using a novel tensioner device further decreased the manipulation rate to 7 per cent, a significant difference (p < 0.01). Radiographic data illustrate all TKAs with the tensioner to be within 4 degrees of the desired position. PMID:18019462
Massaro, Luciana; Massa, Fabrizio; Simpson, Kathy; Fragaszy, Dorothy; Visalberghi, Elisabetta
2016-04-01
The ability to carry objects has been considered an important selective pressure favoring the evolution of bipedal locomotion in early hominins. Comparable behaviors by extant primates have been studied very little, as few primates habitually carry objects bipedally. However, wild bearded capuchins living at Fazenda Boa Vista spontaneously and habitually transport stone tools by walking bipedally, allowing us to examine the characteristics of bipedal locomotion during object transport by a generalized primate. In this pilot study, we investigated the mechanical aspects of position and velocity of the center of mass, trunk inclination, and forelimb postures, and the torque of the forces applied on each anatomical segment in wild bearded capuchin monkeys during the transport of objects, with particular attention to the tail and its role in balancing the body. Our results indicate that body mass strongly affects the posture of transport and that capuchins are able to carry heavy loads bipedally with a bent-hip-bent-knee posture, thanks to the "strategic" use of their extendable tail; in fact, without this anatomical structure, constituting only 5 % of their body mass, they would be unable to transport the loads that they habitually carry. PMID:26733456
Research on Routing Selection Algorithm Based on Genetic Algorithm
NASA Astrophysics Data System (ADS)
Gao, Guohong; Zhang, Baojian; Li, Xueyong; Lv, Jinna
The genetic algorithm is a stochastic search and optimization method modeled on natural selection and the genetic mechanisms of living organisms. In recent years, because of its potential for solving complex problems and its successful application in industrial engineering, the genetic algorithm has attracted wide attention from domestic and international scholars. Routing selection communication has been defined as a standard communication model of IP version 6. This paper proposes a service model of routing selection communication, and designs and implements a new routing selection algorithm based on the genetic algorithm. The experimental simulation results show that this algorithm obtains better solutions in less time and balances the network load more evenly, which enhances the search ratio and the availability of network resources and improves the quality of service.
Analysis of Sting Balance Calibration Data Using Optimized Regression Models
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert; Bader, Jon B.
2009-01-01
Calibration data of a wind tunnel sting balance were processed using a search algorithm that identifies an optimized regression model for the data analysis. The selected sting balance had two moment gages that were mounted forward and aft of the balance moment center. The difference and the sum of the two gage outputs were fitted in the least squares sense using the normal force and the pitching moment at the balance moment center as independent variables. The regression model search algorithm predicted that the difference of the gage outputs should be modeled using the intercept and the normal force. The sum of the two gage outputs, on the other hand, should be modeled using the intercept, the pitching moment, and the square of the pitching moment. Equations of the deflection of a cantilever beam are used to show that the search algorithm's two recommended math models can also be obtained after performing a rigorous theoretical analysis of the deflection of the sting balance under load. The analysis of the sting balance calibration data set is a rare example of a situation when regression models of balance calibration data can be derived directly from first principles of physics and engineering. In addition, it is interesting to see that the search algorithm recommended the same regression models for the data analysis using only a set of statistical quality metrics.
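The two recommended regression models are plain least-squares fits and can be sketched directly. The function and coefficient names below are illustrative (not from the paper); the solver is ordinary least squares via the normal equations:

```python
def fit_least_squares(rows, y):
    """Ordinary least squares via normal equations + Gaussian elimination;
    rows[i] is the regressor vector for observation i."""
    k = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    for c in range(k):  # forward elimination with partial pivoting
        p = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            for j in range(c, k):
                A[r][j] -= f * A[c][j]
            b[r] -= f * b[c]
    x = [0.0] * k
    for c in reversed(range(k)):
        x[c] = (b[c] - sum(A[c][j] * x[j] for j in range(c + 1, k))) / A[c][c]
    return x

def fit_sting_balance(N, M, gage_diff, gage_sum):
    """Fit the two models the search algorithm selected:
    difference = a0 + a1*N   and   sum = b0 + b1*M + b2*M**2."""
    diff_coeffs = fit_least_squares([[1.0, n] for n in N], gage_diff)
    sum_coeffs = fit_least_squares([[1.0, m, m * m] for m in M], gage_sum)
    return diff_coeffs, sum_coeffs
```

With calibration loads `N` (normal force) and `M` (pitching moment) and the measured gage-output difference and sum, the two coefficient vectors together form the balance's regression model.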
Technology Transfer Automated Retrieval System (TEKTRAN)
Accurate estimates of annual nutrient loads are required to evaluate trends in water quality following changes in land use or management and to calibrate and validate water quality models. While much emphasis has been placed on understanding the uncertainty of watershed-scale nutrient load estimates...
A Cross Unequal Clustering Routing Algorithm for Sensor Network
NASA Astrophysics Data System (ADS)
Tong, Wang; Jiyi, Wu; He, Xu; Jinghua, Zhu; Munyabugingo, Charles
2013-08-01
In clustering routing algorithms for wireless sensor networks, the cluster size is generally fixed, which can easily lead to the "hot spot" problem. Furthermore, the majority of routing algorithms barely consider the problem of long-distance communication between adjacent cluster heads, which brings high energy consumption. Therefore, this paper proposes a new cross unequal clustering routing algorithm based on the EEUC algorithm. To address the defects of EEUC, the calculation of the competition radius takes both the node's position and its remaining energy into account to make the load of the cluster heads more balanced. At the same time, adjacent cluster nodes are used to relay data and reduce the energy loss of the cluster heads. Simulation experiments show that, compared with LEACH and EEUC, the proposed algorithm can effectively reduce the energy loss of cluster heads, balance the energy consumption among all nodes in the network, and improve the network lifetime.
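One plausible form of the modified competition radius, shrinking with proximity to the base station (as in EEUC) and additionally with depleted residual energy, can be sketched as follows. The weights `alpha`, `beta` and the maximum radius are illustrative assumptions, not the paper's values:

```python
def competition_radius(d_to_bs, e_residual, d_min, d_max, e_max,
                       r_max=90.0, alpha=0.5, beta=0.3):
    """Competition radius for a candidate cluster head.

    Heads close to the base station relay more traffic, so they get a
    smaller radius (smaller clusters); heads with little residual energy
    are likewise shrunk so their cluster load stays balanced."""
    distance_term = alpha * (d_max - d_to_bs) / (d_max - d_min)
    energy_term = beta * (1.0 - e_residual / e_max)
    return (1.0 - distance_term - energy_term) * r_max
```

A far-away, fully charged node keeps the full radius; nearby or depleted nodes get proportionally smaller clusters.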
A self-adaptive parameter optimization algorithm in a real-time parallel image processing system.
Li, Ge; Zhang, Xuehe; Zhao, Jie; Zhang, Hongli; Ye, Jianwei; Zhang, Weizhe
2013-01-01
To address the trade-offs among precision, speed, robustness, and other parameters that constrain each other in a parallel-processed vision servo system, this paper proposes an adaptive load capacity balance strategy for the servo parameter optimization algorithm (ALBPO) to improve computing precision and achieve a high detection ratio without lengthening the servo cycle. We use load capacity (LC) functions to estimate the load on each processor and then continuously self-adapt toward a balanced status based on the fluctuating LC results; meanwhile, we pick a proper set of target detection and location parameters according to the LC results. Compared with current load balance algorithms, the proposed algorithm operates without knowledge of the maximum or current load of the processors, which gives it great extensibility. Simulation results showed that the ALBPO algorithm has great merit in load balance performance, realizing the optimization of QoS for each processor and fulfilling the balance requirements of servo cycle, precision, and robustness of the parallel-processed vision servo system. PMID:24174920
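The balancing idea, adapting from local comparisons without knowing the global maximum or total load, resembles classic neighbor-diffusion load balancing, sketched below. This is a generic illustration of that idea; the paper's LC functions and servo parameters are not modeled:

```python
def rebalance_step(loads, neighbors, rate=0.5):
    """One diffusion step: each processor compares its load only with its
    neighbors' and pushes part of any surplus toward less-loaded ones.
    No node needs the global maximum or the system-wide total."""
    delta = [0.0] * len(loads)
    for i, nbrs in neighbors.items():
        for j in nbrs:
            if loads[i] > loads[j]:  # heavier side pushes part of the surplus
                d = rate * (loads[i] - loads[j]) / (len(nbrs) + 1)
                delta[i] -= d
                delta[j] += d
    return [l + d for l, d in zip(loads, delta)]
```

Iterating the step conserves total work exactly while the per-processor loads converge to the common average.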
Rocha-Gutiérrez, Beatriz A; Lee, Wen-Yee; Shane Walker, W
2016-01-01
A mass loading and mass balance analysis was performed on selected polybromodiphenyl ethers (PBDEs) in the first full-scale indirect potable reuse treatment plant in the United States. Chemical analysis of PBDEs was performed using an environmentally friendly sample preparation technique, called stir-bar sorptive extraction (SBSE), coupled with thermal desorption and gas chromatography/mass spectrometry (GC/MS). The three most dominant PBDEs found in all the samples were: BDE-47, BDE-99 and BDE-100. In the wastewater influent, the concentrations of studied PBDEs ranged from 94 to 775 ng/L, and in the effluent, the levels were below the detection limit. Concentrations in sludge ranged from 50 to 182 ng/g. In general, a removal efficiency of 92-96% of the PBDEs in the plant was accomplished through primary and secondary processes. The tertiary treatment process was able to effectively reduce the aforementioned PBDEs to less than 10 ng/L (>96% removal efficiency) in the effluent. If PBDEs remain in the treated wastewater effluent, they may pose environmental and health impacts through aquifer recharge, irrigation, and sludge final disposal. PMID:26819385
Parallel contact detection algorithm for transient solid dynamics simulations using PRONTO3D
Attaway, S.W.; Hendrickson, B.A.; Plimpton, S.J.
1996-09-01
An efficient, scalable, parallel algorithm for treating material surface contacts in solid mechanics finite element programs has been implemented in a modular way for MIMD parallel computers. The serial contact detection algorithm that was developed previously for the transient dynamics finite element code PRONTO3D has been extended for use in parallel computation by devising a dynamic (adaptive) processor load balancing scheme.
Better Bonded Ethernet Load Balancing
Gabler, Jason
2006-09-29
When a High Performance Storage System's mover shuttles large amounts of data to storage over a single Ethernet device, that single channel can rapidly become saturated. Using Linux Ethernet channel bonding to address this and similar situations was not, until now, a viable solution. The various modes in which channel bonding could be configured always offered some benefit, but only under strict conditions or at a system resource cost that was greater than the benefit gained by using channel bonding. Newer bonding modes designed by various networking hardware companies, helpful in such networking scenarios, were already present in their own switches. However, Linux-based systems were unable to take advantage of those new modes as they had not yet been implemented in the Linux kernel bonding driver. So, except for basic fault tolerance, Linux channel bonding could not positively combine separate Ethernet devices to provide the necessary bandwidth.
The Impact of Solar Photovoltaic Generation on Balancing Requirements in the Southern Nevada System
Ma, Jian; Lu, Shuai; Hafen, Ryan P.; Etingov, Pavel V.; Makarov, Yuri V.; Chadliev, Vladimir
2012-05-07
The impact of integrating large-scale solar photovoltaic (PV) generation on the balancing requirements, in terms of regulation and load-following requirements, in the southern Nevada balancing area is evaluated. The “swinging door” algorithm and the “probability box” method developed by Pacific Northwest National Laboratory (PNNL) were used to quantify the impact of large PV generation on the balancing requirements of the system operations. The system’s actual scheduling, real-time dispatch and regulation processes were simulated. Different levels of distributed generation were also considered in the study. The impact of hourly solar PV generation forecast errors on regulation and load-following requirements was assessed. The sensitivity of balancing requirements with respect to real-time forecast errors of large PV generation was analyzed.
Choon, Yee Wen; Mohamad, Mohd Saberi; Deris, Safaai; Illias, Rosli Md; Chong, Chuii Khim; Chai, Lian En
2014-03-01
Microbial strain optimization focuses on improving the technological properties of strains of microorganisms. However, the complexity of metabolic networks, which leads to data ambiguity, often makes the effects of genetic modification on desirable phenotypes difficult to predict. Furthermore, the vast number of reactions in cellular metabolism leads to a combinatorial problem in obtaining an optimal gene deletion strategy. Consequently, the computation time increases exponentially with the size of the problem. Hence, we propose an extension of a hybrid of the Bees Algorithm and Flux Balance Analysis (BAFBA) by integrating OptKnock into BAFBA to validate the results. This paper presents a number of computational experiments to test the performance and capability of BAFBA. Escherichia coli, Bacillus subtilis and Clostridium thermocellum are the model organisms in this paper. Also included is the identification of potential reactions to improve the production of succinic acid, lactic acid and ethanol, plus a discussion of the changes in the flux distribution of the predicted mutants. BAFBA shows potential in suggesting non-intuitive gene knockout strategies and a low variability among the several runs. The results show that BAFBA is suitable, reliable and applicable in predicting optimal gene knockout strategies. PMID:23892659
Gokhale, Sharad; Kohajda, Tibor; Schlink, Uwe
2008-12-15
A number of past studies have shown the prevalence of a considerable amount of volatile organic compounds (VOCs) in workplace, home and outdoor microenvironments. The quantification of an individual's personal exposure to VOCs in each of these microenvironments is an essential task for recognizing the health risks. In this paper, such a study of source apportionment of the human exposure to VOCs in homes, offices, and outdoors is presented. Air samples, analysed for 25 organic compounds and sampled during one week in homes, offices, outdoors and close to persons, at seven locations in the city of Leipzig, have been used to recognize the concentration pattern of VOCs using the chemical mass balance (CMB) receptor model. The results show that the largest contribution of VOCs to the personal exposure is from homes, in the range of 42 to 73%, followed by outdoors, 18 to 34%, and the offices, 2 to 38%, with corresponding concentration ranges of 35 to 80 µg m⁻³, 10 to 45 µg m⁻³ and 1 to 30 µg m⁻³, respectively. Species such as benzene, dodecane, decane, methyl-cyclopentane, triethyltoluene and trichloroethylene dominate outdoors; methyl-cyclohexane, triethyltoluene, nonane, octane, tetraethyltoluene and undecane are highest in the offices; while terpenoids like 3-carene, limonene, α-pinene and β-pinene, and the aromatics toluene and styrene, most influence the homes. A genetic algorithm (GA) model has also been applied to carry out the source apportionment. Its results are comparable with those of CMB. PMID:18822447
Kim, H.; Ko, Y.S.; Jung, K.H. )
1992-07-01
In this paper, an expert system is developed to provide a quick and effective load-transfer strategy for the power system operator. The load-transfer problem is constrained by the firm and normal capacities of a bank, the fault history of a bank, and the feeder priorities. Heuristic rules, obtained from a substation operator and from both DDC (Distribution Dispatch Center) and DCC (Distribution Control Center) engineers, are incorporated in the expert system to improve the solution procedure. Furthermore, structural rules based on the bus topology are also generated to reduce the number of switching operations required to reallocate the load from the busbar connected to the faulted bank to the other sections. The expert system is implemented in Prolog, and the best-first search method is adopted. To solve the combinatorial problem, list processing and recursive programming techniques are used. We also employ a pattern matching mechanism to trace the feeder connectivity.
Automatic force balance calibration system
NASA Technical Reports Server (NTRS)
Ferris, Alice T. (Inventor)
1996-01-01
A system for automatically calibrating force balances is provided. The invention uses a reference balance aligned with the balance being calibrated to provide superior accuracy while minimizing the time required to complete the calibration. The reference balance and the test balance are rigidly attached together with closely aligned moment centers. Loads placed on the system affect each balance equally, and the differences in the readings of the two balances can be used to generate the calibration matrix for the test balance. Since the accuracy of the test calibration is determined by the accuracy of the reference balance, and current technology allows for reference balances to be calibrated to within ±0.05%, the entire system has an accuracy of ±0.2%. The entire apparatus is relatively small and can be mounted on a movable base for easy transport between test locations. The system can also accept a wide variety of reference balances, thus allowing calibration under diverse load and size requirements.
Automatic force balance calibration system
NASA Astrophysics Data System (ADS)
Ferris, Alice T.
1995-05-01
A system for automatically calibrating force balances is provided. The invention uses a reference balance aligned with the balance being calibrated to provide superior accuracy while minimizing the time required to complete the calibration. The reference balance and the test balance are rigidly attached together with closely aligned moment centers. Loads placed on the system affect each balance equally, and the differences in the readings of the two balances can be used to generate the calibration matrix for the test balance. Since the accuracy of the test calibration is determined by the accuracy of the reference balance, and current technology allows for reference balances to be calibrated to within ±0.05%, the entire system has an accuracy of ±0.2%. The entire apparatus is relatively small and can be mounted on a movable base for easy transport between test locations. The system can also accept a wide variety of reference balances, thus allowing calibration under diverse load and size requirements.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Balancing Surfaces § 23.425 Gust loads. (a) Each horizontal surface, other than a main wing, must be... for the conditions specified in paragraph (a) of this section, the initial balancing loads for steady... load resulting from the gusts must be added to the initial balancing load to obtain the total load....
Code of Federal Regulations, 2011 CFR
2011-01-01
... Balancing Surfaces § 23.425 Gust loads. (a) Each horizontal surface, other than a main wing, must be... for the conditions specified in paragraph (a) of this section, the initial balancing loads for steady... load resulting from the gusts must be added to the initial balancing load to obtain the total load....
Technology Transfer Automated Retrieval System (TEKTRAN)
Energy balance models use physically based principles to simulate snow cover accumulation and melt. Snobal, a snow cover energy balance model, uses a flux-profile approach to calculating the turbulent flux (sensible and latent heat flux) components of the energy balance. Historically, validation dat...
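The flux-profile approach mentioned above reduces, under neutral stability, to a bulk transfer formula for the turbulent fluxes. A sketch for the sensible heat component follows; the neutral log-profile transfer coefficient, the constants, and the measurement heights are illustrative assumptions, and this is not Snobal's actual code:

```python
import math

def sensible_heat_flux(wind_speed, t_air, t_snow, z=2.0, z0=0.005,
                       rho=1.2, cp=1005.0, von_karman=0.4):
    """Bulk flux-profile estimate of turbulent sensible heat flux (W m^-2).

    Neutral-stability log-wind-profile transfer coefficient with
    measurement height z (m) and roughness length z0 (m); positive
    values warm the snow surface."""
    c_h = (von_karman / math.log(z / z0)) ** 2  # bulk transfer coefficient
    return rho * cp * c_h * wind_speed * (t_air - t_snow)
```

The latent heat flux takes the same form with specific humidity replacing temperature and the latent heat of sublimation replacing `cp`.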
CAST: Contraction Algorithm for Symmetric Tensors
Rajbhandari, Samyam; Nikam, Akshay; Lai, Pai-Wei; Stock, Kevin; Krishnamoorthy, Sriram; Sadayappan, Ponnuswamy
2014-09-22
Tensor contractions represent the most compute-intensive core kernels in ab initio computational quantum chemistry and nuclear physics. Symmetries in these tensor contractions make them difficult to load-balance and scale to large distributed systems. In this paper, we develop an efficient and scalable algorithm to contract symmetric tensors. We introduce a novel approach that avoids data redistribution in contracting symmetric tensors while also avoiding redundant storage and maintaining load balance. We present experimental results on two parallel supercomputers for several symmetric contractions that appear in the CCSD quantum chemistry method. We also present a novel approach to tensor redistribution that can take advantage of parallel hyperplanes when the initial distribution has replicated dimensions, and use collective broadcast when the final distribution has replicated dimensions, making the algorithm very efficient.
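The storage saving that symmetry permits can be illustrated on the simplest case: a symmetric matrix-vector contraction over a packed upper triangle. This is a toy analogue of the redundant-storage avoidance described above, not the distributed CAST algorithm itself:

```python
def packed_upper(A):
    """Store a symmetric n x n matrix as its upper triangle only,
    roughly halving memory -- the redundancy symmetric-tensor
    algorithms avoid at scale."""
    n = len(A)
    return [A[i][j] for i in range(n) for j in range(i, n)]

def symv_packed(packed, x):
    """y = A @ x using only the packed upper triangle: each stored
    off-diagonal entry A[i][j] contributes to both y[i] and y[j]."""
    n = len(x)
    y = [0.0] * n
    k = 0
    for i in range(n):
        for j in range(i, n):
            a = packed[k]
            k += 1
            y[i] += a * x[j]
            if i != j:
                y[j] += a * x[i]
    return y
```

In the distributed setting the same double-counting trick is what complicates load balance: each stored block feeds two output regions, so a naive triangular partition leaves some processors with twice the work.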
McKeever, John W; Reddy, Patel; Jahns, Thomas M
2007-05-01
Surface permanent magnet (SPM) synchronous machines using fractional-slot concentrated windings are being investigated as candidates for high-performance traction machines for automotive electric propulsion systems. It has been shown analytically and experimentally that such designs can achieve very wide constant-power speed ratios (CPSR) [1,2]. This work has shown that machines of this type are capable of achieving very low cogging torque amplitudes as well as significantly increasing the machine power density [3-5] compared to SPM machines using conventional distributed windings. High efficiency can be achieved in this class of SPM machine by making special efforts to suppress the eddy-current losses in the magnets [6-8], accompanied by efforts to minimize the iron losses in the rotor and stator cores. Considerable attention has traditionally been devoted to maximizing the full-load efficiency of traction machines at their rated operating points and along their maximum-power vs. speed envelopes for higher speeds [9,10]. For example, on-line control approaches have been presented for maximizing the full-load efficiency of PM synchronous machines, including the use of negative d-axis stator current to reduce the core losses [11,12]. However, another important performance specification for electric traction applications is the machine's efficiency at partial loads. Partial-load efficiency is particularly important if the target traction application requires long periods of cruising operation at light loads that are significantly lower than the maximum drive capabilities. While the design of the machine itself is clearly important, investigation has shown that this is a case where the choice of the control algorithm plays a critical role in determining the maximum partial-load efficiency that the machine actually achieves in the traction drive system. There is no evidence that this important topic has been addressed for this type of SPM machine by any other authors.
Reddy, P.B.; Jahns, T.M.
2007-04-30
NASA Technical Reports Server (NTRS)
Woods, Claudia M.
1988-01-01
A numerical solution to a theoretical model of vapor cavitation in a dynamically loaded journal bearing is developed, utilizing a multigrid iterative technique. The code is compared with a presently existing direct solution in terms of computational time and accuracy. The model is based on the Elrod algorithm, a control volume approach to the Reynolds equation which mimics the Jakobsson-Floberg and Olsson cavitation theory. Besides accounting for a moving cavitation boundary and conservation of mass at the boundary, it also conserves mass within the cavitated region via liquid striations. The mixed nature of the equations (elliptic in the full film zone and nonelliptic in the cavitated zone) coupled with the dynamic aspects of the problem create interesting difficulties for the present solution approach. Emphasis is placed on the methods found to eliminate solution instabilities. Excellent results are obtained for both accuracy and reduction of computational time.
Ghaedi, M; Azad, F Nasiri; Dashtian, K; Hajati, S; Goudarzi, A; Soylak, M
2016-10-01
Maximum malachite green (MG) adsorption onto ZnO nanorod-loaded activated carbon (ZnO-NR-AC) was achieved following the optimization of conditions, while the mass transfer was accelerated by ultrasound. The central composite design (CCD) and genetic algorithm (GA) were used to estimate the effect of individual variables and their mutual interactions on the MG adsorption as response and to optimize the adsorption process. The ZnO-NR-AC surface morphology and its properties were identified via FESEM, XRD and FTIR. Investigation of the adsorption equilibrium isotherm and kinetic models revealed that the experimental data were well fitted by the Langmuir isotherm and the pseudo-second-order kinetic model, respectively. It was shown that a small amount of ZnO-NR-AC (with an adsorption capacity of 20 mg g(-1)) is sufficient for the rapid removal of a high amount of MG dye in a short time (3.99 min). PMID:27318150
NASA Astrophysics Data System (ADS)
Xu, Zuwei; Zhao, Haibo; Zheng, Chuguang
2015-01-01
This paper proposes a comprehensive framework for accelerating population balance-Monte Carlo (PBMC) simulation of particle coagulation dynamics. By combining Markov jump model, weighted majorant kernel and GPU (graphics processing unit) parallel computing, a significant gain in computational efficiency is achieved. The Markov jump model constructs a coagulation-rule matrix of differentially-weighted simulation particles, so as to capture the time evolution of particle size distribution with low statistical noise over the full size range and as far as possible to reduce the number of time loopings. Here three coagulation rules are highlighted and it is found that constructing appropriate coagulation rule provides a route to attain the compromise between accuracy and cost of PBMC methods. Further, in order to avoid double looping over all simulation particles when considering the two-particle events (typically, particle coagulation), the weighted majorant kernel is introduced to estimate the maximum coagulation rates being used for acceptance-rejection processes by single-looping over all particles, and meanwhile the mean time-step of coagulation event is estimated by summing the coagulation kernels of rejected and accepted particle pairs. The computational load of these fast differentially-weighted PBMC simulations (based on the Markov jump model) is reduced greatly to be proportional to the number of simulation particles in a zero-dimensional system (single cell). Finally, for a spatially inhomogeneous multi-dimensional (multi-cell) simulation, the proposed fast PBMC is performed in each cell, and multiple cells are parallel processed by multi-cores on a GPU that can implement the massively threaded data-parallel tasks to obtain remarkable speedup ratio (comparing with CPU computation, the speedup ratio of GPU parallel computing is as high as 200 in a case of 100 cells with 10 000 simulation particles per cell). These accelerating approaches of PBMC are
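The majorant-kernel acceptance-rejection step described above can be sketched as follows. This is a minimal illustration under assumed names, not the paper's differentially-weighted GPU implementation: candidate pairs are drawn uniformly and accepted with probability K(i,j)/K_maj, so only the cheap majorant bound must dominate the true kernel everywhere.

```python
import random

def select_coagulation_pair(sizes, kernel, k_maj, rng=random):
    """Draw candidate pairs uniformly; accept a pair with probability
    kernel(vi, vj) / k_maj, where k_maj >= kernel for every pair."""
    n = len(sizes)
    while True:
        i, j = rng.sample(range(n), 2)   # distinct candidate pair
        if rng.random() < kernel(sizes[i], sizes[j]) / k_maj:
            return i, j

# usage: sum kernel K(a, b) = a + b, majorant = largest possible sum
sizes = [1.0, 2.0, 4.0, 8.0]
pair = select_coagulation_pair(sizes, lambda a, b: a + b, k_maj=12.0)
```

Because rejection only needs the ratio against the precomputed majorant, the double loop over all particle pairs is avoided, which is the efficiency argument the abstract makes.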
Xu, Zuwei; Zhao, Haibo; Zheng, Chuguang
2015-01-15
Irwin, John A.
1979-01-01
A gas turbine engine has an internal drive shaft including one end connected to a driven load and an opposite end connected to a turbine wheel and wherein the shaft has an in situ adjustable balance system near the critical center of a bearing span for the shaft including two 360° rings piloted on the outer diameter of the shaft at a point accessible through an internal engine panel; each of the rings has a small amount of material removed from its periphery whereby both of the rings are precisely unbalanced an equivalent amount; the rings are locked circumferentially together by radial serrations thereon; numbered tangs on the outside diameter of each ring identify the circumferential location of unbalance once the rings are locked together; an aft ring of the pair of rings has a spline on its inside diameter that mates with a like spline on the shaft to lock the entire assembly together.
NASA Astrophysics Data System (ADS)
Tarroja, Brian; Eichman, Joshua D.; Zhang, Li; Brown, Tim M.; Samuelsen, Scott
2015-03-01
A study has been performed that analyzes the effectiveness of utilizing plug-in vehicles to meet holistic environmental goals across the combined electricity and transportation sectors. In this study, plug-in hybrid electric vehicle (PHEV) penetration levels are varied from 0 to 60% and base renewable penetration levels are varied from 10 to 63%. The first part focused on the effect of installing plug-in hybrid electric vehicles on the environmental performance of the combined electricity and transportation sectors. The second part addresses impacts on the design and operation of load-balancing resources on the electric grid associated with fleet capacity factor, peaking and load-following generator capacity, efficiency, ramp rates, start-up events and the levelized cost of electricity. PHEVs using smart charging are found to counteract many of the disruptive impacts of intermittent renewable power on balancing generators for a wide range of renewable penetration levels, only becoming limited at high renewable penetration levels due to lack of flexibility and finite load size. This study highlights synergy between sustainability measures in the electric and transportation sectors and the importance of communicative dispatch of these vehicles.
Comparison of four parallel algorithms for domain decomposed implicit Monte Carlo.
Evans, Thomas M.; Urbatsch, Todd J.; Brunner, Thomas A.; Gentile, Nicholas A.
2005-06-01
We consider four asynchronous parallel algorithms for Implicit Monte Carlo (IMC) thermal radiation transport on spatially decomposed meshes. Two of the algorithms are from the production codes KULL from Lawrence Livermore National Laboratory and Milagro from Los Alamos National Laboratory. Improved versions of each of the existing algorithms are also presented. All algorithms were analyzed in an implementation of the KULL IMC package in ALEGRA, a Sandia National Laboratory high energy density physics code. The improved Milagro algorithm performed the best by scaling almost linearly out to 244 processors for well load balanced problems.
Comparison of four parallel algorithms for domain decomposed implicit Monte Carlo.
Evans, Thomas M. (Los Alamos National Laboratory, Los Alamos, NM); Urbatsch, Todd J. (Los Alamos National Laboratory, Los Alamos, NM); Brunner, Thomas A.; Gentile, Nicholas A. (Lawrence Livermore National Laboratory, Livermore, CA)
2004-12-01
Comparison of four parallel algorithms for domain decomposed implicit Monte Carlo
Brunner, Thomas A.; Urbatsch, Todd J.; Evans, Thomas M.; Gentile, Nicholas A.
2006-03-01
... it could be a sign of a balance problem. Balance problems can make you feel unsteady or as if ... related injuries, such as hip fracture. Some balance problems are due to problems in the inner ear. ...
Improved Regression Analysis of Temperature-Dependent Strain-Gage Balance Calibration Data
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2015-01-01
An improved approach is discussed that may be used to directly include first and second order temperature effects in the load prediction algorithm of a wind tunnel strain-gage balance. The improved approach was designed for the Iterative Method that fits strain-gage outputs as a function of calibration loads and uses a load iteration scheme during the wind tunnel test to predict loads from measured gage outputs. The improved approach assumes that the strain-gage balance is at a constant uniform temperature when it is calibrated and used. First, the method introduces a new independent variable for the regression analysis of the balance calibration data. The new variable is designed as the difference between the uniform temperature of the balance and a global reference temperature. This reference temperature should be the primary calibration temperature of the balance so that, if needed, a tare load iteration can be performed. Then, two temperature-dependent terms are included in the regression models of the gage outputs. They are the temperature difference itself and the square of the temperature difference. Simulated temperature-dependent data obtained from Triumph Aerospace's 2013 calibration of NASA's ARC-30K five-component semi-span balance is used to illustrate the application of the improved approach.
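The regression step described above can be sketched as an ordinary least-squares fit of one gage output against a load plus the temperature-difference terms. The single-load model and the variable names below are illustrative simplifications, not the balance's actual multi-component calibration math.

```python
import numpy as np

def fit_gage_output(load, temp, output, t_ref):
    """Least-squares fit: output ≈ b0 + b1*load + b2*dT + b3*dT**2,
    where dT = temp - t_ref is the new independent variable."""
    dt = temp - t_ref
    X = np.column_stack([np.ones_like(load), load, dt, dt**2])
    coef, *_ = np.linalg.lstsq(X, output, rcond=None)
    return coef

# synthetic example: known coefficients recovered from noiseless data
rng = np.random.default_rng(0)
load = rng.uniform(0, 100, 50)
temp = rng.uniform(10, 40, 50)
true = np.array([5.0, 2.0, 0.3, 0.01])
dt = temp - 20.0
out = true @ np.vstack([np.ones(50), load, dt, dt**2])
print(fit_gage_output(load, temp, out, t_ref=20.0))  # ≈ [5, 2, 0.3, 0.01]
```

Choosing t_ref as the primary calibration temperature makes dT vanish at calibration conditions, so the load-only terms retain their usual meaning there.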
14 CFR 23.423 - Maneuvering loads.
Code of Federal Regulations, 2011 CFR
2011-01-01
... Balancing Surfaces § 23.423 Maneuvering loads. Each horizontal surface and its supporting structure, and the...-down pitching conditions is the sum of the balancing loads at V and the specified value of the...
14 CFR 23.423 - Maneuvering loads.
Code of Federal Regulations, 2010 CFR
2010-01-01
Dynamic Congestion Control using MDB-Routing Algorithm
NASA Astrophysics Data System (ADS)
Anuradha, S.; Raghu Ram, G.
2014-01-01
This paper presents a high-throughput routing algorithm. The Modified Depth-Breadth (MDB) routing algorithm decides which node a packet should visit next in order to reach its final destination. Load balancing improves the performance of a distributed system by using the processing power of the entire system to smooth out periods of very high congestion at individual nodes, transferring some of the load of heavily loaded nodes to other nodes for processing. MDB achieves an average time per packet of 306.53, compared with 316.13 for DB routing. Results also show that the proposed MDB algorithm achieves an average time of 348, compared to 548 for DB routing, for 3500 packets on a 5 × 5 grid. Further results show that the number of dead packets is significantly reduced in the case of MDB. The work focuses on routing networks and routing tables; these tables hold the information a routing algorithm uses to decide which node a packet should visit next on the way to its final destination.
Technology Transfer Automated Retrieval System (TEKTRAN)
Reliable estimation of the surface energy balance from local to regional scales is crucial for many applications including weather forecasting, hydrologic modeling, irrigation scheduling, water resource management, and climate change research, just to name a few. Numerous models have been developed ...
NASA Technical Reports Server (NTRS)
Simanonok, K. E.; Fortney, S. M.; Ford, S. R.; Charles, J. B.; Ward, D. F.
1994-01-01
Shuttle astronauts currently drink approximately a quart of water with eight salt tablets before reentry to restore lost body fluid and thereby reduce the likelihood of cardiovascular instability and syncope during reentry and after landing. However, the saline loading countermeasure is not entirely effective in restoring orthostatic tolerance to preflight levels. We tested the hypothesis that the effectiveness of this countermeasure could be improved with the use of a vasopressin analog, 1-deamino-8-D-arginine vasopressin (dDAVP). The rationale for this approach is that reducing urine formation with exogenous vasopressin should increase the magnitude and duration of the vascular volume expansion produced by the saline load, and in so doing improve orthostatic tolerance during reentry and postflight.
NASA Astrophysics Data System (ADS)
Magirl, C. S.; Czuba, J. A.; Czuba, C. R.; Curran, C. A.
2012-12-01
Despite heavy sediment loads, large winter floods, and floodplain development, the rivers draining Mount Rainier, a 4,392-m glaciated stratovolcano within 85 km of sea level at Puget Sound, Washington, support important populations of anadromous salmonids, including Chinook salmon and steelhead trout, both listed as threatened under the Endangered Species Act. Aggressive river-management approaches of the early 20th century, such as bank armoring and gravel dredging, are being replaced by more ecologically sensitive approaches including setback levees. However, ongoing aggradation rates of up to 8 cm/yr in lowland reaches present acute challenges for resource managers tasked with ensuring flood protection without deleterious impacts to aquatic ecology. Using historical sediment-load data and a recent reservoir survey of sediment accumulation, rivers draining Mount Rainier were found to carry total sediment yields of 350 to 2,000 tonnes/km2/yr, notably larger than sediment yields of 50 to 200 tonnes/km2/yr typical for other Cascade Range rivers. An estimated 70 to 94% of the total sediment load in lowland reaches originates from the volcano. Looking toward the future, transport-capacity analyses and sediment-transport modeling suggest that large increases in bedload and associated aggradation will result from modest increases in rainfall and runoff that are predicted under future climate conditions. If large sediment loads and associated aggradation continue, creative solutions and long-term management strategies are required to protect people and structures in the floodplain downstream of Mount Rainier while preserving aquatic ecosystems.
ComprehensiveBench: a Benchmark for the Extensive Evaluation of Global Scheduling Algorithms
NASA Astrophysics Data System (ADS)
Pilla, Laércio L.; Bozzetti, Tiago C.; Castro, Márcio; Navaux, Philippe O. A.; Méhaut, Jean-François
2015-10-01
Parallel applications that present tasks with imbalanced loads or complex communication behavior usually do not exploit the underlying resources of parallel platforms to their full potential. In order to mitigate this issue, global scheduling algorithms are employed. As finding the optimal task distribution is an NP-Hard problem, identifying the most suitable algorithm for a specific scenario and comparing algorithms are not trivial tasks. In this context, this paper presents ComprehensiveBench, a benchmark for global scheduling algorithms that enables the variation of a vast range of parameters that affect performance. ComprehensiveBench can be used to assist in the development and evaluation of new scheduling algorithms, to help choose a specific algorithm for an arbitrary application, to emulate other applications, and to enable statistical tests. We illustrate its use in this paper with an evaluation of Charm++ periodic load balancers that stresses their characteristics.
Adaptive load sharing in heterogeneous distributed systems
NASA Astrophysics Data System (ADS)
Mirchandaney, Ravi; Towsley, Don; Stankovic, John A.
1990-08-01
In this paper, we study the performance characteristics of simple load sharing algorithms for heterogeneous distributed systems. We assume that nonnegligible delays are encountered in transferring jobs from one node to another. We analyze the effects of these delays on the performance of two threshold-based algorithms called Forward and Reverse. We formulate queuing theoretic models for each of the algorithms operating in heterogeneous systems under the assumption that the job arrival process at each node is Poisson and the service times and job transfer times are exponentially distributed. The models are solved using the Matrix-Geometric solution technique. These models are used to study the effects of different parameters and algorithm variations on the mean job response time: e.g., the effects of varying the thresholds, the impact of changing the probe limit, the impact of biasing the probing, and the optimal response times over a large range of loads and delays. Wherever relevant, the results of the models are compared with the M/M/1 model, representing no load balancing (hereafter referred to as NLB), and the M/M/K model, which is an achievable lower bound (hereafter referred to as LB).
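A sender-initiated threshold policy of the kind the abstract calls "Forward" can be sketched as below. This is a minimal single-transfer illustration under assumed names; the paper's queueing-theoretic analysis and transfer delays are not modeled here.

```python
import random

def forward_transfer(queues, src, threshold, probe_limit, rng=random):
    """Sender-initiated threshold policy: if node `src` is overloaded,
    probe up to `probe_limit` other nodes and move one job to the
    first node found below the threshold. Returns the target or None."""
    if queues[src] <= threshold:
        return None                        # not overloaded; keep the job
    others = [n for n in range(len(queues)) if n != src]
    for target in rng.sample(others, min(probe_limit, len(others))):
        if queues[target] < threshold:     # probe succeeded
            queues[src] -= 1
            queues[target] += 1
            return target
    return None                            # all probes failed

# usage: node 0 holds 5 jobs with threshold 2, so one job migrates
queues = [5, 0, 1, 4]
forward_transfer(queues, 0, threshold=2, probe_limit=3)
```

The probe limit caps the overhead of a placement decision, which is why the abstract treats it as a tunable parameter; a "Reverse" policy would instead have underloaded nodes probe for work.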
Investigating and Analyzing Applied Loads Higher Than Limit Loads
NASA Technical Reports Server (NTRS)
Karkehabadi, R.; Rhew, R. D.
2004-01-01
The results of the analysis for Balance 1621 indicate that the stresses are high near sharp corners. For balances yet to be designed, it is important to increase the size of the fillets to relieve some of the high stresses. For the existing balances, the stresses are high and do not satisfy the established criteria. Two options are considered here: one is a possible modification of the existing balances, and the other is to consider other load options. A balance can be redesigned to enhance its structural integrity. Because an existing balance must be modified in place, the fillet sizes cannot be increased without further changes: some material must be removed from the balance to accommodate larger fillets. Researchers are interested in being able to apply some components of the load on the balance above the assigned limit loads. Is it possible to increase the load on the same balance while maintaining the required factor of safety? Some loads were increased above their limit loads and analyzed here.
Ghaedi, M; Shojaeipour, E; Ghaedi, A M; Sahraei, Reza
2015-05-01
In this study, copper nanowires loaded on activated carbon (Cu-NWs-AC) were used as a novel, efficient adsorbent for the removal of malachite green (MG) from aqueous solution. This new material was synthesized through a simple protocol, and its surface properties such as surface area, pore volume and functional groups were characterized with different techniques such as XRD, BET and FESEM analysis. The relation between removal percentage and variables such as solution pH, adsorbent dosage (0.005, 0.01, 0.015, 0.02 and 0.1 g), contact time (1-40 min) and initial MG concentration (5, 10, 20, 70 and 100 mg/L) was investigated and optimized. A three-layer artificial neural network (ANN) model was utilized to predict the malachite green dye removal (%) by Cu-NWs-AC following the conduction of 248 experiments. When the training of the ANN was performed, the parameters of the ANN model were as follows: a linear transfer function (purelin) at the output layer, the Levenberg-Marquardt algorithm (LMA), and a tangent sigmoid transfer function (tansig) at the hidden layer with 11 neurons. A minimum mean squared error (MSE) of 0.0017 and a coefficient of determination (R(2)) of 0.9658 were found for prediction and modeling of dye removal using the testing data set. A good agreement between experimental data and predicted data using the ANN model was obtained. Fitting the experimental data under the previously optimized conditions confirms the suitability of the Langmuir isotherm model for their explanation, with a maximum adsorption capacity of 434.8 mg/g at 25°C. Kinetic studies at various adsorbent masses and initial MG concentrations show that the MG maximum removal percentage was achieved within 20 min. The adsorption of MG follows the pseudo-second-order model with a combination of the intraparticle diffusion model. PMID:25699703
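The model family described (tansig hidden layer, purelin output) can be sketched as below. For brevity this toy uses a 3-neuron hidden layer, a one-dimensional synthetic stand-in for the measured removal data, and SciPy's default trust-region least-squares solver rather than the paper's 11 neurons and Levenberg-Marquardt training.

```python
import numpy as np
from scipy.optimize import least_squares

def ann(params, x, hidden=3):
    """Three-layer net: tanh hidden layer ('tansig'), linear output ('purelin')."""
    w1 = params[:hidden]                 # input -> hidden weights
    b1 = params[hidden:2 * hidden]       # hidden biases
    w2 = params[2 * hidden:3 * hidden]   # hidden -> output weights
    b2 = params[3 * hidden]              # output bias
    return np.tanh(np.outer(x, w1) + b1) @ w2 + b2

# fit a smooth 1-D toy response; the real model maps pH, dosage,
# contact time and concentration to removal (%)
rng = np.random.default_rng(1)
x = np.linspace(-2, 2, 40)
y = np.sin(x)                            # synthetic stand-in for measurements
p0 = rng.normal(size=3 * 3 + 1)          # random initial weights
res = least_squares(lambda p: ann(p, x) - y, p0)
```

Minimizing the residual vector directly (rather than a scalar loss) is what makes Gauss-Newton-style solvers such as Levenberg-Marquardt the natural fit for this network size.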
NASA Astrophysics Data System (ADS)
Ghaedi, M.; Shojaeipour, E.; Ghaedi, A. M.; Sahraei, Reza
2015-05-01
NASA Technical Reports Server (NTRS)
Srinivasan, R. S.; Simanonok, K. E.; Charles, J. B.
1994-01-01
Fluid loading (FL) before Shuttle reentry is a countermeasure currently in use by NASA to improve the orthostatic tolerance of astronauts during reentry and postflight. The fluid load consists of water and salt tablets equivalent to 32 oz (946 ml) of isotonic saline. However, the effectiveness of this countermeasure has been observed to decrease with the duration of spaceflight. The countermeasure's effectiveness may be improved by enhancing fluid retention using analogs of vasopressin such as lypressin (LVP) and desmopressin (dDAVP). In a computer simulation study reported previously, we attempted to assess the improvement in fluid retention obtained by the use of LVP administered before FL. The present study is concerned with the use of dDAVP. In a recent 24-hour, 6 degree head-down tilt (HDT) study involving seven men, dDAVP was found to improve orthostatic tolerance as assessed by both lower body negative pressure (LBNP) and stand tests. The treatment restored Luft's cumulative stress index (cumulative product of magnitude and duration of LBNP) to nearly pre-bedrest level. The heart rate was lower and stroke volume was marginally higher at the same LBNP levels with administration of dDAVP compared to placebo. Lower heart rates were also observed with dDAVP during stand test, despite the lower level of cardiovascular stress. These improvements were seen with only a small but significant increase in plasma volume of approximately 3 percent. This paper presents a computer simulation analysis of some of the results of this HDT study.
Woodruff, S.B.
1992-05-01
The Transient Reactor Analysis Code (TRAC), which features a two-fluid treatment of thermal-hydraulics, is designed to model transients in water reactors and related facilities. One of the major computational costs associated with TRAC and similar codes is calculating constitutive coefficients. Although the formulations for these coefficients are local, the costs are flow-regime- or data-dependent; i.e., the computations needed for a given spatial node often vary widely as a function of time. Consequently, poor load balancing will degrade efficiency on either vector or data-parallel architectures when the data are organized according to spatial location. Unfortunately, a general automatic solution to the load-balancing problem associated with data-dependent computations is not yet available for massively parallel architectures. This document discusses why developers should consider alternative algorithms, such as a neural net representation, that do not exhibit load-balancing problems.
A Robust Load Shedding Strategy for Microgrid Islanding Transition
Liu, Guodong; Xiao, Bailu; Starke, Michael R; Ceylan, Oguzhan; Tomsovic, Kevin
2016-01-01
A microgrid is a group of interconnected loads and distributed energy resources. It can operate in grid-connected mode to exchange energy with the main grid or run autonomously as an island in emergency mode. However, the transition of a microgrid from grid-connected to islanded mode is usually associated with excess load (or generation), which must be shed (or spilled). For this condition, this paper proposes a robust load shedding strategy for microgrid islanding transition, which takes into account the uncertainties of renewable generation in the microgrid and guarantees the balance between load and generation after islanding. A robust optimization model is formulated to minimize the total operation cost, including fuel cost and penalties for load shedding. The proposed robust load shedding strategy works as a backup plan and is updated at a prescribed interval. It assures a feasible operating point after islanding given the uncertainty of renewable generation. The proposed algorithm is demonstrated on a simulated microgrid consisting of a wind turbine, a PV panel, a battery, two distributed generators (DGs), a critical load, and an interruptible load. Numerical simulation results validate the proposed algorithm.
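The worst-case balancing idea behind such a strategy can be illustrated with a minimal greedy sketch, not the paper's robust optimization model: dispatchable capacity plus the lower bound of the renewable forecast must cover the surviving load, and interruptible loads are shed in order of increasing penalty. All names and numbers below are illustrative assumptions.

```python
def shed_for_islanding(dg_capacity_kw, renewable_lo_kw, loads):
    """Greedy robust load shedding: shed enough load so that even the
    worst-case (lowest) renewable output balances the island.

    loads: list of (name, demand_kw, shed_penalty) tuples.
    Returns a list of (name, kw_shed), cutting lowest-penalty loads first.
    """
    available = dg_capacity_kw + renewable_lo_kw      # worst-case supply
    deficit = sum(kw for _, kw, _ in loads) - available
    plan = []
    for name, kw, _penalty in sorted(loads, key=lambda t: t[2]):
        if deficit <= 0:
            break
        cut = min(kw, deficit)
        plan.append((name, cut))
        deficit -= cut
    return plan
```

For example, with 50 kW of DG capacity, a 10 kW worst-case renewable output, and 80 kW of total demand, the sketch sheds 20 kW starting from the least-penalized loads.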
Bankruptcy Problem Approach to Load-Shedding in Agent-Based Microgrid Operation
NASA Astrophysics Data System (ADS)
Kim, Hak-Man; Kinoshita, Tetsuo; Lim, Yujin; Kim, Tai-Hoon
Research, development, and demonstration projects on microgrids have progressed in many countries. Furthermore, microgrids are expected to be introduced into power grids as eco-friendly small-scale power grids in the near future. Load-shedding is an unavoidable measure for maintaining the balance between power supply and demand at a specific frequency such as 50 Hz or 60 Hz. Load-shedding inconveniences consumers and therefore should be performed minimally. Recently, agent-based microgrid operation has been studied, and new algorithms for autonomous operation, including load-shedding, are required. The bankruptcy problem deals with distributing insufficient resources among claimants. In this paper, we approach the load-shedding problem as a bankruptcy problem and adopt the Talmud rule as the allocation algorithm. Load-shedding using the Talmud rule is tested in islanded microgrid operation based on a multiagent system.
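The Talmud rule itself is easy to state computationally: if the available supply is at most half the total claims, apply constrained equal awards (CEA) to the half-claims; otherwise grant each claimant half its claim and split the remaining shortfall by constrained equal losses. Below is a minimal Python sketch (with bisection for the CEA level), not the authors' agent implementation.

```python
def cea(estate, claims, iters=100):
    """Constrained equal awards: each claimant gets min(claim, lam),
    with lam found by bisection so the awards sum to the estate."""
    lo, hi = 0.0, max(claims)
    for _ in range(iters):
        lam = (lo + hi) / 2
        if sum(min(c, lam) for c in claims) < estate:
            lo = lam
        else:
            hi = lam
    return [min(c, (lo + hi) / 2) for c in claims]

def talmud(estate, claims):
    """Talmud (contested garment) rule for dividing an insufficient estate."""
    half = [c / 2 for c in claims]
    if estate <= sum(half):
        return cea(estate, half)
    # above the halfway point: award half-claims, split losses equally (capped)
    losses = cea(sum(claims) - estate, half)
    return [c - l for c, l in zip(claims, losses)]
```

With claims of (100, 200, 300), the rule reproduces the classic Talmudic divisions: an estate of 100 splits equally, 200 yields (50, 75, 75), and 300 yields the proportional (50, 100, 150).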
Method and apparatus for calibrating multi-axis load cells in a dexterous robot
NASA Technical Reports Server (NTRS)
Wampler, II, Charles W. (Inventor); Platt, Jr., Robert J. (Inventor)
2012-01-01
A robotic system includes a dexterous robot having robotic joints, angle sensors adapted for measuring joint angles at a corresponding one of the joints, load cells for measuring a set of strain values imparted to a corresponding one of the load cells during a predetermined pose of the robot, and a host machine. The host machine is electrically connected to the load cells and angle sensors, and receives the joint angle values and strain values during the predetermined pose. The robot presses together mating pairs of load cells to form the poses. The host machine executes an algorithm to process the joint angles and strain values, and from the set of all calibration matrices that minimize error in the force balance equations, selects the set of calibration matrices closest in value to a pre-specified value. A method for calibrating the load cells via the algorithm is also provided.
Algorithms and Algorithmic Languages.
ERIC Educational Resources Information Center
Veselov, V. M.; Koprov, V. M.
This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…
LSPC is the Loading Simulation Program in C++, a watershed modeling system that includes streamlined Hydrologic Simulation Program Fortran (HSPF) algorithms for simulating hydrology, sediment, and general water quality on land as well as a simplified stream transport model. LSPC ...
Energy Aware Clustering Algorithms for Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Rakhshan, Noushin; Rafsanjani, Marjan Kuchaki; Liu, Chenglian
2011-09-01
The sensor nodes deployed in wireless sensor networks (WSNs) are extremely power constrained, so maximizing the lifetime of the entire network is a primary design consideration. In wireless sensor networks, hierarchical network structures have the advantage of providing scalable and energy-efficient solutions. In this paper, we investigate different clustering algorithms for WSNs and compare them based on metrics such as clustering distribution, cluster load balancing, cluster head (CH) selection strategy, CH role rotation, node mobility, cluster overlapping, intra-cluster communications, reliability, security, and location awareness.
NASA Technical Reports Server (NTRS)
Eidson, T. M.; Erlebacher, G.
1994-01-01
While parallel computers offer significant computational performance, it is generally necessary to evaluate several programming strategies. Two programming strategies for a fairly common problem, a periodic tridiagonal solver, are developed and evaluated. Simple model calculations as well as timing results are presented to evaluate the various strategies. The particular tridiagonal solver evaluated is used in many computational fluid dynamics simulation codes. The feature that makes this algorithm unique is that these simulation codes usually require simultaneous solutions for multiple right-hand sides (RHS) of the system of equations. Each RHS solution is independent and thus can be computed in parallel. Thus a Gaussian elimination type algorithm can be used in a parallel computation, and more complicated approaches such as cyclic reduction are not required. The two strategies are a transpose strategy and a distributed solver strategy. For the transpose strategy, the data is moved so that a subset of all the RHS problems is solved on each of the several processors. This usually requires significant data movement between processor memories across a network. The second strategy instead passes the data across processor boundaries in a chained manner, which usually requires significantly less data movement. An approach to accomplish this second strategy in a near-perfect load-balanced manner is developed. In addition, an algorithm is shown that directly transforms a sequential Gaussian elimination type algorithm into the parallel chained, load-balanced algorithm.
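The multiple-RHS structure the abstract exploits can be seen in a serial sketch: one forward-elimination sweep of the Thomas algorithm is shared by all right-hand sides, and under the transpose strategy each processor would run exactly this routine on its own subset of RHS columns. This is an illustrative non-periodic NumPy sketch, not the paper's solver.

```python
import numpy as np

def thomas_multi_rhs(a, b, c, d):
    """Solve a tridiagonal system for many right-hand sides at once.

    a: sub-diagonal (n-1), b: diagonal (n), c: super-diagonal (n-1),
    d: (n, m) array with one column per right-hand side.
    """
    n, m = d.shape
    bp = b.astype(float).copy()
    dp = d.astype(float).copy()
    for i in range(1, n):                  # forward elimination, shared by all RHS
        w = a[i - 1] / bp[i - 1]
        bp[i] -= w * c[i - 1]
        dp[i] -= w * dp[i - 1]
    x = np.empty_like(dp)
    x[-1] = dp[-1] / bp[-1]
    for i in range(n - 2, -1, -1):         # back substitution, vectorized over RHS
        x[i] = (dp[i] - c[i] * x[i + 1]) / bp[i]
    return x
```

Because the elimination coefficients depend only on the matrix, the per-RHS cost is just the two sweeps over `dp` and `x`, which is what makes distributing RHS columns across processors attractive.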
NASA Astrophysics Data System (ADS)
Kong, Xiangxi; Zhang, Xueliang; Chen, Xiaozhe; Wen, Bangchun; Wang, Bo
2016-05-01
In this paper, phase and speed synchronization control of four eccentric rotors (ERs) driven by induction motors in a linear vibratory feeder with unknown time-varying load torques is studied. Firstly, the electromechanical coupling model of the linear vibratory feeder is established by associating the induction motor model with the dynamic model of the system, which is a typical underactuated model. According to the characteristics of the linear vibratory feeder, the complex control problem of the underactuated electromechanical coupling model reduces to phase and speed synchronization control of the four ERs. In order to keep the four ERs operating synchronously with zero phase differences, phase and speed synchronization controllers are designed by employing an adaptive sliding mode control (ASMC) algorithm via a modified master-slave structure. The stability of the controllers is proved by the Lyapunov stability theorem. The proposed controllers are verified by simulation in Matlab/Simulink and compared with the conventional sliding mode control (SMC) algorithm. The results show the proposed controllers can reject the time-varying load torques effectively and the four ERs can operate synchronously with zero phase differences. Moreover, the control performance is better than that of the conventional SMC algorithm and the chattering phenomenon is attenuated. Furthermore, the effects of reference speed and parametric perturbations are discussed to show the strong robustness of the proposed controllers. Finally, experiments on a simple vibratory test bench are conducted with the proposed controllers and without control, respectively, to further validate their effectiveness.
Evaluating the Impact of Solar Generation on Balancing Requirements in Southern Nevada System
Ma, Jian; Lu, Shuai; Etingov, Pavel V.; Makarov, Yuri V.
2012-07-26
In this paper, the impacts of solar photovoltaic (PV) generation on balancing requirements, including regulation and load following, in the Southern Nevada balancing area are analyzed. The methodology is based on the “swinging door” algorithm and a probability-box method developed by PNNL. The regulation and load following signals mimic the system’s scheduling and real-time dispatch processes. Load, solar PV generation, and distributed PV generation (DG) data are used in the simulation. Different levels of solar PV generation and DG penetration profiles are used in the study. Sensitivity of the regulation requirements with respect to real-time solar PV generation forecast errors is analyzed.
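The "swinging door" algorithm referenced here is, at its core, a trend-compression filter: a sample is archived only when no straight line within a deviation band can cover all samples since the last archived point. A minimal sketch of that classic test follows; it is illustrative and not PNNL's implementation.

```python
def swinging_door(ts, vs, dev):
    """Return indices of archived points such that the piecewise-linear
    trend through them stays within +/- dev of every sample
    (classic swinging-door compression test; ts must be increasing)."""
    kept = [0]
    anchor = 0
    up, low = float("-inf"), float("inf")   # steepest lower/upper "door" slopes
    for i in range(1, len(ts)):
        dt = ts[i] - ts[anchor]
        up = max(up, (vs[i] - (vs[anchor] + dev)) / dt)
        low = min(low, (vs[i] - (vs[anchor] - dev)) / dt)
        if up > low:                        # doors crossed: close the segment
            anchor = i - 1
            kept.append(anchor)
            dt = ts[i] - ts[anchor]
            up = (vs[i] - (vs[anchor] + dev)) / dt
            low = (vs[i] - (vs[anchor] - dev)) / dt
    kept.append(len(ts) - 1)
    return kept
```

A perfectly linear trend compresses to its two endpoints, while each kink that exceeds the band forces a new archived point; applied to a load signal, the slow archived trend plays the role of load following and the in-band residual the role of regulation.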
NASA Astrophysics Data System (ADS)
Zhang, Xiaoning; Li, Lemin; Wang, Sheng; Wei, Xuetao
2008-04-01
In this paper, we propose a novel robust routing algorithm based on Valiant load-balancing under the model of polyhedral uncertainty (i.e., the hose uncertainty model) for WDM (wavelength division multiplexing) mesh networks. The Valiant load-balanced robust routing algorithm constructs a stable virtual topology on which any traffic pattern under the hose uncertainty model can be efficiently routed. Considering that there are multi-granularity connection requests in WDM mesh networks, we propose a method called hose-model separation to solve this problem for the proposed algorithm. Our goal is to minimize the total network cost when constructing the stable virtual topology that assures robust routing for the hose model in WDM mesh networks. A mathematical formulation (an integer linear program, ILP) of the Valiant load-balanced robust routing algorithm is presented. Two fast heuristic approaches are also proposed and evaluated. We compare the network throughput of the virtual topology constructed by the proposed algorithm with that of a traditional traffic grooming algorithm under the same total network cost by computer simulation.
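Valiant load-balancing's key property can be checked numerically: routing every flow in two phases through uniformly chosen intermediates bounds each directed link's load by 2r/N for any hose-feasible traffic matrix (row and column sums at most r). The sketch below uses the full-mesh abstraction only, not the paper's WDM-specific ILP.

```python
def vlb_link_loads(traffic):
    """Two-phase Valiant routing on a full mesh of N nodes: each flow (s, d)
    is split evenly over all N intermediates k (phase 1: s->k, phase 2: k->d)."""
    n = len(traffic)
    load = [[0.0] * n for _ in range(n)]
    for s in range(n):
        for d in range(n):
            share = traffic[s][d] / n       # share routed via each intermediate
            for k in range(n):
                if s != k:
                    load[s][k] += share     # phase 1 hop
                if k != d:
                    load[k][d] += share     # phase 2 hop
    return load
```

Because the load on link (i, j) depends only on node i's row sum and node j's column sum, the same virtual topology capacity of 2r/N per link serves every traffic matrix in the hose polytope, which is exactly why the constructed topology is "stable."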
Baby Carriage: Infants Walking with Loads
ERIC Educational Resources Information Center
Garciaguirre, Jessie S.; Adolph, Karen E.; Shrout, Patrick E.
2007-01-01
Maintaining balance is a central problem for new walkers. To examine how infants cope with the additional balance control problems induced by load carriage, 14-month-olds were loaded with 15% of their body weight in shoulder-packs. Both symmetrical and asymmetrical loads disrupted alternating gait patterns and caused less mature footfall patterns.…
Hydraulic Calibrator for Strain-Gauge Balances
NASA Technical Reports Server (NTRS)
Skelly, Kenneth; Ballard, John
1987-01-01
Instrument for calibrating strain-gauge balances uses hydraulic actuators and load cells. Eliminates effects of nonparallelism, nonperpendicularity, and changes of cable directions upon vector sums of applied forces. Errors due to cable stretching, pulley friction, and weight inaccuracy also eliminated. New instrument rugged and transportable. Set up quickly. Developed to apply known loads to wind-tunnel models with encapsulated strain-gauge balances, also adapted for use in calibrating dynamometers, load sensors on machinery and laboratory instruments.
Frequency effects on the stability of a journal bearing for periodic loading
NASA Technical Reports Server (NTRS)
Vijayaraghavan, D.; Brewe, D. E.
1991-01-01
The stability of a journal bearing is numerically predicted when a unidirectional periodic external load is applied. The analysis is performed using a cavitation algorithm, which mimics the Jakobsson-Floberg and Olsson (JFO) theory by accounting for the mass balance through the complete bearing. Hence, the history of the film is taken into consideration. The loading pattern is taken to be sinusoidal and the frequency of the load cycle is varied. The results are compared with the predictions using Reynolds boundary conditions for both film rupture and reformation. With such comparisons, the need for accurately predicting the cavitation regions for complex loading patterns is clearly demonstrated. For a particular frequency of loading, the effects of mass, amplitude of load variation and frequency of journal speed are also investigated. The journal trajectories, transient variations in fluid film forces, net surface velocity and minimum film thickness, and pressure profiles are also presented.
... a new type of balance therapy using computerized, virtual reality. UPMC associate professor Susan Whitney, Ph.D., developed ... a virtual grocery store in the university's Medical Virtual Reality Center. Patients walk on a treadmill and safely ...
Yu, Zhenhua; Fu, Xiao; Cai, Yuanli; Vuran, Mehmet C
2011-01-01
A reliable energy-efficient multi-level routing algorithm in wireless sensor networks is proposed. The proposed algorithm considers the residual energy, number of the neighbors and centrality of each node for cluster formation, which is critical for well-balanced energy dissipation of the network. In the algorithm, a knowledge-based inference approach using fuzzy Petri nets is employed to select cluster heads, and then the fuzzy reasoning mechanism is used to compute the degree of reliability in the route sprouting tree from cluster heads to the base station. Finally, the most reliable route among the cluster heads can be constructed. The algorithm not only balances the energy load of each node but also provides global reliability for the whole network. Simulation results demonstrate that the proposed algorithm effectively prolongs the network lifetime and reduces the energy consumption. PMID:22163802
Lifeline-based Global Load Balancing
Saraswat, Vijay A.; Kambadur, Prabhanjan; Kodali, Sreedhar; Grove, David; Krishnamoorthy, Sriram
2011-02-12
On shared-memory systems, Cilk-style work-stealing has been used to effectively parallelize irregular task-graph based applications such as Unbalanced Tree Search (UTS). There are two main difficulties in extending this approach to distributed memory. In the shared memory approach, thieves (nodes without work) constantly attempt to asynchronously steal work from randomly chosen victims until they find work. In distributed memory, thieves cannot autonomously steal work from a victim without disrupting its execution. When work is sparse, this results in performance degradation. In essence, a direct extension of traditional work-stealing to distributed memory violates the work-first principle underlying work-stealing. Further, thieves spend useless CPU cycles attacking victims that have no work, resulting in system inefficiencies in multi-programmed contexts. Second, it is non-trivial to detect active distributed termination (detect that programs at all nodes are looking for work, hence there is no work). This problem is well-studied and requires careful design for good performance. Unfortunately, in most existing languages/frameworks, application developers are forced to implement their own distributed termination detection. In this paper, we develop a simple set of ideas that allow work-stealing to be efficiently extended to distributed memory. First, we introduce lifeline graphs: low-degree, low diameter, fully connected directed graphs. Such graphs can be constructed from k-dimensional hypercubes. When a node is unable to find work after w unsuccessful steals, it quiesces after informing the outgoing edges in its lifeline graph. Quiescent nodes do not disturb other nodes. A quiesced node is reactivated when work arrives from a lifeline and itself shares this work with those of its incoming lifelines that are activated. Termination occurs precisely when computation at all nodes has quiesced. 
In a language such as X10, such passive distributed termination can be detected automatically using the finish construct -- no application code is necessary.
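The lifeline graphs described above can be built, as in the X10 global load balancer, from a base-b, z-digit "cyclic hypercube": node p's outgoing lifelines increment one digit of p modulo b, giving out-degree at most z and low diameter. A sketch, assuming the node count is exactly b^z:

```python
def lifeline_edges(p, base, digits):
    """Outgoing lifelines of node p in a base**digits cyclic hypercube:
    increment each base-b digit of p modulo b."""
    out = []
    for d in range(digits):
        step = base ** d
        digit = (p // step) % base
        q = p + ((digit + 1) % base - digit) * step
        if q != p:                 # q == p only in the degenerate base == 1 case
            out.append(q)
    return out
```

With base 2 the construction reduces to the ordinary hypercube (each edge flips one bit), so a wave of work arriving at any node can reactivate every quiesced node in at most `digits` lifeline hops.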
Dynamic Load Balancing for Adaptive Unstructured Grids
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Saini, Subhash (Technical Monitor)
1998-01-01
Dynamic mesh adaptation on unstructured grids is a powerful tool for computing unsteady three-dimensional problems that require grid modifications to efficiently resolve solution features. By locally refining and coarsening the mesh to capture phenomena of interest, such procedures make standard computational methods more cost effective. Highly refined meshes are required to accurately capture shock waves, contact discontinuities, vortices, and shear layers in fluid flow problems. Adaptive meshes have also proved to be useful in several other areas of computational science and engineering like computer vision and graphics, semiconductor device modeling, and structural mechanics. Local mesh adaptation provides the opportunity to obtain solutions that are comparable to those obtained on globally-refined grids but at a much lower cost. Additional information is contained in the original extended abstract.
A Comparison of Forecast Error Generators for Modeling Wind and Load Uncertainty
Lu, Ning; Diao, Ruisheng; Hafen, Ryan P.; Samaan, Nader A.; Makarov, Yuri V.
2013-07-25
This paper presents four algorithms to generate random forecast error time series, and compares their performance. The error time series are used to create real-time (RT), hour-ahead (HA), and day-ahead (DA) wind and load forecast time series that statistically match historically observed forecasting data sets used in power grid operation, in order to study the net load balancing need in variable generation integration studies. The four algorithms are truncated-normal distribution models, state-space based Markov models, seasonal autoregressive moving average (ARMA) models, and a stochastic-optimization based approach. The comparison is made using historical DA load forecast and actual load values to generate new sets of DA forecasts with similar statistical forecast-error characteristics (i.e., mean, standard deviation, autocorrelation, and cross-correlation). The results show that all methods generate satisfactory results. One method may preserve one or two required statistical characteristics better than the other methods, but may not preserve the remaining characteristics as well. Because the wind and load forecast error generators are used in wind integration studies to produce wind and load forecast time series for stochastic planning processes, it is sometimes critical to use multiple methods to generate the error time series to obtain a statistically robust result. Therefore, this paper discusses and compares the capabilities of each algorithm to preserve the characteristics of the historical forecast data sets.
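As a minimal illustration of the ARMA/state-space family of generators (not the paper's exact models), an AR(1) process reproduces a target standard deviation and lag-1 autocorrelation, the two simplest characteristics the synthetic errors must match:

```python
import numpy as np

def ar1_forecast_errors(n, sigma, rho, seed=0):
    """Synthetic forecast-error series with stationary std `sigma`
    and lag-1 autocorrelation `rho` (AR(1) sketch)."""
    rng = np.random.default_rng(seed)
    e = np.empty(n)
    e[0] = rng.normal(0.0, sigma)
    innov_sd = sigma * np.sqrt(1.0 - rho ** 2)   # keeps the variance stationary
    for t in range(1, n):
        e[t] = rho * e[t - 1] + rng.normal(0.0, innov_sd)
    return e
```

Adding such a series to an actual-load trace yields a plausible DA forecast trace; matching cross-correlation between wind and load errors, as the paper requires, needs the richer Markov or optimization-based generators.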
An improved method for determining force balance calibration accuracy
NASA Astrophysics Data System (ADS)
Ferris, Alice T.
The results of an improved statistical method used at Langley Research Center for determining and stating the accuracy of a force balance calibration are presented. The application of the method for initial loads, initial load determination, auxiliary loads, primary loads, and proof loads is described. The data analysis is briefly addressed.
Multi-jagged: A scalable parallel spatial partitioning algorithm
Deveci, Mehmet; Rajamanickam, Sivasankaran; Devine, Karen D.; Catalyurek, Umit V.
2015-03-18
Geometric partitioning is fast and effective for load-balancing dynamic applications, particularly those requiring geometric locality of data (particle methods, crash simulations). We present, to our knowledge, the first parallel implementation of a multidimensional-jagged geometric partitioner. In contrast to the traditional recursive coordinate bisection algorithm (RCB), which recursively bisects subdomains perpendicular to their longest dimension until the desired number of parts is obtained, our algorithm does recursive multi-section with a given number of parts in each dimension. By computing multiple cut lines concurrently and intelligently deciding when to migrate data while computing the partition, we minimize data movement compared to efficient implementations of recursive bisection. We demonstrate the algorithm's scalability and quality relative to the RCB implementation in Zoltan on both real and synthetic datasets. Our experiments show that the proposed algorithm performs and scales better than RCB in terms of run-time without degrading the load balance. Lastly, our implementation partitions 24 billion points into 65,536 parts within a few seconds and exhibits near perfect weak scaling up to 6K cores.
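The multi-section idea is easy to sketch serially: cut the domain into p_x slabs at x-quantiles, then cut each slab independently at y-quantiles, producing p_x × p_y balanced parts in two sweeps instead of log2(p) recursive bisections. Below is an illustrative serial, unweighted NumPy sketch; the paper's contribution is the scalable parallel version.

```python
import numpy as np

def multi_jagged_2d(points, px, py):
    """Assign each 2-D point a part id in [0, px*py) via one x multi-section
    followed by an independent y multi-section inside each x-slab."""
    pts = np.asarray(points, dtype=float)
    labels = np.empty(len(pts), dtype=int)
    xcuts = np.quantile(pts[:, 0], np.linspace(0, 1, px + 1)[1:-1])
    xbin = np.searchsorted(xcuts, pts[:, 0])
    for i in range(px):
        mask = xbin == i
        ys = pts[mask, 1]
        ycuts = np.quantile(ys, np.linspace(0, 1, py + 1)[1:-1])
        labels[mask] = i * py + np.searchsorted(ycuts, ys)
    return labels
```

Because the cuts are quantiles of the point distribution, each of the px*py parts receives a near-equal share of the points; the jagged structure comes from each slab choosing its own y-cuts.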
David Chassin, Pavel Etingov
2013-04-30
The LMDT software automates the preparation of load composite model data in the formats supported by the major power system software vendors (GE and Siemens). Proper representation of the load composite model in power system dynamic analysis is very important. Software tools for power system simulation like GE PSLF and Siemens PSSE already include algorithms for load composite modeling. However, these tools require that the input information on the composite load be provided in custom formats. Preparation of this data is time consuming and requires multiple manual operations; the LMDT software automates this process. The software is designed to generate composite load model data, using default load composition data, motor information, and bus information as input. It processes the input information and produces a load composition model, which can be stored in the .dyd format supported by the GE PSLF package or the .dyr format supported by the Siemens PSSE package.
NASA Technical Reports Server (NTRS)
1988-01-01
TherEx Inc.'s AT-1 Computerized Ataxiameter precisely evaluates posture and balance disturbances that commonly accompany neurological and musculoskeletal disorders. The complete system includes two strain-gauged footplates, signal conditioning circuitry, a computer monitor, printer, and a stand-alone tiltable balance platform. The AT-1 serves as an assessment tool, treatment monitor, and rehabilitation training device. It allows the clinician to document quantitatively the outcome of treatment and analyze data over time to develop outcome standards for several classifications of patients. It can specifically evaluate the effects of surgery, drug treatment, physical therapy or prosthetic devices.
ERIC Educational Resources Information Center
Mills, Allan
2014-01-01
Theory predicts that an egg-shaped body should rest in stable equilibrium when on its side, balance vertically in metastable equilibrium on its broad end and be completely unstable on its narrow end. A homogeneous solid egg made from wood, clay or plastic behaves in this way, but a real egg will not stand on either end. It is shown that this…
Balance (or Vestibular) Rehabilitation
Audiologic (hearing), balance, and medical diagnostic tests help indicate whether you are a candidate for vestibular (balance) rehabilitation. Vestibular rehabilitation is an individualized balance ...
ERIC Educational Resources Information Center
O'Dell, Robin S.
2012-01-01
There are two primary interpretations of the mean: as a leveler of data (Uccellini 1996, pp. 113-114) and as a balance point of a data set. Typically, both interpretations of the mean are ignored in elementary school and middle school curricula. They are replaced with a rote emphasis on calculation using the standard algorithm. When students are…
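The "balance point" interpretation is exactly the statement that the signed deviations from the mean sum to zero, which can be demonstrated with simple arithmetic rather than the standard algorithm alone. A tiny sketch:

```python
def deviations_from_mean(data):
    """The mean balances a data set: the signed deviations sum to zero."""
    mean = sum(data) / len(data)
    return [x - mean for x in data]
```

For the data set [2, 3, 7] the mean is 4 and the deviations are [-2, -1, 3]: the total "pull" below the mean equals the total pull above it, so the data balance at 4.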
NASA Astrophysics Data System (ADS)
Shakerin, Said
2013-12-01
The ordinary 12-oz beverage cans in the figures below are not held up with any props or glue. The bottom of such cans is stepped at its circumference for better stacking. When this kind of can is tilted, as shown in Fig. 1, the outside corners of the step touch the surface beneath, providing an effective contact about 1 cm wide. Because the contact is relatively wide and the geometry is symmetrical, it is easy to balance an empty can by simply adding an appropriate amount of water so that the overall center of mass is located directly above the contact. In fact, any amount of water between about 40 and 210 mL will work. A computational animation of this trick by Sijia Liang and Bruce Atwood that shows center of mass as a function of amount of added water is available at http://demonstrations.wolfram.com. Once there, search "balancing can."
NASA Technical Reports Server (NTRS)
Huguet, L
1921-01-01
The author argues that the center of gravity has a preponderant influence on the longitudinal stability of an airplane in flight, but that manufacturers, although aware of this influence, are still content to apply empirical rules to the balancing of their airplanes instead of conducting wind tunnel tests. The author examines the following points: 1) the longitudinal stability, in flight, of a glider with coinciding centers; 2) the influence exercised on the stability of flight by the position of the axis of thrust with respect to the center of gravity and the whole of the glider; 3) the stability on the ground before taking off, and the influence of the position of the landing gear; 4) the influence of the elements of the glider on the balance, the possibility of sometimes correcting defective balance, and the valuable information given on this point by wind tunnel tests; 5) a brief examination of the equilibrium of power in horizontal flight, where the conditions of stability peculiar to this kind of flight are added to previously existing conditions of the stability of the glider and interfere in fixing the safety limits of certain evolutions.
A data-parallel algorithm for three-dimensional Delaunay triangulation and its implementation
Teng, Y.A.; Sullivan, F.; Beichl, I.; Puppo, E.
1993-12-31
In this paper, the authors present a parallel algorithm for constructing the Delaunay triangulation of a set of vertices in three-dimensional space. The algorithm achieves a high degree of parallelism by starting the construction from every vertex and expanding over all open faces thereafter. In the expansion of open faces, the search is made faster by using a bucketing technique. The algorithm is designed under a data-parallel paradigm. It uses segmented list structures and virtual processing for load-balancing. As a result, the algorithm achieves a fast running time and good scalability over a wide range of problem sizes and machine sizes. They also incorporate a topological check to eliminate inconsistencies due to degeneracies and numerical error. The algorithm is implemented on Connection Machines CM-2 and CM-5, and experimental results are presented.
Grande, J A; Andújar, J M; Aroba, J; de la Torre, M L; Beltrán, R
2005-04-01
In the present work, Acid Mine Drainage (AMD) processes in the Chorrito Stream, which flows into the Cobica River (Iberian Pyrite Belt, Southwest Spain) are characterized by means of clustering techniques based on fuzzy logic. Also, pH behavior in contrast to precipitation is clearly explained, proving that the influence of rainfall inputs on the acidity and, as a result, on the metal load of a riverbed undergoing AMD processes highly depends on the moment when it occurs. In general, the riverbed dynamic behavior is the response to the sum of instant stimuli produced by isolated rainfall, the seasonal memory depending on the moment of the target hydrological year and, finally, the own inertia of the river basin, as a result of an accumulation process caused by age-long mining activity. PMID:15798799
Elastomeric load sharing device
NASA Technical Reports Server (NTRS)
Isabelle, Charles J. (Inventor); Kish, Jules G. (Inventor); Stone, Robert A. (Inventor)
1992-01-01
An elastomeric load sharing device, interposed in combination between a driven gear and a central drive shaft to facilitate balanced torque distribution in split power transmission systems, includes a cylindrical elastomeric bearing and a plurality of elastomeric bearing pads. The elastomeric bearing and bearing pads comprise one or more layers, each layer including an elastomer having a metal backing strip secured thereto. The elastomeric bearing is configured to have a high radial stiffness and a low torsional stiffness and is operative to radially center the driven gear and to minimize torque transfer through the elastomeric bearing. The bearing pads are configured to have a low radial and torsional stiffness and a high axial stiffness and are operative to compressively transmit torque from the driven gear to the drive shaft. The elastomeric load sharing device has spring rates that compensate for mechanical deviations in the gear train assembly to provide balanced torque distribution between complementary load paths of split power transmission systems.
NASA Astrophysics Data System (ADS)
Dilek, Murat
Distribution system analysis and design has experienced a gradual development over the past three decades. Procedures that were once loosely assembled and largely ad hoc have been progressing toward well-organized methods. The increasing power of computers now allows for managing the large volumes of data and overcoming other obstacles inherent in distribution system studies. A variety of sophisticated optimization methods, which were impossible to conduct in the past, have been developed and successfully applied to distribution systems. Among the many procedures that deal with making decisions about the state and better operation of a distribution system, two decision support procedures are addressed in this study: phase balancing and phase prediction. The former recommends re-phasing of single- and double-phase laterals in a radial distribution system in order to improve circuit loss while also maintaining or improving imbalances at various balance point locations. Phase balancing calculations are based on circuit loss information and current magnitudes that are calculated from a power flow solution. The phase balancing algorithm is designed to handle time-varying loads when evaluating phase moves that will result in improved circuit losses over all load points. Applied to radial distribution systems, the phase prediction algorithm attempts to predict the phases of single- and/or double-phase laterals that have no phasing information previously recorded by the electric utility. In such an attempt, it uses available customer data and kW/kVar measurements taken at various locations in the system. It is shown that phase balancing is a special case of phase prediction. Building on the phase balancing and phase prediction design studies, this work introduces the concept of integrated design, an approach for coordinating the effects of various design calculations. Integrated design considers using results of multiple design applications rather than employing a single application for a
NASA Technical Reports Server (NTRS)
Duval, R. W.; Bahrami, M.
1985-01-01
The Rotor Systems Research Aircraft uses load cells to isolate the rotor/transmission system from the fuselage. A mathematical model relating applied rotor loads and inertial loads of the rotor/transmission system to the load cell response is required to allow the load cells to be used to estimate rotor loads from flight data. Such a model is derived analytically by applying a force and moment balance to the isolated rotor/transmission system. The model is tested by comparing its estimated values of applied rotor loads with measured values obtained from a ground-based shake test. Discrepancies in the comparison are used to isolate sources of unmodeled external loads. Once the structure of the mathematical model has been validated by comparison with experimental data, the parameters must be identified. Since the parameters may vary with flight condition, it is desirable to identify them directly from the flight data. A Maximum Likelihood identification algorithm is derived for this purpose and tested using a computer simulation of load cell data. The identification is found to converge within 10 samples. The rapid convergence facilitates tracking of time-varying parameters of the load cell model in flight.
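With zero-mean Gaussian measurement noise, maximum likelihood estimation of a linear model's parameters reduces to least squares. The sketch below fits a single hypothetical gain k in y = k·u from a batch of samples; the actual flight model is a multi-parameter force-and-moment balance, so this only illustrates the form of the estimator:

```python
def identify_gain(inputs, outputs):
    """Least-squares estimate of a single gain k in y = k * u.
    With zero-mean Gaussian measurement noise this coincides with
    the maximum likelihood estimate."""
    num = sum(u * y for u, y in zip(inputs, outputs))
    den = sum(u * u for u in inputs)
    return num / den
```

Fed noise-free simulated data the estimator recovers the gain exactly, and with noisy data it converges quickly as samples accumulate, consistent with the rapid convergence reported above.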
An Energy-Efficient, Application-Oriented Control Algorithm for MAC Protocols in WSN
NASA Astrophysics Data System (ADS)
Li, Deliang; Peng, Fei; Qian, Depei
Energy efficiency has been a main concern in wireless sensor networks, where the Medium Access Control (MAC) protocol plays an important role. However, current MAC protocols designed for energy saving have seldom considered multiple applications coexisting in a WSN with varying traffic load dynamics and different QoS requirements. In this paper, we propose an adaptive control algorithm at the MAC layer to improve energy efficiency. We focus on the tradeoff between collisions and control overhead as a reflection of traffic load, and propose to balance this tradeoff under the constraints of the QoS options. We integrate the algorithm into S-MAC and verify it on the NS-2 platform. The results demonstrate that the algorithm achieves observable improvement in energy performance while meeting the QoS requirements of different coexisting applications, in comparison with S-MAC.
Spletzer, Barry L.
2001-01-01
A load cell combines the outputs of a plurality of strain gauges to measure components of an applied load. Combination of strain gauge outputs allows measurement of any of six load components without requiring complex machining or mechanical linkages to isolate load components. An example six axis load cell produces six independent analog outputs which can be combined to determine any one of the six general load components.
Spletzer, B.L.
1998-12-15
A load cell combines the outputs of a plurality of strain gauges to measure components of an applied load. Combination of strain gauge outputs allows measurement of any of six load components without requiring complex machining or mechanical linkages to isolate load components. An example six axis load cell produces six independent analog outputs, each directly proportional to one of the six general load components. 16 figs.
Spletzer, Barry L.
1998-01-01
A load cell combines the outputs of a plurality of strain gauges to measure components of an applied load. Combination of strain gauge outputs allows measurement of any of six load components without requiring complex machining or mechanical linkages to isolate load components. An example six axis load cell produces six independent analog outputs, each directly proportional to one of the six general load components.
Strain gage balances and buffet gages
NASA Technical Reports Server (NTRS)
Ferris, A. T.
1983-01-01
One-piece strain gage force balances were developed for use in the National Transonic Facility (NTF). This was accomplished by studying the effects of the cryogenic environment on materials, strain gages, cements, solders, and moisture proofing agents, and selecting those that minimized strain gage output changes due to temperature. In addition, because of the higher loads that may be imposed by the NTF, these balances are designed to carry a larger load for a given diameter than conventional balances. Full cryogenic calibrations were accomplished, and wind tunnel results that were obtained from the Langley 0-3-Meter Transonic Cryogenic Tunnel were used to verify laboratory test results.
Liesen, R.J.; Strand, R.K.; Pedersen, C.O.
1998-10-01
Two new methods for calculating cooling loads have just been introduced. The first algorithm, called the heat balance (HB) method, is a complete formulation of fundamental heat balance principles. The second is called the radiant time series (RTS) method. While based on the HB method, the RTS method is an approximate procedure that separates some of the processes to better show the influence of individual heat gain components. In the HB method, all of the heat transfer mechanisms participate in three simultaneous heat balances: the balance on the outside face of all the building elements that enclose the space, the balance on the inside face of the building elements, and the balance between the surfaces inside the space and the zone air. The focus of this paper is on the second heat balance. It has been customary to define a radiative/convective split for the heat introduced into a zone from such sources as equipment, lights, people, etc. The radiative part is then distributed over the surfaces within the zone in some prescribed manner, and the convective part is assumed to go immediately into the air. Simplified techniques simply cannot accurately portray the complex interaction of building surfaces, so previously used load calculation procedures were not up to the task of analyzing the effect of internal load radiant/convective split variation. This paper will present an investigation of the influence of the radiative/convective split on cooling loads obtained using the heat balance procedure. It will begin with an overview of the model used for a heat balance procedure and then present an exhaustive case study of the effects of changing the mode split on load calculations for Wedge 1 of the Pentagon building.
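The customary radiative/convective split described above can be sketched as follows; distributing the radiant part over surfaces in proportion to area is one common simple choice, and the function and its arguments are illustrative rather than the paper's formulation:

```python
def split_internal_gain(gain_w, radiant_frac, surface_areas):
    """Split an internal heat gain (W) into a convective part, assumed to
    go immediately into the zone air, and a radiant part distributed over
    the enclosing surfaces in proportion to their areas (m^2)."""
    radiant = gain_w * radiant_frac
    convective = gain_w - radiant
    total_area = sum(surface_areas.values())
    per_surface = {name: radiant * area / total_area
                   for name, area in surface_areas.items()}
    return convective, per_surface
```

In the full heat balance procedure, the per-surface radiant gains then enter the inside-face surface balances, while the convective part enters the zone air balance directly.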
Alternating direction method for balanced image restoration.
Xie, Shoulie; Rahardja, Susanto
2012-11-01
This paper presents an efficient algorithm for solving a balanced regularization problem in frame-based image restoration. The balanced regularization is usually formulated as a minimization problem involving an ℓ2 data-fidelity term, an ℓ1 regularizer on the sparsity of frame coefficients, and a penalty on the distance of the sparse frame coefficients to the range of the frame operator. In image restoration, the balanced regularization approach bridges the synthesis-based and analysis-based approaches, and balances the fidelity, sparsity, and smoothness of the solution. Our proposed algorithm for solving the balanced optimization problem is based on a variable splitting strategy and the classical alternating direction method. This paper shows that the proposed algorithm is fast and efficient in solving standard image restoration with balanced regularization. More precisely, a regularized version of the Hessian matrix of the ℓ2 data-fidelity term is involved, and by exploiting the related fast tight Parseval frame and the special structures of the observation matrices, the regularized Hessian matrix can be handled quite efficiently in frame-based standard image restoration applications, such as circular deconvolution in image deblurring and missing samples in image inpainting. Numerical simulations illustrate the efficiency of our proposed algorithm in frame-based image restoration with balanced regularization. PMID:22752137
Static calibration of the RSRA active-isolator rotor balance system
NASA Technical Reports Server (NTRS)
Acree, C. W., Jr.
1987-01-01
The Rotor Systems Research Aircraft (RSRA) active-isolator system is designed to reduce rotor vibrations transmitted to the airframe and to simultaneously measure all six forces and moments generated by the rotor. These loads are measured by using a combination of load cells, strain gages, and hydropneumatic active isolators with built-in pressure gages. The first static calibration of the complete active-isolator rotor balance system was performed in 1983 to verify its load-measurement capabilities. Analysis of the data included the use of multiple linear regressions to determine calibration matrices for different data sets and a hysteresis-removal algorithm to estimate in-flight measurement errors. Results showed that the active-isolator system can fulfill most performance predictions. The results also suggested several possible improvements to the system.
Beard, R.A.
1990-03-01
The purpose of this thesis is to explore the methods used to parallelize NP-complete problems and the degree of improvement that can be realized using a distributed parallel processor to solve these combinatoric problems. Common NP-complete problem characteristics such as a priori reductions, use of partial-state information, and inhomogeneous searches are identified and studied. The set covering problem (SCP) is implemented for this research because many applications such as information retrieval, task scheduling, and VLSI expression simplification can be structured as an SCP problem. In addition, its generic NP-complete common characteristics are well documented and a parallel implementation has not been reported. Parallel programming design techniques involve decomposing the problem and developing the parallel algorithms. The major components of a parallel solution are developed in a four-phase process. First, a meta-level design is accomplished using an appropriate design language such as UNITY. Then, the UNITY design is transformed into an algorithm and implementation specific to a distributed architecture. Finally, a complexity analysis of the algorithm is performed. The a priori reductions are divide-and-conquer algorithms, whereas the search for the optimal set cover is accomplished with a branch-and-bound algorithm. The search utilizes a global best cost maintained at a central location for distribution to all processors. Three methods of load balancing are implemented and studied: coarse grain with static allocation of the search space, fine grain with dynamic allocation, and dynamic load balancing.
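A serial sketch of the branch-and-bound set covering search with a global best cost (here a shared Python list standing in for the bound maintained at a central location) might look like this; the code is illustrative, not the thesis's distributed implementation:

```python
def set_cover_bnb(universe, subsets):
    """Branch-and-bound search for a minimum-cardinality set cover.
    `best` plays the role of the global best cost: any branch that
    cannot beat it is pruned, mirroring the shared bound distributed
    to all processors in the parallel version."""
    best = [len(subsets) + 1, None]          # [global best cost, best cover]

    def branch(covered, chosen, start):
        if covered == universe:
            if len(chosen) < best[0]:
                best[0], best[1] = len(chosen), list(chosen)
            return
        if len(chosen) + 1 >= best[0]:       # bound: even one more set can't win
            return
        for i in range(start, len(subsets)):
            if subsets[i] - covered:         # only branch on sets adding coverage
                branch(covered | subsets[i], chosen + [i], i + 1)

    branch(frozenset(), [], 0)
    return best[1]
```

In the coarse-grain parallel variant, disjoint regions of this search tree are statically assigned to processors, each of which reads and updates the shared `best` cost.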
NASA Technical Reports Server (NTRS)
Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.
1989-01-01
Several techniques to perform static and dynamic load balancing for vision systems are presented. These techniques are novel in the sense that they capture the computational requirements of a task by examining the data when it is produced. Furthermore, they can be applied to many vision systems because many algorithms in different systems are either the same, or have similar computational characteristics. These techniques are evaluated by applying them on a parallel implementation of the algorithms in a motion estimation system on a hypercube multiprocessor system. The motion estimation system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from different time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters. It is shown that the performance gains when these data decomposition and load balancing techniques are used are significant and the overhead of using these techniques is minimal.
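A minimal example of static load balancing driven by data-dependent cost estimates is the greedy largest-first assignment below; this is a generic sketch, not the hypercube implementation described above:

```python
import heapq

def balance_tasks(costs, n_procs):
    """Greedy static load balancing: assign each task (with a
    data-dependent cost estimate) to the currently least-loaded
    processor, considering the most expensive tasks first."""
    heap = [(0.0, p) for p in range(n_procs)]    # (current load, processor id)
    heapq.heapify(heap)
    assignment = {}
    for task in sorted(range(len(costs)), key=lambda t: -costs[t]):
        load, p = heapq.heappop(heap)            # least-loaded processor
        assignment[task] = p
        heapq.heappush(heap, (load + costs[task], p))
    return assignment
```

The dynamic variants evaluated in the paper instead migrate work at run time as the measured costs of image regions become known.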
Strain Gauge Balance Calibration and Data Reduction at NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Ferris, A. T. Judy
1999-01-01
This paper will cover the standard force balance calibration and data reduction techniques used at Langley Research Center. It will cover balance axes definition, balance type, calibration instrumentation, traceability of standards to NIST, calibration loading procedures, balance calibration mathematical model, calibration data reduction techniques, balance accuracy reporting, and calibration frequency.
Dynamic Layered Dual-Cluster Heads Routing Algorithm Based on Krill Herd Optimization in UWSNs.
Jiang, Peng; Feng, Yang; Wu, Feng; Yu, Shanen; Xu, Huan
2016-01-01
Aimed at the limited energy of nodes in underwater wireless sensor networks (UWSNs) and the heavy load of cluster heads in clustering routing algorithms, this paper proposes a dynamic layered dual-cluster routing algorithm based on Krill Herd optimization in UWSNs. Cluster size is first decided by the distance between the cluster head nodes and the sink node, and a dynamic layered mechanism is established to avoid the repeated selection of the same cluster head nodes. The Krill Herd optimization algorithm selects the optimal and second-optimal cluster heads, and its Lagrange model directs nodes toward high-likelihood areas, ultimately realizing the functions of data collection and data transmission. The simulation results show that the proposed algorithm can effectively decrease cluster energy consumption, balance the network energy consumption, and prolong the network lifetime. PMID:27589744
A swarm intelligence based memetic algorithm for task allocation in distributed systems
NASA Astrophysics Data System (ADS)
Sarvizadeh, Raheleh; Haghi Kashani, Mostafa
2011-12-01
This paper proposes a Swarm Intelligence based Memetic algorithm for task allocation and scheduling in distributed systems. Task scheduling in distributed systems is known to be an NP-complete problem. Hence, many genetic algorithms have been proposed to search for optimal solutions over the entire solution space. However, these existing approaches scan the entire solution space without considering techniques that could reduce the complexity of the optimization, and spending too much time on scheduling is their main shortcoming. Therefore, in this paper a memetic algorithm is used to cope with this shortcoming. To achieve efficient load balancing, Bee Colony Optimization (BCO) is applied as the local search in the proposed memetic algorithm. Extended experimental results demonstrate that the proposed method outperforms the existing GA-based method in terms of CPU utilization.
A swarm intelligence based memetic algorithm for task allocation in distributed systems
NASA Astrophysics Data System (ADS)
Sarvizadeh, Raheleh; Haghi Kashani, Mostafa
2012-01-01
This paper proposes a Swarm Intelligence based Memetic algorithm for task allocation and scheduling in distributed systems. Task scheduling in distributed systems is known to be an NP-complete problem. Hence, many genetic algorithms have been proposed to search for optimal solutions over the entire solution space. However, these existing approaches scan the entire solution space without considering techniques that could reduce the complexity of the optimization, and spending too much time on scheduling is their main shortcoming. Therefore, in this paper a memetic algorithm is used to cope with this shortcoming. To achieve efficient load balancing, Bee Colony Optimization (BCO) is applied as the local search in the proposed memetic algorithm. Extended experimental results demonstrate that the proposed method outperforms the existing GA-based method in terms of CPU utilization.
Structural dynamics payload loads estimates
NASA Technical Reports Server (NTRS)
Engels, R. C.
1982-01-01
Methods for the prediction of loads on large space structures are discussed. Existing approaches to the problem of loads calculation are surveyed. A full-scale version of an alternate numerical integration technique to solve the response part of a load cycle is presented, and a set of shortcut versions of the algorithm is developed. The implementation of these techniques using the software package developed is discussed.
Balanced input-output assignment
NASA Technical Reports Server (NTRS)
Gawronski, W.; Hadaegh, F. Y.
1989-01-01
Actuator/sensor locations and balanced representations of linear systems are considered for a given set of controllability and observability grammians. The case of equally controlled and observed states is given special attention. The assignability of grammians is examined, and the conditions for their existence are presented, along with several algorithms for their determination. Although an arbitrary positive semidefinite matrix is not always assignable, the identity grammian is shown to be always assignable. The results are extended to the case of flexible structures.
Parallel Computing Strategies for Irregular Algorithms
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Oliker, Leonid; Shan, Hongzhang; Biegel, Bryan (Technical Monitor)
2002-01-01
Parallel computing promises several orders of magnitude increase in our ability to solve realistic computationally-intensive problems, but relies on their efficient mapping and execution on large-scale multiprocessor architectures. Unfortunately, many important applications are irregular and dynamic in nature, making their effective parallel implementation a daunting task. Moreover, with the proliferation of parallel architectures and programming paradigms, the typical scientist is faced with a plethora of questions that must be answered in order to obtain an acceptable parallel implementation of the solution algorithm. In this paper, we consider three representative irregular applications: unstructured remeshing, sparse matrix computations, and N-body problems, and parallelize them using various popular programming paradigms on a wide spectrum of computer platforms ranging from state-of-the-art supercomputers to PC clusters. We present the underlying problems, the solution algorithms, and the parallel implementation strategies. Smart load-balancing, partitioning, and ordering techniques are used to enhance parallel performance. Overall results demonstrate the complexity of efficiently parallelizing irregular algorithms.
Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi
2014-01-01
Bat algorithm (BA) is a novel stochastic global optimization algorithm. The cloud model is an effective tool for transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and the excellent characteristics of the cloud model for representing uncertain knowledge, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling the echolocation model based on the living and preying characteristics of bats, utilizing the transformation theory of the cloud model to depict the qualitative concept "bats approach their prey." Furthermore, a Lévy flight mode and a population information communication mechanism are introduced to balance exploration and exploitation. The simulation results show that the cloud model bat algorithm performs well on function optimization. PMID:24967425
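The Lévy flight mode mentioned above is commonly implemented with Mantegna's algorithm, which draws heavy-tailed step lengths; this generic sketch assumes that standard variant, since the paper's exact formulation is not reproduced here:

```python
import math
import random

def levy_step(beta=1.5):
    """One Lévy-flight step length via Mantegna's algorithm: mostly
    short moves (exploitation) with occasional long jumps (exploration),
    following a heavy-tailed distribution with exponent `beta`."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)
```

Scaling such a step by the distance to the current best solution is the usual way Lévy flights are grafted onto swarm algorithms of this kind.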
Strain-gage applications in wind tunnel balances
NASA Astrophysics Data System (ADS)
Mole, P. J.
1990-10-01
Six-component balances used in wind tunnels for precision measurements of air loads on scale models of aircraft and missiles are reviewed. A beam moment-type balance, two-shell balance consisting of an outer shell and inner rod, and air-flow balances used in STOL aircraft configurations are described. The design process, fabrication, gaging, single-gage procedure, and calibration of balances are outlined, and emphasis is placed on computer stress programs and data-reduction computer programs. It is pointed out that these wind-tunnel balances are used in applications for full-scale flight vehicles. Attention is given to a standard two-shell booster balance and an adaptation of a wind-tunnel balance employed to measure the simulated distributed launch loads of a payload in the Space Shuttle.
A new distributed systems scheduling algorithm: a swarm intelligence approach
NASA Astrophysics Data System (ADS)
Haghi Kashani, Mostafa; Sarvizadeh, Raheleh; Jameii, Mahdi
2011-12-01
The scheduling problem in distributed systems is known as an NP-complete problem, and methods based on heuristic or metaheuristic search have been proposed to obtain optimal and suboptimal solutions. Task scheduling is a key factor for distributed systems to gain better performance. In this paper, an efficient method based on a memetic algorithm is developed to solve the problem of distributed systems scheduling. To achieve efficient load balancing, Artificial Bee Colony (ABC) is applied as the local search in the proposed memetic algorithm. The proposed method is compared to an existing memetic-based approach in which a Learning Automata method is used as the local search. The results demonstrate that the proposed method outperforms the above-mentioned method in terms of communication cost.
Technology Transfer Automated Retrieval System (TEKTRAN)
The suspended load of rivers and streams consists of the sediments that are kept in the water column by the upward components of the flow velocity. Suspended load may be divided into cohesive and non-cohesive loads which are primarily discriminated by sediment particle size. Non-cohesive sediment ...
Ocean Tide Loading Computation
NASA Technical Reports Server (NTRS)
Agnew, Duncan Carr
2005-01-01
September 15, 2003 through May 15, 2005. This grant funds the maintenance, updating, and distribution of programs for computing ocean tide loading, to enable the corrections for such loading to be more widely applied in space-geodetic and gravity measurements. These programs, developed under funding from the CDP and DOSE programs, incorporate the most recent global tidal models developed from TOPEX/Poseidon data, as well as local tide models for regions around North America; the design of the algorithm and software makes it straightforward to combine local and global models.
Efficient algorithms for processing remotely sensed imagery
NASA Astrophysics Data System (ADS)
Zhang, Zengyan
At regional and global scales, satellite-based sensors are the primary source of information to study the Earth's environment, as they provide the needed dynamic temporal view of the Earth's surface. We focus this dissertation on the development of efficient methodologies and algorithms to generate custom tailored data products using Global Area Coverage (GAC) data from the NOAA Advanced Very High Resolution Radiometer (AVHRR) sensor. Furthermore, we show the retrieval of the global Bidirectional Reflectance Distribution Function (BRDF) and albedo of the Earth land surface using Pathfinder AVHRR Land (PAL) data sets which can be generated using our system. These are the first algorithms to retrieve such land surface properties on a global scale. We start by describing a software system Kronos, which allows the generation of a rich set of data products that can be easily specified through a Java interface by scientists wishing to carry out Earth system modeling or analysis. Kronos is based on a flexible methodology and consists of four major components: ingest and preprocessing, indexing and storage, search and processing engine, and a Java interface. Then efficient algorithms for custom-tailored AVHRR data products generation are developed, implemented, and tested on the UMIACS high performance computer systems. A major challenge lies in the efficient processing, storage, and archival of the available large amounts of data in such a way that pertinent information can be extracted with ease. Finally high performance algorithms are developed to retrieve global BRDF and albedo in the red and near-infrared wavelengths using three widely different models with multiangular, multi-temporal, and multi-band PAL data. Given the volume of data involved (about 27 GBytes), I/O time as well as the overall computational complexity are minimized. The algorithms access the global data once, followed by a redistribution of land pixel data to balance the computational loads among the
Crane, N K; Parsons, I D; Hjelmstad, K D
2002-03-21
Adaptive mesh refinement selectively subdivides the elements of a coarse user supplied mesh to produce a fine mesh with reduced discretization error. Effective use of adaptive mesh refinement coupled with an a posteriori error estimator can produce a mesh that solves a problem to a given discretization error using far fewer elements than uniform refinement. A geometric multigrid solver uses increasingly finer discretizations of the same geometry to produce a very fast and numerically scalable solution to a set of linear equations. Adaptive mesh refinement is a natural method for creating the different meshes required by the multigrid solver. This paper describes the implementation of a scalable adaptive multigrid method on a distributed memory parallel computer. Results are presented that demonstrate the parallel performance of the methodology by solving a linear elastic rocket fuel deformation problem on an SGI Origin 3000. Two challenges must be met when implementing adaptive multigrid algorithms on massively parallel computing platforms. First, although the fine mesh for which the solution is desired may be large and scaled to the number of processors, the multigrid algorithm must also operate on much smaller fixed-size data sets on the coarse levels. Second, the mesh must be repartitioned as it is adapted to maintain good load balancing. In an adaptive multigrid algorithm, separate mesh levels may require separate partitioning, further complicating the load balance problem. This paper shows that, when the proper optimizations are made, parallel adaptive multigrid algorithms perform well on machines with several hundreds of processors.
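The multigrid idea of cycling through increasingly coarse discretizations of the same problem can be sketched for a 1D Poisson equation; this serial toy V-cycle (weighted-Jacobi smoothing, full-weighting restriction, linear prolongation) only illustrates the solver structure, not the paper's parallel adaptive implementation:

```python
import math

def jacobi(u, f, h, sweeps, w=2.0 / 3.0):
    """Weighted-Jacobi smoothing for -u'' = f with zero Dirichlet boundaries."""
    n = len(u)
    for _ in range(sweeps):
        new = u[:]
        for i in range(1, n - 1):
            new[i] = (1 - w) * u[i] + w * 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
        u = new
    return u

def v_cycle(u, f, h):
    """One multigrid V-cycle on nested 1D grids (n = 2^k + 1 points)."""
    n = len(u)
    if n <= 3:
        return jacobi(u, f, h, 50)                 # coarsest grid: smooth out
    u = jacobi(u, f, h, 3)                         # pre-smooth
    r = [0.0] * n                                  # residual r = f - A u
    for i in range(1, n - 1):
        r[i] = f[i] - (2 * u[i] - u[i - 1] - u[i + 1]) / (h * h)
    nc = (n + 1) // 2                              # full-weighting restriction
    rc = [0.0] * nc
    for i in range(1, nc - 1):
        rc[i] = 0.25 * r[2 * i - 1] + 0.5 * r[2 * i] + 0.25 * r[2 * i + 1]
    ec = v_cycle([0.0] * nc, rc, 2 * h)            # recurse on coarse grid
    for i in range(nc):                            # linear prolongation
        u[2 * i] += ec[i]
    for i in range(1, n - 1, 2):
        u[i] += 0.5 * (ec[i // 2] + ec[i // 2 + 1])
    return jacobi(u, f, h, 3)                      # post-smooth

# Demo: solve -u'' = pi^2 sin(pi x) on [0, 1]; the exact solution is sin(pi x).
n, h = 17, 1.0 / 16.0
f = [math.pi ** 2 * math.sin(math.pi * i * h) for i in range(n)]
u = [0.0] * n
for _ in range(15):
    u = v_cycle(u, f, h)
```

The coarse levels here are tiny fixed-size problems regardless of the fine-grid size, which is exactly the property that makes load balancing the coarse levels of a parallel multigrid solver challenging.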
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem-solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
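The basic concepts (selection, crossover, mutation over generations) can be condensed into a minimal generational GA; this is a generic textbook sketch, not the software tool described above:

```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=30, generations=60,
                      p_mut=0.02, seed=0):
    """Minimal generational GA: tournament selection, one-point crossover,
    bit-flip mutation. Maximizes `fitness` over fixed-length bit strings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)                  # survival of the fitter
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bits)         # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if rng.random() < p_mut else g for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Demo on OneMax: the all-ones string is the optimum, and `sum` is its fitness.
best = genetic_algorithm(sum)
```

The per-individual fitness evaluations are independent, which is the source of the high parallelism the abstract mentions.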
Strain-gage balance calibration of a magnetic suspension and balance system
NASA Astrophysics Data System (ADS)
Roberts, Paul W.; Tcheng, Ping
A load calibration of the NASA 13-in magnetic suspension and balance system (MSBS) is described. The calibration procedure was originally intended to establish the empirical relationship between the coil currents and the external loads (forces and moments) applied to a magnetically suspended calibrator. However, it was discovered that the performance of a strain-gage balance is not affected when subjected to the magnetic environment of the MSBS. The use of strain-gage balances greatly reduces the effort required to perform a current-vs.-load calibration as external loads can be directly inferred from the balance outputs while a calibrator is suspended in MSBS. It is conceivable that in the future such a calibration could become unnecessary, since an even more important application for the use of a strain-gage balance in MSBS environment is the acquisition of precision aerodynamic force and moment data by telemetering the balance outputs from a suspended model/core/balance during wind tunnel tests.
Strain-gage balance calibration of a magnetic suspension and balance system
NASA Technical Reports Server (NTRS)
Roberts, Paul W.; Tcheng, Ping
1987-01-01
A load calibration of the NASA 13-in magnetic suspension and balance system (MSBS) is described. The calibration procedure was originally intended to establish the empirical relationship between the coil currents and the external loads (forces and moments) applied to a magnetically suspended calibrator. However, it was discovered that the performance of a strain-gage balance is not affected when subjected to the magnetic environment of the MSBS. The use of strain-gage balances greatly reduces the effort required to perform a current-vs.-load calibration as external loads can be directly inferred from the balance outputs while a calibrator is suspended in MSBS. It is conceivable that in the future such a calibration could become unnecessary, since an even more important application for the use of a strain-gage balance in MSBS environment is the acquisition of precision aerodynamic force and moment data by telemetering the balance outputs from a suspended model/core/balance during wind tunnel tests.
Parallel global optimization with the particle swarm algorithm
Schutte, J. F.; Reinbolt, J. A.; Fregly, B. J.; Haftka, R. T.; George, A. D.
2007-01-01
Present-day engineering optimization problems often impose large computational demands, resulting in long solution times even on a modern high-end processor. To obtain enhanced computational throughput and global search capability, we detail the coarse-grained parallelization of an increasingly popular global search method, the particle swarm optimization (PSO) algorithm. Parallel PSO performance was evaluated using two categories of optimization problems possessing multiple local minima—large-scale analytical test problems with computationally cheap function evaluations and medium-scale biomechanical system identification problems with computationally expensive function evaluations. For load-balanced analytical test problems formulated using 128 design variables, speedup was close to ideal and parallel efficiency above 95% for up to 32 nodes on a Beowulf cluster. In contrast, for load-imbalanced biomechanical system identification problems with 12 design variables, speedup plateaued and parallel efficiency decreased almost linearly with increasing number of nodes. The primary factor affecting parallel performance was the synchronization requirement of the parallel algorithm, which dictated that each iteration must wait for completion of the slowest fitness evaluation. When the analytical problems were solved using a fixed number of swarm iterations, a single population of 128 particles produced a better convergence rate than did multiple independent runs performed using sub-populations (8 runs with 16 particles, 4 runs with 32 particles, or 2 runs with 64 particles). These results suggest that (1) parallel PSO exhibits excellent parallel performance under load-balanced conditions, (2) an asynchronous implementation would be valuable for real-life problems subject to load imbalance, and (3) larger population sizes should be considered when multiple processors are available. PMID:17891226
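The synchronous master-worker scheme described above (every iteration farms out one fitness evaluation per particle and waits for the slowest to return) can be sketched as follows; the sphere objective, pool size, and PSO constants are illustrative assumptions, not values from the paper:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def sphere(x):
    """Illustrative objective; global minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def parallel_pso(f, dim=4, particles=16, iters=200, workers=4, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    best_i = min(range(particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[best_i][:], pbest_val[best_i]
    w, c1, c2 = 0.7, 1.5, 1.5  # assumed inertia/acceleration constants
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for _ in range(iters):
            # Synchronization point: the iteration cannot proceed until the
            # slowest fitness evaluation finishes, as the abstract notes.
            vals = list(pool.map(f, pos))
            for i, v in enumerate(vals):
                if v < pbest_val[i]:
                    pbest_val[i], pbest[i] = v, pos[i][:]
                    if v < gbest_val:
                        gbest_val, gbest = v, pos[i][:]
            for i in range(particles):
                for d in range(dim):
                    r1, r2 = rng.random(), rng.random()
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])
                                 + c2 * r2 * (gbest[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
    return gbest_val

best = parallel_pso(sphere)
```

With cheap evaluations the threads mainly illustrate the per-iteration barrier; for expensive biomechanical models the same structure would be carried by a process pool or MPI ranks.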
Coupled cluster algorithms for networks of shared memory parallel processors
NASA Astrophysics Data System (ADS)
Bentz, Jonathan L.; Olson, Ryan M.; Gordon, Mark S.; Schmidt, Michael W.; Kendall, Ricky A.
2007-05-01
As the popularity of using SMP systems as the building blocks for high performance supercomputers increases, so too increases the need for applications that can utilize the multiple levels of parallelism available in clusters of SMPs. This paper presents a dual-layer distributed algorithm, using both shared-memory and distributed-memory techniques to parallelize a very important algorithm (often called the "gold standard") used in computational chemistry, the single and double excitation coupled cluster method with perturbative triples, i.e. CCSD(T). The algorithm is presented within the framework of the GAMESS (General Atomic and Molecular Electronic Structure System) program suite [M.W. Schmidt, K.K. Baldridge, J.A. Boatz, S.T. Elbert, M.S. Gordon, J.J. Jensen, S. Koseki, N. Matsunaga, K.A. Nguyen, S. Su, T.L. Windus, M. Dupuis, J.A. Montgomery, General atomic and molecular electronic structure system, J. Comput. Chem. 14 (1993) 1347-1363] and the Distributed Data Interface (DDI) [M.W. Schmidt, G.D. Fletcher, B.M. Bode, M.S. Gordon, The distributed data interface in GAMESS, Comput. Phys. Comm. 128 (2000) 190]; however, the essential features of the algorithm (data distribution, load-balancing and communication overhead) can be applied to more general computational problems. Timing and performance data for our dual-level algorithm are presented on several large-scale clusters of SMPs.
A simple method for wind tunnel balance calibration including non-linear interaction terms
NASA Astrophysics Data System (ADS)
Ramaswamy, M. A.; Srinivas, T.; Holla, V. S.
The conventional method for calibrating wind tunnel balances to obtain the coupled linear and nonlinear interaction terms requires applying combinations of pure components of the loads to the calibration body while compensating for the deflection of the balance. For a six-component balance, this calls for a complex loading system and an arrangement to translate and tilt the balance support about all three axes. A simple alternative, the least-squares method, is illustrated for a three-component balance. The simplicity arises from the fact that neither application of pure components of the loads nor reorientation of the balance is required. A single load is applied that has various components whose magnitudes can be easily found knowing the orientation of the calibration body under load and the point of application of the load. The coefficients are obtained by using the least-squares-fit approach to match the outputs obtained for various combinations of load.
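The single-combined-load idea lends itself to a small worked sketch: generate synthetic outputs from assumed "true" sensitivities of one bridge (linear terms plus pairwise interaction products), then recover every coefficient with an ordinary least-squares fit, with no pure-component loading required. All numbers here are invented for illustration:

```python
import random

random.seed(0)

def features(n, a, m):
    # Linear terms plus pairwise products (the nonlinear interaction terms)
    # for a three-component load: normal force n, axial force a, moment m.
    return [n, a, m, n * a, n * m, a * m]

# Hypothetical "true" sensitivities of one strain-gage bridge (illustrative only).
true_coeff = [2.0, -0.5, 1.2, 0.03, -0.01, 0.02]

# Apply a single combined load in many orientations; record the output each time.
X, y = [], []
for _ in range(40):
    n, a, m = (random.uniform(-10, 10) for _ in range(3))
    row = features(n, a, m)
    X.append(row)
    y.append(sum(c * v for c, v in zip(true_coeff, row)))

def lstsq(X, y):
    """Solve the normal equations X^T X c = X^T y by Gaussian elimination."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            b[r] -= f * b[col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
    c = [0.0] * k
    for i in reversed(range(k)):
        c[i] = (b[i] - sum(A[i][j] * c[j] for j in range(i + 1, k))) / A[i][i]
    return c

coeff = lstsq(X, y)
```

Because the synthetic outputs are noise-free, the fit recovers the assumed coefficients essentially exactly; with real bridge readings the same fit returns the best coefficients in the least-squares sense.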
NASA Astrophysics Data System (ADS)
Leduhovsky, G. V.; Zhukov, V. P.; Barochkin, E. V.; Zimin, A. P.; Razinkov, A. A.
2015-08-01
The problem of striking material and energy balances from the data received by thermal power plant computerized automation systems from the technical accounting systems, with the accuracy determined by the metrological characteristics of serviceable calibrated instruments, is formulated using the mathematical apparatus of the ridge regression method. A graph-theory-based matrix model of material and energy flows in systems having an intricate structure is proposed, with which the solution of a particular practical problem can be formalized at the stage of constructing the system model. The problem of striking material and energy balances is formulated taking into account the different degrees of trustworthiness with which the initial flow rates of coolants and their thermophysical parameters were determined, as well as process constraints expressed in terms of balance correlations on mass and energy for individual system nodes or for any combination thereof. Analytic and numerical solutions of the problem are proposed for different versions of its statement, differing from each other in the adopted assumptions and considered constraints. It is shown how the procedure for striking material and energy balances from the results of measuring the flows of feed water and steam in the thermal process circuit of a combined heat and power plant affects the calculation accuracy of specific fuel rates for supplying heat and electricity. It has been revealed that the nominal values of indicators and the fuel saving or overexpenditure values associated with these indicators are the most dependent parameters. In calculating these quantities using different balance-striking procedures, an error may arise whose value is comparable to the power plant thermal efficiency margin stipulated by the regulatory-technical documents on using fuel. The study results were used for substantiating the choice of stating the problem of striking material and fuel balances, as well as
Split torque transmission load sharing
NASA Technical Reports Server (NTRS)
Krantz, T. L.; Rashidi, M.; Kish, J. G.
1992-01-01
Split torque transmissions are attractive alternatives to conventional planetary designs for helicopter transmissions. The split torque designs can offer lighter weight and fewer parts but have not been used extensively for lack of experience, especially with obtaining proper load sharing. Two split torque designs that use different load sharing methods have been studied. Precise indexing and alignment of the geartrain to produce acceptable load sharing has been demonstrated. An elastomeric torque splitter that has large torsional compliance and damping produces even better load sharing while reducing dynamic transmission error and noise. However, the elastomeric torque splitter as now configured is not capable over the full range of operating conditions of a fielded system. A thrust balancing load sharing device was evaluated. Friction forces that oppose the motion of the balance mechanism are significant. A static analysis suggests increasing the helix angle of the input pinion of the thrust balancing design. Also, dynamic analysis of this design predicts good load sharing and significant torsional response to accumulative pitch errors of the gears.
Partial Storage Optimization and Load Control Strategy of Cloud Data Centers
Al Nuaimi, Klaithem; Mohamed, Nader; Al Nuaimi, Mariam; Al-Jaroodi, Jameela
2015-01-01
We present a novel approach to solve the cloud storage issues and provide a fast load balancing algorithm. Our approach is based on partitioning and concurrent dual-direction download of the files from multiple cloud nodes. Partitions of the files are saved on the cloud rather than the full files, which provides a good optimization of the cloud storage usage. Only partial replication is used in this algorithm to ensure the reliability and availability of the data. Our focus is to improve the performance and optimize the storage usage by providing the DaaS on the cloud. This algorithm solves the problem of having to fully replicate large data sets, which uses up a lot of precious space on the cloud nodes. Reducing the space needed will help in reducing the cost of providing such space. Moreover, performance is also increased since multiple cloud servers will collaborate to provide the data to the cloud clients in a faster manner. PMID:25973444
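The concurrent dual-direction download the authors describe can be pictured as a byte-range plan: each stored partition is fetched from one replica head-first and from another tail-first, the two streams meeting in the middle. The helper below is an illustrative sketch of such a plan, not the paper's actual protocol:

```python
def dual_direction_plan(size, partitions):
    """Split a file of `size` bytes into `partitions` pieces, and split each
    piece into a forward range (served by one replica) and a backward range
    (served by another, streamed tail-first), meeting in the middle."""
    plan = []
    base, extra = divmod(size, partitions)
    start = 0
    for p in range(partitions):
        length = base + (1 if p < extra else 0)
        mid = start + length // 2
        end = start + length
        plan.append({
            "partition": p,
            "forward": (start, mid),   # node A downloads [start, mid)
            "backward": (mid, end),    # node B downloads [mid, end), reversed
        })
        start = end
    return plan

plan = dual_direction_plan(size=1000, partitions=3)
```

Every byte is covered exactly once, so the client can reassemble the file as soon as both streams of every partition complete.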
ERIC Educational Resources Information Center
Claxton, David B.; Troy, Maridy; Dupree, Sarah
2006-01-01
Most authorities consider balance to be a component of skill-related physical fitness. Balance, however, is directly related to health, especially for older adults. Falls are a leading cause of injury and death among the elderly. Improved balance can help reduce falls and contribute to older people remaining physically active. Balance is a…
ERIC Educational Resources Information Center
White, Richard
2007-01-01
The review by Black and Wiliam of national systems makes clear the complexity of assessment, and identifies important issues. One of these is "balance": balance between local and central responsibilities, balance between the weights given to various purposes of schooling, balance between weights for various functions of assessment, and balance…
Dynamic balance improvement program
NASA Technical Reports Server (NTRS)
Butner, M. F.
1983-01-01
The reduction of residual unbalance in the space shuttle main engine (SSME) high pressure turbopump rotors was addressed. Elastic rotor response to unbalance and balancing requirements, multiplane and in housing balancing, and balance related rotor design considerations were assessed. Recommendations are made for near term improvement of the SSME balancing and for future study and development efforts.
Automatic load sharing in inverter modules
NASA Technical Reports Server (NTRS)
Nagano, S.
1979-01-01
Active feedback loads transistors equally with little power loss. Circuit is suitable for balancing modular inverters in spacecraft, computer power supplies, solar-electric power generators, and electric vehicles. Current-balancing circuit senses differences between collector current for power transistor and average value of load currents for all power transistors. Principle is effective not only in fixed duty-cycle inverters but also in converters operating at variable duty cycles.
Combinatorial Algorithms to Enable Computational Science and Engineering: The CSCAPES Institute
Pothen, Alex
2015-01-16
This final progress report summarizes the work accomplished at the Combinatorial Scientific Computing and Petascale Simulations (CSCAPES) Institute. We developed Zoltan, a parallel mesh partitioning library that made use of accurate hypergraph models to provide load balancing in mesh-based computations. We developed several graph coloring algorithms for computing Jacobian and Hessian matrices and organized them into a software package called ColPack. We developed parallel algorithms for graph coloring and graph matching problems, and also designed multi-scale graph algorithms. Three PhD students graduated, six more are continuing their PhD studies, and four postdoctoral scholars were advised. Six of these students and Fellows have joined DOE Labs (Sandia, Berkeley) as staff scientists or as postdoctoral scientists. We also organized the SIAM Workshop on Combinatorial Scientific Computing (CSC) in 2007, 2009, and 2011 to continue to foster the CSC community.
An Energy-Aware Multipath Routing Algorithm in Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Kim, Moonseong; Jeong, Euihoon; Bang, Young-Cheol; Hwang, Soyoung; Shin, Changsub; Jin, Gwang-Ja; Kim, Bongsoo
One of the major challenges facing the design of a routing protocol for Wireless Sensor Networks (WSNs) is to find the most reliable path between the source and sink node. Furthermore, a routing protocol for WSN should be well aware of sensor limitations. In this paper, we present an energy efficient, scalable, and distributed node disjoint multipath routing algorithm. The proposed algorithm, the Energy-aware Multipath Routing Algorithm (EMRA), adjusts traffic flows via a novel load balancing scheme. EMRA has a higher average node energy efficiency, lower control overhead, and a shorter average delay than those of well-known previous works. Moreover, since EMRA takes into consideration network reliability, it is useful for delivering data in unreliable environments.
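A load-balancing rule in the spirit of EMRA might weight each node-disjoint path by its bottleneck residual energy and split traffic proportionally; the proportional rule and the energy values below are illustrative stand-ins for the paper's actual scheme:

```python
def split_traffic(paths, packets):
    """paths: residual-energy lists, one list per node-disjoint path.
    Allocate `packets` across the paths in proportion to each path's
    bottleneck (minimum) residual energy, using largest-remainder rounding."""
    weights = [min(p) for p in paths]
    total = sum(weights)
    raw = [packets * w / total for w in weights]
    alloc = [int(r) for r in raw]
    # Hand out the remaining packets to the largest fractional parts.
    order = sorted(range(len(raw)), key=lambda i: raw[i] - alloc[i], reverse=True)
    for i in order[: packets - sum(alloc)]:
        alloc[i] += 1
    return alloc

# Three hypothetical disjoint paths; each number is a node's residual energy.
alloc = split_traffic([[9, 5, 8], [7, 6], [4, 9]], packets=100)
```

Routing more of the flow over paths whose weakest node still has energy to spare is what lets such a scheme raise average node energy efficiency rather than exhausting one route.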
Validation of a robotic balance system for investigations in the control of human standing balance.
Luu, Billy L; Huryn, Thomas P; Van der Loos, H F Machiel; Croft, Elizabeth A; Blouin, Jean-Sébastien
2011-08-01
Previous studies have shown that human body sway during standing approximates the mechanics of an inverted pendulum pivoted at the ankle joints. In this study, a robotic balance system incorporating a Stewart platform base was developed to provide a new technique to investigate the neural mechanisms involved in standing balance. The robotic system, programmed with the mechanics of an inverted pendulum, controlled the motion of the body in response to a change in applied ankle torque. The ability of the robotic system to replicate the load properties of standing was validated by comparing the load stiffness generated when subjects balanced their own body to the robot's mechanical load programmed with a low (concentrated-mass model) or high (distributed-mass model) inertia. The results show that static load stiffness was not significantly (p > 0.05) different for standing and the robotic system. Dynamic load stiffness for the robotic system increased with the frequency of sway, as predicted by the mechanics of an inverted pendulum, with the higher inertia being accurately matched to the load properties of the human body. This robotic balance system accurately replicated the physical model of standing and represents a useful tool to simulate the dynamics of a standing person. PMID:21511567
A multiobjective scatter search algorithm for fault-tolerant NoC mapping optimisation
NASA Astrophysics Data System (ADS)
Le, Qianqi; Yang, Guowu; Hung, William N. N.; Zhang, Xinpeng; Fan, Fuyou
2014-08-01
Mapping IP cores to an on-chip network is an important step in Network-on-Chip (NoC) design and affects the performance of NoC systems. A mapping optimisation algorithm and a fault-tolerant mechanism are proposed in this article. The fault-tolerant mechanism and the corresponding routing algorithm can recover NoC communication from switch failures, while preserving high performance. The mapping optimisation algorithm is based on scatter search (SS), which is an intelligent algorithm with a powerful combinatorial search ability. To meet the requests of the NoC mapping application, the standard SS is improved for multiple objective optimisation. This method helps to obtain high-performance mapping layouts. The proposed algorithm was implemented on the Embedded Systems Synthesis Benchmarks Suite (E3S). Experimental results show that this optimisation algorithm achieves low-power consumption, little communication time, balanced link load and high reliability, compared to particle swarm optimisation and genetic algorithm.
An efficient QoS-aware routing algorithm for LEO polar constellations
NASA Astrophysics Data System (ADS)
Tian, Xin; Pham, Khanh; Blasch, Erik; Tian, Zhi; Shen, Dan; Chen, Genshe
2013-05-01
In this work, a Quality of Service (QoS)-aware routing (QAR) algorithm is developed for Low-Earth Orbit (LEO) polar constellations. LEO polar orbits are the only type of satellite constellations where inter-plane inter-satellite links (ISLs) are implemented in real world. The QAR algorithm exploits features of the topology of the LEO satellite constellation, which makes it more efficient than general shortest path routing algorithms such as Dijkstra's or extended Bellman-Ford algorithms. Traffic density, priority, and error QoS requirements on communication delays can be easily incorporated into the QAR algorithm through satellite distances. The QAR algorithm also supports efficient load balancing in the satellite network by utilizing the multiple paths from the source satellite to the destination satellite, and effectively lowers the rate of network congestion. The QAR algorithm supports a novel robust routing scheme in LEO polar constellation, which is able to significantly reduce the impact of inter-satellite link (ISL) congestions on QoS in terms of communication delay and jitter.
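For comparison, the general shortest-path baseline mentioned above (Dijkstra's algorithm with delay-derived link weights) is easy to sketch on a toy two-plane constellation fragment; the topology and delay numbers are invented for the example, not real ISL parameters:

```python
import heapq

def dijkstra(graph, src, dst):
    """graph: {node: [(neighbor, delay), ...]}. Returns (total_delay, path)."""
    dist, prev = {src: 0.0}, {}
    heap, seen = [(0.0, src)], set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return dist[dst], path[::-1]

# Illustrative fragment of two orbital planes A and B: intra-plane ISLs are
# cheap (1.0), inter-plane ISLs cost more (3.0). Stand-in numbers only.
isl = {
    "A1": [("A2", 1.0), ("B1", 3.0)],
    "A2": [("A1", 1.0), ("A3", 1.0), ("B2", 3.0)],
    "A3": [("A2", 1.0), ("B3", 3.0)],
    "B1": [("B2", 1.0), ("A1", 3.0)],
    "B2": [("B1", 1.0), ("B3", 1.0), ("A2", 3.0)],
    "B3": [("B2", 1.0), ("A3", 3.0)],
}
delay, path = dijkstra(isl, "A1", "B3")
```

Several equal-delay routes exist between A1 and B3; a QAR-style scheme would exploit exactly that multiplicity to spread traffic and avoid congested ISLs, rather than always returning one of them.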
Computer Applications in Balancing Chemical Equations.
ERIC Educational Resources Information Center
Kumar, David D.
2001-01-01
Discusses computer-based approaches to balancing chemical equations. Surveys 13 methods: 6 based on matrix algebra, 2 interactive programs, 1 stand-alone system, 1 implemented as an algorithm in BASIC, 1 based on design engineering, 1 written in HyperCard, and 1 prepared for the World Wide Web. (Contains 17 references.) (Author/YDS)
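The matrix-based methods surveyed here share one core step: write each species as a column of element counts and solve the resulting homogeneous system for the smallest whole-number coefficients. A self-contained sketch using exact rational arithmetic, with methane combustion as the worked example:

```python
from fractions import Fraction
from math import gcd

def balance(reactants, products):
    """Each species is a dict of element counts. Returns the smallest
    integer coefficients, reactants first, via the element-balance matrix."""
    species = reactants + [{e: -n for e, n in p.items()} for p in products]
    elements = sorted({e for s in species for e in s})
    # One balance equation per element; one column per species.
    A = [[Fraction(s.get(e, 0)) for s in species] for e in elements]
    k = len(species)
    # Fix the first coefficient to 1 and move its column to the right-hand side.
    rows = [row[1:] + [-row[0]] for row in A]
    # Gauss-Jordan elimination with Fractions (exact arithmetic).
    pivots, r = [], 0
    for c in range(k - 1):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        rows[r] = [v / rows[r][c] for v in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                f = rows[i][c]
                rows[i] = [v - f * p for v, p in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    coeffs = [Fraction(1)] + [Fraction(0)] * (k - 1)
    for row, c in zip(rows, pivots):
        coeffs[c + 1] = row[-1]
    # Scale the rational solution to the smallest whole-number coefficients.
    lcm = 1
    for f in coeffs:
        lcm = lcm * f.denominator // gcd(lcm, f.denominator)
    ints = [int(f * lcm) for f in coeffs]
    g = 0
    for v in ints:
        g = gcd(g, v)
    return [v // g for v in ints]

# CH4 + 2 O2 -> CO2 + 2 H2O
coeffs = balance(
    [{"C": 1, "H": 4}, {"O": 2}],
    [{"C": 1, "O": 2}, {"H": 2, "O": 1}],
)
```

This sketch assumes the nullspace is one-dimensional (a single independent reaction), which holds for typical textbook equations.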
Code of Federal Regulations, 2012 CFR
2012-01-01
14 Aeronautics and Space (2012-01-01): § 23.425 Gust loads. FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION; AIRCRAFT AIRWORTHINESS STANDARDS: NORMAL, UTILITY, ACROBATIC, AND COMMUTER CATEGORY AIRPLANES; Structure; Horizontal Stabilizing and Balancing Surfaces.
Monte-Carlo Simulation Balancing in Practice
NASA Astrophysics Data System (ADS)
Huang, Shih-Chieh; Coulom, Rémi; Lin, Shun-Shii
Simulation balancing is a new technique to tune parameters of a playout policy for a Monte-Carlo game-playing program. So far, this algorithm had only been tested in a very artificial setting: it was limited to 5×5 and 6×6 Go, and required a stronger external program that served as a supervisor. In this paper, the effectiveness of simulation balancing is demonstrated in a more realistic setting. A state-of-the-art program, Erica, learned an improved playout policy on the 9×9 board, without requiring any external expert to provide position evaluations. The evaluations were collected by letting the program analyze positions by itself. The previous version of Erica learned pattern weights with the minorization-maximization algorithm. Thanks to simulation balancing, its playing strength was improved from a winning rate of 69% to 78% against Fuego 0.4.
A Dynamic Era-Based Time-Symmetric Block Time-Step Algorithm with Parallel Implementations
NASA Astrophysics Data System (ADS)
Kaplan, Murat; Saygin, Hasan
2012-06-01
The time-symmetric block time-step (TSBTS) algorithm is a newly developed efficient scheme for N-body integrations. It is constructed on an era-based iteration. In this work, we re-designed the TSBTS integration scheme with a dynamically changing era size. A number of numerical tests were performed to show the importance of choosing the size of the era, especially for long-time integrations. Our second aim was to show that the TSBTS scheme is as suitable as previously known schemes for developing parallel N-body codes. In this work, we relied on a parallel scheme using the copy algorithm for the time-symmetric scheme. We implemented a hybrid of data and task parallelization for force calculation to handle load balancing problems that can appear in practice. Using the Plummer model initial conditions for different numbers of particles, we obtained the expected efficiency and speedup for a small number of particles. Although parallelization of the direct N-body codes is negatively affected by the communication/calculation ratios, we obtained good load-balanced results. Moreover, we were able to conserve the advantages of the algorithm (e.g., energy conservation for long-term simulations).
Concurrent algorithms for a mobile robot vision system
Jones, J.P.; Mann, R.C.
1988-01-01
The application of computer vision to mobile robots has generally been hampered by insufficient on-board computing power. The advent of VLSI-based general purpose concurrent multiprocessor systems promises to give mobile robots an increasing amount of on-board computing capability, and to allow computation intensive data analysis to be performed without high-bandwidth communication with a remote system. This paper describes the integration of robot vision algorithms on a 3-dimensional hypercube system on-board a mobile robot developed at Oak Ridge National Laboratory. The vision system is interfaced to navigation and robot control software, enabling the robot to maneuver in a laboratory environment, to find a known object of interest and to recognize the object's status based on visual sensing. We first present the robot system architecture and the principles followed in the vision system implementation. We then provide some benchmark timings for low-level image processing routines, describe a concurrent algorithm with load balancing for the Hough transform, a new algorithm for binary component labeling, and an algorithm for the concurrent extraction of region features from labeled images. This system analyzes a scene in less than 5 seconds and has proven to be a valuable experimental tool for research in mobile autonomous robots. 9 refs., 1 fig., 3 tabs.
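The concurrent Hough transform with load balancing can be imitated by statically splitting the theta range into equal slices, one per worker, and merging the partial accumulators at the end; the image points and discretization below are invented for the sketch and are not the paper's hypercube implementation:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor
from math import cos, sin, radians

def hough_partial(points, thetas):
    """One worker's share: accumulate (rho, theta) votes over its angle slice."""
    acc = Counter()
    for theta in thetas:
        t = radians(theta)
        for x, y in points:
            rho = round(x * cos(t) + y * sin(t))
            acc[(rho, theta)] += 1
    return acc

def hough(points, workers=4):
    # Static load balancing: the 180-degree theta range is cut into equal
    # slices, one per worker; partial accumulators are merged at the end.
    thetas = list(range(180))
    step = len(thetas) // workers  # assumes workers divides 180 evenly
    slices = [thetas[i * step:(i + 1) * step] for i in range(workers)]
    acc = Counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for partial in pool.map(lambda s: hough_partial(points, s), slices):
            acc.update(partial)
    return acc

# Collinear points on the line y = x vote most strongly at rho=0, theta=135.
points = [(i, i) for i in range(50)]
peak, votes = max(hough(points).items(), key=lambda kv: kv[1])
```

On a hypercube the merge step would be a reduction across nodes instead of a loop over futures, but the decomposition by angle range is the same idea.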
A Parallel Ghosting Algorithm for The Flexible Distributed Mesh Database
Mubarak, Misbah; Seol, Seegyoung; Lu, Qiukai; Shephard, Mark S.
2013-01-01
Critical to the scalability of parallel adaptive simulations are parallel control functions including load balancing, reduced inter-process communication and optimal data decomposition. In distributed meshes, many mesh-based applications frequently access neighborhood information for computational purposes which must be transmitted efficiently to avoid parallel performance degradation when the neighbors are on different processors. This article presents a parallel algorithm of creating and deleting data copies, referred to as ghost copies, which localize neighborhood data for computation purposes while minimizing inter-process communication. The key characteristics of the algorithm are: (1) It can create ghost copies of any permissible topological order in a 1D, 2D or 3D mesh based on selected adjacencies. (2) It exploits neighborhood communication patterns during the ghost creation process thus eliminating all-to-all communication. (3) For applications that need neighbors of neighbors, the algorithm can create n number of ghost layers up to a point where the whole partitioned mesh can be ghosted. Strong and weak scaling results are presented for the IBM BG/P and Cray XE6 architectures up to a core count of 32,768 processors. The algorithm also leads to scalable results when used in a parallel super-convergent patch recovery error estimator, an application that frequently accesses neighborhood data to carry out computation.
Weight/balance portable test equipment
Whitlock, R.W.
1994-11-03
This document shows the general layout, and gives a part description for the weight/balance test equipment. This equipment will aid in the regulation of the leachate loading of tanker trucks. The report contains four drawings with part specifications. The leachate originates from lined trenches.
O'Brien, M. J.; Brantley, P. S.
2015-01-20
In order to run Monte Carlo particle transport calculations on new supercomputers with hundreds of thousands or millions of processors, care must be taken to implement scalable algorithms. This means that the algorithms must continue to perform well as the processor count increases. In this paper, we examine the scalability of: (1) globally resolving the particle locations on the correct processor, (2) deciding that particle streaming communication has finished, and (3) efficiently coupling neighbor domains together with different replication levels. We have run domain-decomposed Monte Carlo particle transport on up to 2^21 = 2,097,152 MPI processes on the IBM BG/Q Sequoia supercomputer and observed scalable results that agree with our theoretical predictions. These calculations were carefully constructed to have the same amount of work on every processor, i.e. the calculation is already load balanced. We also examine load-imbalanced calculations where each domain's replication level is proportional to its particle workload. In this case we show how to efficiently couple together adjacent domains to maintain within-workgroup load balance and minimize memory usage.
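The workload-proportional replication described above can be sketched as a rank-assignment rule: each spatial domain receives a share of the MPI ranks proportional to its particle count, with at least one rank per domain. The workloads and pool size are illustrative, not from the paper:

```python
def replication_levels(workloads, total_ranks):
    """Assign MPI ranks to spatial domains in proportion to each domain's
    particle workload, guaranteeing at least one rank per domain."""
    n = len(workloads)
    total = sum(workloads)
    # Reserve the guaranteed rank per domain, then share out the rest.
    spare = total_ranks - n
    raw = [spare * w / total for w in workloads]
    levels = [1 + int(r) for r in raw]
    # Largest-remainder rounding for the leftover ranks.
    order = sorted(range(n), key=lambda i: raw[i] - int(raw[i]), reverse=True)
    for i in order[: total_ranks - sum(levels)]:
        levels[i] += 1
    return levels

# Three domains with very different particle counts, 16 ranks to distribute.
levels = replication_levels([800, 150, 50], total_ranks=16)
```

Matching each domain's replication level to its particle load is what keeps the per-rank work roughly equal once the calculation itself is imbalanced.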
NASA Technical Reports Server (NTRS)
Leonard, J. I.
1977-01-01
The water balance of the Skylab crew was analyzed. Evaporative water loss was determined using a whole-body input/output balance equation, together with analyses of the water, body-tissue, and energy balances. The approach utilizes the results of several major Skylab medical experiments. Subsystems were designed for the use of the software necessary for the analysis. A partitional water balance that graphically depicts the changes due to water intake is presented. The energy balance analysis determines the net available energy to the individual crewman during any period. The balances produce a visual description of the total change of a particular body component during the course of the mission. The information is salvaged from metabolic balance data if certain techniques are used to reduce errors inherent in the balance method.
Pollock, A S; Durward, B R; Rowe, P J; Paul, J P
2000-08-01
Balance is a term frequently used by health professionals working in a wide variety of clinical specialities. There is no universally accepted definition of human balance, or related terms. This article identifies mechanical definitions of balance and introduces clinical definitions of balance and postural control. Postural control is defined as the act of maintaining, achieving or restoring a state of balance during any posture or activity. Postural control strategies may be either predictive or reactive, and may involve either a fixed-support or a change-in-support response. Clinical tests of balance assess different components of balance ability. Health professionals should select clinical assessments based on a sound knowledge and understanding of the classification of balance and postural control strategies. PMID:10945424
Polarization-balanced beamsplitter
Decker, D.E.
1998-02-17
A beamsplitter assembly is disclosed that includes several beamsplitter cubes arranged to define a plurality of polarization-balanced light paths. Each polarization-balanced light path contains one or more balanced pairs of light paths, where each balanced pair of light paths includes either two transmission light paths with orthogonal polarization effects or two reflection light paths with orthogonal polarization effects. The orthogonal pairing of said transmission and reflection light paths cancels polarization effects otherwise caused by beamsplitting. 10 figs.
Polarization-balanced beamsplitter
Decker, Derek E.
1998-01-01
A beamsplitter assembly that includes several beamsplitter cubes arranged to define a plurality of polarization-balanced light paths. Each polarization-balanced light path contains one or more balanced pairs of light paths, where each balanced pair of light paths includes either two transmission light paths with orthogonal polarization effects or two reflection light paths with orthogonal polarization effects. The orthogonal pairing of said transmission and reflection light paths cancels polarization effects otherwise caused by beamsplitting.
NASA Technical Reports Server (NTRS)
Warner, Edward P; Norton, F H
1920-01-01
Report embodies a description of the balance designed and constructed for the use of the National Advisory Committee for Aeronautics at Langley Field, and also deals with the theory of sensitivity of balances and with the errors to which wind tunnel balances of various types are subject.
ERIC Educational Resources Information Center
Larson, Bonnie
2001-01-01
Discusses coaching for balance the integration of the whole self: physical (body), intellectual (mind), spiritual (soul), and emotional (heart). Offers four ways to identify problems and tell whether someone is out of balance and four coaching techniques for creating balance. (Contains 11 references.) (JOW)
Inducer Hydrodynamic Load Measurement Devices
NASA Technical Reports Server (NTRS)
Skelley, Stephen E.; Zoladz, Thomas F.; Turner, Jim (Technical Monitor)
2002-01-01
Marshall Space Flight Center (MSFC) has demonstrated two measurement devices for sensing and resolving the hydrodynamic loads on fluid machinery. The first - a derivative of the six-component wind tunnel balance - senses the forces and moments on the rotating device through a weakened shaft section instrumented with a series of strain gauges. This rotating balance was designed to directly measure the steady and unsteady hydrodynamic loads on an inducer, thereby defining both the amplitude and frequency content associated with operating in various cavitation modes. The second device - a high frequency response pressure transducer surface mounted on a rotating component - was merely an extension of existing technology for application in water. MSFC has recently completed experimental evaluations of both the rotating balance and surface-mount transducers in a water test loop. The measurement bandwidth of the rotating balance was severely limited by the relative flexibility of the device itself, resulting in an unexpectedly low structural bending mode and invalidating the higher-frequency response data. Despite these limitations, measurements confirmed that the integrated loads on the four-bladed inducer respond to both cavitation intensity and cavitation phenomena. Likewise, the surface-mount pressure transducers were subjected to a range of temperatures and flow conditions in a non-rotating environment to record bias shifts and transfer functions between the transducers and a reference device. The pressure transducer static performance was within manufacturer's specifications and dynamic response accurately followed that of the reference.
Reconceptualizing balance: attributes associated with balance performance.
Thomas, Julia C; Odonkor, Charles; Griffith, Laura; Holt, Nicole; Percac-Lima, Sanja; Leveille, Suzanne; Ni, Pensheng; Latham, Nancy K; Jette, Alan M; Bean, Jonathan F
2014-09-01
Balance tests are commonly used to screen for impairments that put older adults at risk for falls. The purpose of this study was to determine the attributes that were associated with balance performance as measured by the Frailty and Injuries: Cooperative Studies of Intervention Techniques (FICSIT) balance test. This study was a cross-sectional secondary analysis of baseline data from a longitudinal cohort study, the Boston Rehabilitative Impairment Study of the Elderly (Boston RISE). Boston RISE was performed in an outpatient rehabilitation research center and evaluated Boston-area primary care patients aged 65 to 96 (N=364) with self-reported difficulty with, or task modification when, climbing a flight of stairs or walking 1/2 mile. The outcome measure was standing balance as measured by the FICSIT-4 balance assessment. Other measures included: self-efficacy, pain, depression, executive function, vision, sensory loss, reaction time, kyphosis, leg range of motion, trunk extensor muscle endurance, leg strength, and leg velocity at peak power. Participants were 67% female, had an average age of 76.5 (±7.0) years, an average of 4.1 (±2.0) chronic conditions, and an average FICSIT-4 score of 6.7 (±2.2) out of 9. After adjusting for age and gender, the attributes significantly associated with balance performance were falls self-efficacy, trunk extensor muscle endurance, sensory loss, and leg velocity at peak power. FICSIT-4 balance performance is associated with a number of behavioral and physiologic attributes, many of which are amenable to rehabilitative treatment. Our findings support a consideration of balance as a multidimensional activity, as proposed by the current International Classification of Functioning, Disability, and Health (ICF) model. PMID:24952097
Ohlinger, L.A.
1958-10-01
A device is presented for loading or charging bodies of fissionable material into a reactor. This device consists of a car, mounted on tracks, into which the fissionable materials may be placed at a remote area, transported to the reactor, and inserted without danger to the operating personnel. The car has mounted on it a heavily shielded magazine for holding a number of the radioactive bodies. The magazine is of a U-shaped configuration and is inclined to the horizontal plane, with a cap covering the elevated open end, and a remotely operated plunger at the lower, closed end. After the fissionable bodies are loaded in the magazine and transported to the reactor, the plunger inserts the body at the lower end of the magazine into the reactor, then is withdrawn, thereby allowing gravity to roll the remaining bodies into position for successive loading in a similar manner.
Algorithms for parallel flow solvers on message passing architectures
NASA Technical Reports Server (NTRS)
Vanderwijngaart, Rob F.
1995-01-01
The purpose of this project has been to identify and test suitable technologies for implementation of fluid flow solvers -- possibly coupled with structures and heat equation solvers -- on MIMD parallel computers. In the course of this investigation much attention has been paid to efficient domain decomposition strategies for ADI-type algorithms. Multi-partitioning derives its efficiency from the assignment of several blocks of grid points to each processor in the parallel computer. A coarse-grain parallelism is obtained, and a near-perfect load balance results. In uni-partitioning every processor receives responsibility for exactly one block of grid points instead of several. This necessitates fine-grain pipelined program execution in order to obtain a reasonable load balance. Although fine-grain parallelism is less desirable on many systems, especially high-latency networks of workstations, uni-partition methods are still in wide use in production codes for flow problems. Consequently, it remains important to achieve good efficiency with this technique, even though it has essentially been superseded by multi-partitioning for parallel ADI-type algorithms. Another reason for the concentration on improving the performance of pipeline methods is their applicability in other types of flow solver kernels with stronger implied data dependence. Analytical expressions can be derived for the size of the dynamic load imbalance incurred in traditional pipelines. From these, the optimal first-processor retardation that leads to the shortest total completion time for the pipeline process can be determined. Theoretical predictions of pipeline performance with and without optimization match experimental observations on the iPSC/860 very well. Analysis of pipeline performance also highlights the effect of careless grid partitioning in flow solvers that employ pipeline algorithms. If grid blocks at boundaries are not at least as large in the wall-normal direction as those
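The fill/drain tradeoff behind pipelined sweeps can be illustrated with a toy timing model. This is an assumption-laden sketch, not the report's actual analysis: P processors pass a sweep of N rows per processor downstream in M chunks; fine chunks shorten pipeline fill at the cost of extra per-message latency. All parameter values are invented.

```python
# Toy timing model of a pipelined (uni-partition) sweep -- an
# illustrative assumption, not the report's analysis.  P processors,
# N rows each, per-row work tau, per-message latency lam; the sweep
# moves downstream in M chunks, giving (P - 1 + M) pipeline stages.

def completion_time(P, N, tau, lam, M):
    """Total time: pipeline fill of P - 1 stages plus M chunk stages."""
    chunk_cost = (N / M) * tau + lam
    return (P - 1 + M) * chunk_cost

def best_chunk_count(P, N, tau, lam):
    """Exhaustively find the chunk count minimizing completion time."""
    return min(range(1, N + 1), key=lambda M: completion_time(P, N, tau, lam, M))

# Example: one big chunk wastes fill time; many tiny chunks pay latency.
P, N, tau, lam = 16, 256, 1.0, 8.0
M_opt = best_chunk_count(P, N, tau, lam)
```

For these numbers the optimum lands near sqrt((P-1)·N·tau/lam), the familiar square-root chunking rule for pipelined line solves.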
NASA Astrophysics Data System (ADS)
Abrams, Daniel S.
This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases (commonly found in ab initio physics and chemistry problems) for which all known classical algorithms require exponential time. Fast algorithms for simulating many body Fermi systems are also provided in both first and second quantized descriptions. An efficient quantum algorithm for anti-symmetrization is given as well as a detailed discussion of a simulation of the Hubbard model. In addition, quantum algorithms that calculate numerical integrals and various characteristics of stochastic processes are described. Two techniques are given, both of which obtain an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo (probabilistic) methods. I derive a simpler and slightly faster version of Grover's mean algorithm, show how to apply quantum counting to the problem, develop some variations of these algorithms, and show how both (apparently distinct) approaches can be understood from the same unified framework. Finally, the relationship between physics and computation is explored in some more depth, and it is shown that computational complexity theory depends very sensitively on physical laws. In particular, it is shown that nonlinear quantum mechanics allows for the polynomial time solution of NP-complete and #P oracle problems. Using the Weinberg model as a simple example, the explicit construction of the necessary gates is derived from the underlying physics. Nonlinear quantum algorithms are also presented using Polchinski type nonlinearities which do not allow for superluminal communication. (Copies available exclusively from MIT Libraries, Rm. 14- 0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)
An efficient distributed algorithm for constructing spanning trees in wireless sensor networks.
Lachowski, Rosana; Pellenz, Marcelo E; Penna, Manoel C; Jamhour, Edgard; Souza, Richard D
2015-01-01
Monitoring and data collection are the two main functions in wireless sensor networks (WSNs). Collected data are generally transmitted via multihop communication to a special node, called the sink. While in a typical WSN, nodes have a sink node as the final destination for the data traffic, in an ad hoc network, nodes need to communicate with each other. For this reason, routing protocols for ad hoc networks are inefficient for WSNs. Trees, on the other hand, are classic routing structures explicitly or implicitly used in WSNs. In this work, we implement and evaluate distributed algorithms for constructing routing trees in WSNs described in the literature. After identifying the drawbacks and advantages of these algorithms, we propose a new algorithm for constructing spanning trees in WSNs. The performance of the proposed algorithm and the quality of the constructed tree were evaluated in different network scenarios. The results showed that the proposed algorithm is a more efficient solution. Furthermore, the algorithm provides multiple routes to the sensor nodes to be used as mechanisms for fault tolerance and load balancing. PMID:25594593
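As a point of comparison only, the tree structure the abstract describes can be sketched centrally: a minimum-hop tree rooted at the sink in which every node records all neighbors one hop closer, yielding the multiple routes used for fault tolerance and load balancing. The paper's algorithm is distributed; this toy version (graph and node names are invented) is not it.

```python
from collections import deque

def bfs_routing_tree(adj, sink):
    """Hop counts from the sink and, per node, every neighbor one hop
    closer to the sink (all candidate parents, i.e., alternate routes)."""
    hops = {sink: 0}
    parents = {sink: []}
    q = deque([sink])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in hops:                 # first time v is reached
                hops[v] = hops[u] + 1
                parents[v] = []
                q.append(v)
            if hops[v] == hops[u] + 1:        # u lies on a shortest route
                parents[v].append(u)
    return hops, parents

# Tiny example network: two disjoint routes from node c to the sink s.
adj = {'s': ['a', 'b'], 'a': ['s', 'c'], 'b': ['s', 'c'], 'c': ['a', 'b']}
hops, parents = bfs_routing_tree(adj, 's')
```

Node c ends up with two candidate parents (a and b), so a failed parent or a congested link can be routed around without rebuilding the tree.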
ERIC Educational Resources Information Center
Csernus, Marilyn
Carbohydrate loading is a frequently used technique to improve performance by altering an athlete's diet. The objective is to increase glycogen stored in muscles for use in prolonged strenuous exercise. For two to three days, the athlete consumes a diet that is low in carbohydrates and high in fat and protein while continuing to exercise and…
Aging effects on the structure underlying balance abilities tests.
Urushihata, Toshiya; Kinugasa, Takashi; Soma, Yuki; Miyoshi, Hirokazu
2010-01-01
Balance impairment is one of the biggest risk factors for falls, leading to reduced activity and, ultimately, the need for nursing care. Therefore, balance ability is crucial to maintain the activities of independent daily living of older adults. Many tests to assess balance ability have been developed. However, few reports reveal the structure underlying results of balance performance tests comparing young and older adults. Covariance structure analysis is a tool that is used to test statistically whether a factorial structure fits data. This study examined aging effects on the factorial structure underlying balance performance tests. Participants comprised 60 healthy young women aged 22 ± 3 years (young group) and 60 community-dwelling older women aged 69 ± 5 years (older group). Six balance tests: postural sway, one-leg standing, functional reach, timed up and go (TUG), gait, and the EquiTest were employed. Exploratory factor analysis revealed that three clearly interpretable factors were extracted in the young group. The first factor had high loadings on the EquiTest, and was interpreted as 'Reactive'. The second factor had high loadings on the postural sway test, and was interpreted as 'Static'. The third factor had high loadings on TUG and the gait test, and was interpreted as 'Dynamic'. Similarly, three interpretable factors were extracted in the older group. The first factor had high loadings on the postural sway test and the EquiTest and therefore was interpreted as 'Static and Reactive'. The second factor, which had high loadings on the EquiTest, was interpreted as 'Reactive'. The third factor, which had high loadings on TUG and the gait test, was interpreted as 'Dynamic'. A covariance structure model was applied to the test data: the second-order factor was balance ability, and the first-order factors were static, dynamic and reactive factors which were assumed to be measured based on the six balance tests. Goodness-of-fit index (GFI) of the models were acceptable (young group, GFI
Makarov, Yuri V.; Huang, Zhenyu; Etingov, Pavel V.; Ma, Jian; Guttromson, Ross T.; Subbarao, Krishnappa; Chakrabarti, Bhujanga B.
2010-01-01
unique features make this work a significant step forward toward the objective of incorporating wind, solar, load, and other uncertainties into power system operations. Currently, uncertainties associated with wind and load forecasts, as well as uncertainties associated with random generator outages and unexpected disconnection of supply lines, are not taken into account in power grid operation. Thus, operators have little means to weigh the likelihood and magnitude of upcoming events of power imbalance. In this project, funded by the U.S. Department of Energy (DOE), a framework has been developed for incorporating uncertainties associated with wind and load forecast errors, unpredicted ramps, and forced generation disconnections into the energy management system (EMS) as well as generation dispatch and commitment applications. A new approach to evaluate the uncertainty ranges for the required generation performance envelope including balancing capacity, ramping capability, and ramp duration has been proposed. The approach includes three stages: forecast and actual data acquisition, statistical analysis of retrospective information, and prediction of future grid balancing requirements for specified time horizons and confidence levels. Assessment of the capacity and ramping requirements is performed using a specially developed probabilistic algorithm based on a histogram analysis, incorporating all sources of uncertainties of both continuous (wind and load forecast errors) and discrete (forced generator outages and start-up failures) nature. A new method called the “flying brick” technique has been developed to evaluate the look-ahead required generation performance envelope for the worst case scenario within a user-specified confidence level. A self-validation algorithm has been developed to validate the accuracy of the confidence intervals.
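The histogram-based statistical stage described above can be sketched minimally, under invented numbers: pool samples of net imbalance (continuous forecast error plus occasional discrete outages) and read a balancing-capacity envelope off symmetric empirical percentiles at a chosen confidence level. This illustrates the general idea only, not the project's actual algorithm or the "flying brick" technique.

```python
import random

def capacity_envelope(imbalances, confidence=0.95):
    """(down, up) capacity bounds covering `confidence` of the samples,
    taken as symmetric percentiles of the empirical distribution."""
    xs = sorted(imbalances)
    lo = int((1.0 - confidence) / 2 * (len(xs) - 1))
    hi = int((1.0 + confidence) / 2 * (len(xs) - 1))
    return xs[lo], xs[hi]

rng = random.Random(1)
# Continuous wind/load forecast error plus a rare discrete forced outage
# (all magnitudes are invented, in MW).
sample = [rng.gauss(0.0, 50.0) - (400.0 if rng.random() < 0.01 else 0.0)
          for _ in range(10000)]
down, up = capacity_envelope(sample)
```

The down/up pair is the regulation-down and regulation-up capacity that would have covered 95% of the sampled imbalances; widening the confidence level trades procurement cost against the risk of uncovered events.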
Sobel, E.; Lange, K.; O'Connell, J.R.
1996-12-31
Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.
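One of the four algorithms mentioned above uses simulated annealing for combinatorial optimization. The skeleton below shows the generic technique on a stand-in objective (mismatches against a hidden target bit vector, a crude analogue of searching over haplotype vectors); the actual genetic likelihood, the temperature schedule, and all parameter values here are illustrative assumptions, not the paper's.

```python
import math, random

def anneal(cost, neighbor, state, t0=1.0, cooling=0.995, steps=5000, seed=0):
    """Generic simulated annealing: accept downhill moves always,
    uphill moves with probability exp(-delta / temperature)."""
    rng = random.Random(seed)
    t, best = t0, state
    for _ in range(steps):
        cand = neighbor(state, rng)
        delta = cost(cand) - cost(state)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            state = cand
        if cost(state) < cost(best):
            best = state
        t *= cooling                      # geometric cooling schedule
    return best

# Stand-in objective: recover a hidden bit vector by single-bit moves.
target = [1, 0, 1, 1, 0, 0, 1, 0]
cost = lambda s: sum(a != b for a, b in zip(s, target))

def neighbor(s, rng):
    i = rng.randrange(len(s))             # flip one random bit
    return s[:i] + [s[i] ^ 1] + s[i + 1:]

best = anneal(cost, neighbor, [0] * len(target))
```

The appeal for haplotyping is the same as for any such search: the state space is too large to enumerate, but local moves plus occasional uphill acceptance escape the local optima that defeat greedy reconstruction.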
The parallelization of an advancing-front, all-quadrilateral meshing algorithm for adaptive analysis
Lober, R.R.; Tautges, T.J.; Cairncross, R.A.
1995-11-01
The ability to perform effective adaptive analysis has become a critical issue in the area of physical simulation. Of the multiple technologies required to realize a parallel adaptive analysis capability, automatic mesh generation is an enabling technology, filling a critical need in the appropriate discretization of a problem domain. The paving algorithm's unique ability to generate a function-following quadrilateral grid is a substantial advantage in Sandia's pursuit of a modified h-method adaptive capability. This characteristic, combined with a strong transitioning ability, allows the paving algorithm to place elements where an error function indicates more mesh resolution is needed. Although the original paving algorithm is highly serial, a two-stage approach has been designed to parallelize the algorithm while retaining the nice qualities of the serial algorithm. The authors' approach also allows the subdomain decomposition used by the meshing code to be shared with the finite element physics code, eliminating the need for data transfer across the processors between the analysis and remeshing steps. In addition, the meshed subdomains are adjusted with a dynamic load balancer to improve the original decomposition and maintain load efficiency each time the mesh has been regenerated. This initial parallel implementation assumes an approach of restarting the physics problem from time zero at each iteration, with a refined mesh adapting to the previous iteration's objective function. The remeshing tools are being developed to enable real-time remeshing and geometry regeneration. Progress on the redesign of the paving algorithm for parallel operation is discussed, including extensions allowing adaptive control and geometry regeneration.
Loading relativistic Maxwell distributions in particle simulations
Zenitani, Seiji
2015-04-15
Numerical algorithms to load relativistic Maxwell distributions in particle-in-cell (PIC) and Monte-Carlo simulations are presented. For a stationary relativistic Maxwellian, the inverse transform method and the Sobol algorithm are reviewed. To boost particles to obtain a relativistic shifted Maxwellian, two rejection methods are proposed in a physically transparent manner. Their acceptance efficiencies are ≈50% for generic cases and 100% for symmetric distributions. They can be combined with arbitrary base algorithms.
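A sketch of the two-step procedure the abstract outlines, reconstructed from memory of the standard recipes rather than from the paper itself (consult the paper for the exact forms): the Sobol rejection algorithm draws a rest-frame Maxwell-Jüttner momentum magnitude, and a flipping step followed by a Lorentz boost produces the shifted Maxwellian. Here T is in units of mc², u denotes γβ, and the specific flipping condition is my reconstruction.

```python
import math, random

def sobol_maxwell_juttner(T, rng):
    """|u| = gamma*beta drawn from a rest-frame Maxwell-Juttner
    distribution at temperature T (units of mc^2), Sobol rejection."""
    while True:
        x1, x2, x3, x4 = (1.0 - rng.random() for _ in range(4))
        u = -T * math.log(x1 * x2 * x3)
        eta = u - T * math.log(x4)
        if eta * eta - u * u > 1.0:       # accept; else redraw
            return u

def shifted_maxwellian(T, beta_s, rng):
    """One 4-velocity (ux, uy, uz) shifted along x with speed beta_s:
    isotropic rest-frame draw, flipping step, then Lorentz boost."""
    u = sobol_maxwell_juttner(T, rng)
    ct = 2.0 * rng.random() - 1.0                   # isotropic direction
    phi = 2.0 * math.pi * rng.random()
    st = math.sqrt(1.0 - ct * ct)
    ux, uy, uz = u * ct, u * st * math.cos(phi), u * st * math.sin(phi)
    gamma = math.sqrt(1.0 + u * u)
    if -beta_s * (ux / gamma) > rng.random():       # flipping step
        ux = -ux
    gamma_s = 1.0 / math.sqrt(1.0 - beta_s * beta_s)
    return gamma_s * (ux + beta_s * gamma), uy, uz
```

Because the rest-frame distribution is symmetric in ux, the flipping step rejects nothing: it only reverses the sign of some particles, which is why the symmetric case reaches 100% acceptance.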
Implementation and performance of a domain decomposition algorithm in Sisal
DeBoni, T.; Feo, J.; Rodrigue, G.; Muller, J.
1993-09-23
Sisal is a general-purpose functional language that hides the complexity of parallel processing, expedites parallel program development, and guarantees determinacy. Parallelism and management of concurrent tasks are realized automatically by the compiler and runtime system. Spatial domain decomposition is a widely-used method that focuses computational resources on the most active, or important, areas of a domain. Many complex programming issues are introduced in parallelizing this method, including: dynamic spatial refinement, dynamic grid partitioning and fusion, task distribution, data distribution, and load balancing. In this paper, we describe a spatial domain decomposition algorithm programmed in Sisal. We explain the compilation process, and present the execution performance of the resultant code on two different multiprocessor systems: a multiprocessor vector supercomputer, and a cache-coherent scalar multiprocessor.
Applications of concurrent neuromorphic algorithms for autonomous robots
NASA Technical Reports Server (NTRS)
Barhen, J.; Dress, W. B.; Jorgensen, C. C.
1988-01-01
This article provides an overview of studies at the Oak Ridge National Laboratory (ORNL) of neural networks running on parallel machines applied to the problems of autonomous robotics. The first section provides the motivation for our work in autonomous robotics and introduces the computational hardware in use. Section 2 presents two theorems concerning the storage capacity and stability of neural networks. Section 3 presents a novel load-balancing algorithm implemented with a neural network. Section 4 introduces the robotics test bed now in place. Section 5 concerns navigation issues in the test-bed system. Finally, Section 6 presents a frequency-coded network model and shows how Darwinian techniques are applied to issues of parameter optimization and on-line design.
Study and Analyses on the Structural Performance of a Balance
NASA Technical Reports Server (NTRS)
Karkehabadi, R.; Rhew, R. D.; Hope, D. J.
2004-01-01
Strain-gauge balances for use in wind tunnels have been designed at Langley Research Center (LaRC) since its inception. Currently, Langley has more than 300 balances available for its researchers. A force balance is inherently a critically stressed component due to the requirements of measurement sensitivity. The strain-gauge balances have been used in Langley's wind tunnels for a wide variety of aerodynamic tests, and the designs encompass a large array of sizes, loads, and environmental effects. There are six degrees of freedom that a balance has to measure. The balance's task to measure these six degrees of freedom has introduced challenging work in transducer development technology areas. As the emphasis increases on improving aerodynamic performance of all types of aircraft and spacecraft, the demand for improved balances is at the forefront. Force balance stress analysis and acceptance criteria are under review due to LaRC wind tunnel operational safety requirements. This paper presents some of the analyses and research done at LaRC that influence structural integrity of the balances. The analyses are helpful in understanding the overall behavior of existing balances and can be used in the design of new balances to enhance performance. Initially, a maximum load combination was used for a linear structural analysis. When nonlinear effects were encountered, the analysis was extended to include nonlinearities using MSC.Nastran. Because most of the balances are designed using Pro/Mechanica, it is desirable and efficient to use Pro/Mechanica for stress analysis. However, Pro/Mechanica is limited to linear analysis. Both Pro/Mechanica and MSC.Nastran are used for analyses in the present work. The structural integrity of balances and the possibility of modifying existing balances to enhance structural integrity are investigated.
Cellular accommodation and the response of bone to mechanical loading.
Schriefer, Jennifer L; Warden, Stuart J; Saxon, Leanne K; Robling, Alexander G; Turner, Charles H
2005-09-01
Several mathematical rules by which bone adapts to mechanical loading have been proposed. Previous work focused mainly on negative feedback models, e.g., bone adapts to increased loading after a minimum effective strain (MES) threshold has been reached. The MES algorithm has numerous caveats, so we propose a different model, according to which bone adapts to changes in its mechanical environment based on the principle of cellular accommodation. With the new algorithm we presume that strain history is integrated into cellular memory so that the reference state for adaptation is constantly changing. To test this algorithm, an experiment was performed in which the ulnae of Sprague-Dawley rats were loaded in axial compression. The animals received loading for 15 weeks with progressively decreasing loads, increasing loads, or a constant load. The results showed the largest increases in geometry in the decreasing load group, followed by the constant load group. Bone formation rates (BFRs) were significantly greater in the decreasing load group during the first 2 weeks of the study as compared to all other groups (P<0.05). After the first few weeks of mechanical loading, the BFR in the loaded ulnae returned to the values of the nonloaded ulnae. These experimental results closely fit the predicted results of the cellular accommodation algorithm. After the initial weeks of loading, bone stopped responding so the degree of adaptation was proportional to the initial peak load magnitude. PMID:16023471
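The cellular-accommodation idea lends itself to a toy model (an illustration, not the authors' formulation): the osteogenic stimulus is load relative to a moving reference that relaxes toward recent load history, so a constant load stops stimulating after a transient while the initial response tracks the initial peak load. The rate constant, baseline, and units are invented.

```python
def accommodation(loads, alpha=0.2, ref=0.0):
    """Per-step formation stimulus when the reference state `ref`
    relaxes toward the recent load history at rate `alpha`."""
    stimulus = []
    for load in loads:
        stimulus.append(max(load - ref, 0.0))   # only overload stimulates
        ref += alpha * (load - ref)             # cellular memory update
    return stimulus

constant = accommodation([2.0] * 30)            # response decays to zero
```

Under a constant load the stimulus decays geometrically as the reference catches up, mirroring the observed return of bone formation rates to nonloaded values after the first weeks.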
Mullett, L.B.; Loach, B.G.; Adams, G.L.
1958-06-24
Loaded waveguides are described for the propagation of electromagnetic waves with reduced phase velocities. A rectangular waveguide is dimensioned so as to cut off the simple H01 mode at the operating frequency. The waveguide is capacitance-loaded, so as to reduce the phase velocity of the transmitted wave, by connecting an electrical conductor between directly opposite points in the major median plane on the narrower pair of waveguide walls. This conductor may take a corrugated shape or be an apertured member, the important factor being that the electrical length of the conductor is greater than one-half wavelength at the operating frequency. Prepared for the Second U.N. International Conference.
The importance of nuclear standards is discussed. A brief review of the international collaboration in this field is given. The proposal is made to let the International Organization for Standardization (ISO) coordinate the efforts from other groups. (W.D.M.)
Identifying Balance in a Balanced Scorecard System
ERIC Educational Resources Information Center
Aravamudhan, Suhanya; Kamalanabhan, T. J.
2007-01-01
In recent years, strategic management concepts seem to be gaining greater attention from the academicians and the practitioner's alike. Balanced Scorecard (BSC) concept is one such management concepts that has spread in worldwide business and consulting communities. The BSC translates mission and vision statements into a comprehensive set of…
Maximizing TDRS Command Load Lifetime
NASA Technical Reports Server (NTRS)
Brown, Aaron J.
2002-01-01
The GNC software onboard ISS utilizes TDRS command loads and a simplistic model of TDRS orbital motion to generate onboard TDRS state vectors. Each TDRS command load contains five "invariant" orbital elements which serve as inputs to the onboard propagation algorithm. These elements include semi-major axis, inclination, time of last ascending node crossing, right ascension of ascending node, and mean motion. Running parallel to the onboard software is the TDRS Command Builder Tool application, located in the JSC Mission Control Center. The TDRS Command Builder Tool is responsible for building the TDRS command loads using a ground TDRS state vector, mirroring the onboard propagation algorithm, and assessing the fidelity of current TDRS command loads onboard ISS. The tool works by extracting a ground state vector at a given time from a current TDRS ephemeris, and then calculating the corresponding "onboard" TDRS state vector at the same time using the current onboard TDRS command load. The tool then performs a comparison between these two vectors and displays the relative differences in the command builder tool GUI. If the RSS position difference between these two vectors exceeds the tolerable limits, a new command load is built using the ground state vector and uplinked to ISS. A command load's lifetime is therefore defined as the time from when a command load is built to the time the RSS position difference exceeds the tolerable limit. From the outset of TDRS command load operations (STS-98), command load lifetime was limited to approximately one week due to the simplicity of both the onboard propagation algorithm and the algorithm used by the command builder tool to generate the invariant orbital elements. It was soon desired to extend command load lifetime in order to minimize potential risk due to frequent ISS commanding. Initial studies indicated that command load lifetime was most sensitive to changes in mean motion. Finding a suitable value for mean motion
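The comparison step at the heart of the lifetime definition is simple to sketch: take the root-sum-square of the component-wise position differences between the ground-ephemeris vector and the onboard-model vector at the same epoch, and flag a rebuild when it exceeds a tolerance. The vectors and tolerance below are invented; the onboard element set and propagator are not modeled.

```python
import math

def rss_position_difference(ground, onboard):
    """Root-sum-square of the component-wise position differences."""
    return math.sqrt(sum((g - o) ** 2 for g, o in zip(ground, onboard)))

def needs_new_command_load(ground, onboard, tolerance_km):
    """True when the onboard model has drifted past the tolerance,
    i.e., the command load has reached the end of its lifetime."""
    return rss_position_difference(ground, onboard) > tolerance_km

# Invented position vectors (km): ground ephemeris vs onboard-propagated.
ground = (42164.0, 10.0, -5.0)
onboard = (42161.0, 14.0, -5.0)
```

A command load's lifetime is then just the elapsed time until this check first trips; extending lifetime means slowing the growth of the RSS difference, which, per the abstract, hinges mostly on the mean-motion element.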
Linear and Nonlinear Analyses of a Wind-Tunnel Balance
NASA Technical Reports Server (NTRS)
Karkehabadi, R.; Rhew, R. D.
2004-01-01
The NASA Langley Research Center (LaRC) has been designing strain-gauge balances for utilization in wind tunnels since its inception. The balances are used in a wide variety of aerodynamic tests. A force balance is an inherently critically stressed component due to the requirements of measurement sensitivity. Force balance stress analysis and acceptance criteria are under review due to LaRC wind tunnel operational safety requirements. This paper presents some of the analyses done at NASA LaRC. Research and analyses were performed in order to investigate the structural integrity of the balances and better understand their performance. The analyses presented in this paper are helpful in understanding the overall behavior of an existing balance and can also be used in the design of new balances to enhance their performance. As a first step, the maximum load combination is used for linear structural analysis. When nonlinear effects are encountered, the analysis is extended to include the nonlinearities. Balance 1621 is typical for LaRC-designed balances and was chosen for this study due to its traditional high load capacity, Figure 1. Maximum loading occurs when all 6 components are applied simultaneously with their maximum allowed value (limit load). This circumstance normally will not occur in the wind tunnel. However, if it occurs, is the balance capable of handling the loads with an acceptable factor of safety? Preliminary analysis using Pro/Mechanica indicated that this balance might experience nonlinearity. It was decided to analyze this balance by using NASTRAN so that a nonlinear analysis could be conducted. Balance 1621 was modeled and meshed in PATRAN for analysis in NASTRAN. The model from PATRAN/NASTRAN is compared to the one from Pro/Mechanica. For a complete analysis, it is necessary to consider all the load cases as well as use a dense mesh near all the edges. Because of computer limitations, it is not feasible to analyze the model with the dense mesh near
ERIC Educational Resources Information Center
La Porta, Rafael; Lopez-de-Silanes, Florencio; Pop-Eleches, Cristian; Shleifer, Andrei
2004-01-01
In the Anglo-American constitutional tradition, judicial checks and balances are often seen as crucial guarantees of freedom. Hayek distinguishes two ways in which the judiciary provides such checks and balances: judicial independence and constitutional review. We create a new database of constitutional rules in 71 countries that reflect these…
Inevitability of Balance Restoration
2010-01-01
Prolonged imbalance between input and output of any element in a living organism is incompatible with life. The duration of imbalance varies, but eventually balance is achieved. This rule applies to any quantifiable element in a compartment of finite capacity. Transient discrepancies occur regularly, but given sufficient time, balance is always achieved, because permanent imbalance is impossible, and the mechanism for eventual restoration of balance is foolproof. The kidney is a central player for balance restoration of fluid and electrolytes, but the smartness of the kidney is not the reason for perfect balance. The kidney merely accelerates the process. The most crucial element of the control system is that discrepancy between intake and output inevitably leads to a change in total content of the element in the system, and uncorrected imbalance has a cumulative effect on the overall content of the element. In a living organism, the speed of restoration of balance depends on the permissible duration of imbalance without death or severe disability. The three main factors that influence the speed of balance restoration are: magnitude of flux, basal store, and capacity for additional storage. For most electrolytes, total capacity is such that a substantial discrepancy is not possible for more than a week or two. Most control mechanisms correct abnormality partially. The infinite gain control mechanism is unique in that abnormality is completely corrected upon completion of compensation. PMID:21468193
ERIC Educational Resources Information Center
Blakley, G. R.
1982-01-01
Reviews mathematical techniques for solving systems of homogeneous linear equations and demonstrates that the algebraic method of balancing chemical equations is a matter of solving a system of homogeneous linear equations. FORTRAN programs using this matrix method to chemical equation balancing are available from the author. (JN)
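The matrix method described in this abstract can be sketched in a few lines (the author's FORTRAN programs are not reproduced here; this is an independent illustration): each element gives one row of a homogeneous system Ax = 0 over the species, and the smallest positive integer null-space vector is the set of balanced coefficients. The elimination code below assumes a uniquely balanceable equation (one-dimensional null space).

```python
from fractions import Fraction
from math import lcm

def balance(matrix):
    """Find the smallest positive integer vector x with A x = 0.

    `matrix` has one row per element and one column per species,
    with product columns entered negated.  Assumes the null space
    is one-dimensional (a uniquely balanceable equation)."""
    rows = [[Fraction(v) for v in row] for row in matrix]
    n = len(rows[0])
    pivot_cols, r = [], 0
    # Gauss-Jordan elimination with exact rational arithmetic.
    for c in range(n):
        pivot = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        rows[r] = [v / rows[r][c] for v in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                factor = rows[i][c]
                rows[i] = [a - factor * b for a, b in zip(rows[i], rows[r])]
        pivot_cols.append(c)
        r += 1
    free = next(c for c in range(n) if c not in pivot_cols)
    x = [Fraction(0)] * n
    x[free] = Fraction(1)          # set the free variable to 1 ...
    for row, pc in zip(rows, pivot_cols):
        x[pc] = -row[free]         # ... and back-substitute the rest
    scale = lcm(*(v.denominator for v in x))
    return [int(v * scale) for v in x]

# C3H8 + O2 -> CO2 + H2O; rows are C, H, O; product columns negated.
A = [[3, 0, -1,  0],
     [8, 0,  0, -2],
     [0, 2, -2, -1]]
print(balance(A))  # -> [1, 5, 3, 4]
```

Scaling by the least common multiple of the denominators turns the rational null-space vector into the familiar whole-number coefficients: C3H8 + 5 O2 -> 3 CO2 + 4 H2O.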
ERIC Educational Resources Information Center
Hines, Thomas E.
2011-01-01
Maintaining balance in leadership can be difficult because balance is affected by the personality, strengths, and attitudes of the leader as well as the complicated environment within and outside the community college itself. This article explores what being a leader at the community college means, what the threats are to effective leadership, and…
ERIC Educational Resources Information Center
Coulson, Eddie K.
2006-01-01
"The Technology Balance Beam" is designed to question the role of technology within school districts. This case study chronicles a typical school district in relation to the school district's implementation of technology beginning in the 1995-1996 school year. The fundamental question that this scenario raises is, What is the balance between…
ERIC Educational Resources Information Center
Mosey, Edward
1991-01-01
The booming economy of the Pacific Northwest region promotes the dilemma of balancing the need for increased electrical power with the desire to maintain that region's unspoiled natural environment. Pertinent factors discussed within the balance equation are population trends, economic considerations, industrial power requirements, and…
Balanced Literacy Instruction.
ERIC Educational Resources Information Center
Pressley, Michael; Roehrig, Alysia; Bogner, Kristen; Raphael, Lisa M.; Dolezal, Sara
2002-01-01
This article reviews the evidence for balanced literacy instruction in the elementary years. The case is made that the balanced instructional model is particularly appropriate and beneficial for students who have initial difficulties in learning to read and write. Key features of successful reading instruction programs are described. (Contains…
Optimum stacking sequence design of composite sandwich panel using genetic algorithms
NASA Astrophysics Data System (ADS)
Bir, Amarpreet Singh
Composite sandwich structures have recently gained preference over conventional metals and simple composite laminates for various structural components in the aerospace industry. For the most widely used composite sandwich structures, the optimization problem only requires determining the best stacking sequence and the number of laminae with different fiber orientations. The genetic algorithm, an optimization technique based on Darwin's theory of survival of the fittest, is well suited to such problems. The present research focuses on stacking-sequence optimization of composite sandwich panels with laminated face-sheets, for both critical-buckling-load maximization and thickness minimization under bi-axial compressive loading. Previous studies investigated only balanced, even-numbered simple composite laminate panels, ignoring the effects of bending-twisting coupling terms. The current work broadens the application of genetic algorithms to more complex composite sandwich panels with balanced, unbalanced, even- and odd-numbered face-sheet laminates, including the effects of bending-twisting coupling terms.
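The abstract does not spell out the GA encoding or operators, so the following toy sketch only illustrates the general mechanism on a ply-angle sequence: tournament selection, one-point crossover, and single-gene mutation. The fitness function here is a hypothetical stand-in (a real objective would evaluate a buckling-load model of the laminate).

```python
import random

random.seed(1)
ANGLES = [0, 45, -45, 90]  # candidate ply orientations (degrees)

def ga_stacking(fitness, n_plies=8, pop_size=30, generations=40):
    """Toy GA over ply-angle sequences: 3-way tournament selection,
    one-point crossover, and occasional single-gene mutation.
    `fitness` scores a candidate laminate (stand-in, not a real
    structural model)."""
    pop = [[random.choice(ANGLES) for _ in range(n_plies)]
           for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            p1 = max(random.sample(pop, 3), key=fitness)
            p2 = max(random.sample(pop, 3), key=fitness)
            cut = random.randrange(1, n_plies)
            child = p1[:cut] + p2[cut:]
            if random.random() < 0.2:  # mutate one ply angle
                child[random.randrange(n_plies)] = random.choice(ANGLES)
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Stand-in fitness: prefer +/-45 plies (say, shear-dominated loading).
best = ga_stacking(lambda seq: sum(1 for a in seq if abs(a) == 45))
```

On this easy stand-in objective the population converges to a nearly all-±45 stack within a few dozen generations; the same loop structure carries over when the fitness call is replaced by a panel-buckling analysis.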
Development of a six component flexured two shell internal strain gage balance
NASA Astrophysics Data System (ADS)
Mole, P. J.
1993-01-01
The paper describes the development of a new wind tunnel balance designed to meet the load requirements of the new advanced aircraft. Based on the floating frame or two-shell concept, the Flexured Balance incorporates a separate axial element, thus allowing for higher load per unit diameter, reduced primary load interaction, and greater flexibility in load range selection. Described is the design process, fabrication, gaging, calibration results, and performance during tunnel testing of the first prototype balance. Supporting data and accuracies are provided.
Combinatorial optimization methods for disassembly line balancing
NASA Astrophysics Data System (ADS)
McGovern, Seamus M.; Gupta, Surendra M.
2004-12-01
Disassembly takes place in remanufacturing, recycling, and disposal with a line being the best choice for automation. The disassembly line balancing problem seeks a sequence which: minimizes workstations, ensures similar idle times, and is feasible. Finding the optimal balance is computationally intensive due to factorial growth. Combinatorial optimization methods hold promise for providing solutions to the disassembly line balancing problem, which is proven to belong to the class of NP-complete problems. Ant colony optimization, genetic algorithm, and H-K metaheuristics are presented and compared along with a greedy/hill-climbing heuristic hybrid. A numerical study is performed to illustrate the implementation and compare performance. Conclusions drawn include the consistent generation of optimal or near-optimal solutions, the ability to preserve precedence, the speed of the techniques, and their practicality due to ease of implementation.
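The greedy side of the greedy/hill-climbing hybrid mentioned above is not specified in the abstract; a minimal sketch of a typical greedy station-assignment rule (largest feasible task first, assuming every task time fits within the cycle time) looks like this:

```python
def greedy_balance(times, prec, cycle_time):
    """Greedy station assignment for (dis)assembly line balancing:
    repeatedly pack the largest task whose predecessors are already
    assigned and which still fits in the station's remaining cycle
    time; open a new station when nothing fits.  `prec[t]` is the set
    of tasks that must precede task t; assumes every single task time
    is <= cycle_time."""
    done, stations = set(), []
    while len(done) < len(times):
        station, remaining = [], cycle_time
        while True:
            ready = [t for t in times if t not in done
                     and prec.get(t, set()) <= done
                     and times[t] <= remaining]
            if not ready:
                break
            pick = max(ready, key=lambda t: times[t])
            station.append(pick)
            done.add(pick)
            remaining -= times[pick]
        stations.append(station)
    return stations

# 5 hypothetical tasks with times and precedence, cycle time 8.
stations = greedy_balance({1: 4, 2: 3, 3: 5, 4: 2, 5: 4},
                          {3: {1}, 4: {1, 2}, 5: {3, 4}}, 8)
print(stations)  # -> [[1, 2], [3, 4], [5]]
```

Greedy solutions like this respect precedence by construction but are only a starting point; the metaheuristics compared in the paper (ant colony, GA, H-K) search beyond such locally optimal packings.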
Balanced Multiwavelets Based Digital Image Watermarking
NASA Astrophysics Data System (ADS)
Zhang, Na; Huang, Hua; Zhou, Quan; Qi, Chun
In this paper, an adaptive blind watermarking algorithm based on the balanced multiwavelet transform is proposed. Drawing on the properties of balanced multiwavelets and the human vision system, a modified version of the well-established Lewis perceptual model is given, so that the strength of the embedded watermark is controlled by the local properties of the host image. The subbands of the balanced multiwavelet transform are similar to each other within the same scale, so the most similar subbands are chosen and the watermark is embedded by adaptively modifying the relation between the two subbands under the model; watermark extraction can then be performed without the original image. Experimental results show that the watermarked images look visually identical to the original ones, and the watermark also survives image processing operations such as cropping, scaling, filtering, and JPEG compression.
NASA Astrophysics Data System (ADS)
Mozdgir, A.; Mahdavi, Iraj; Seyyedi, I.; Shiraqei, M. E.
2011-06-01
An assembly line is a flow-oriented production system in which the productive units performing the operations, referred to as stations, are aligned serially. The assembly line balancing problem arises and must be solved when an assembly line is configured or redesigned. The so-called simple assembly line balancing problem (SALBP), a basic version of the general problem, has attracted the attention of researchers and practitioners of operations research for almost half a century. Four types of objective function are considered for this problem, and the versions of SALBP may be complemented by a secondary objective of smoothing station loads. Because of the problem's computational complexity and the difficulty of identifying an optimal solution, many heuristics have been proposed. In this paper, a differential evolution algorithm is developed to minimize the workload smoothness index in SALBP-2, and the algorithm parameters are optimized using the Taguchi method.
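The workload smoothness index minimized here has a standard definition in the line-balancing literature; assuming the usual RMS-deviation form (the paper's DE operators are not reproduced), it is simply:

```python
from math import sqrt

def smoothness_index(station_times):
    """Workload smoothness index: root of summed squared deviations of
    station times from the most loaded station (0 means a perfectly
    level line)."""
    s_max = max(station_times)
    return sqrt(sum((s_max - s) ** 2 for s in station_times))

print(smoothness_index([10, 10, 10]))  # -> 0.0  (perfectly balanced)
print(smoothness_index([12, 9, 9]))    # -> sqrt(18) ~= 4.243
```

Minimizing this index while keeping the station count fixed (SALBP-2) pushes every station's load toward the bottleneck station's load, which is what "smoothing station loads" means above.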
Active balance system and vibration balanced machine
NASA Technical Reports Server (NTRS)
Qiu, Songgang (Inventor); Augenblick, John E. (Inventor); Peterson, Allen A. (Inventor); White, Maurice A. (Inventor)
2005-01-01
An active balance system is provided for counterbalancing vibrations of an axially reciprocating machine. The balance system includes a support member, a flexure assembly, a counterbalance mass, and a linear motor or an actuator. The support member is configured for attachment to the machine. The flexure assembly includes at least one flat spring having connections along a central portion and an outer peripheral portion. One of the central portion and the outer peripheral portion is fixedly mounted to the support member. The counterbalance mass is fixedly carried by the flexure assembly along another of the central portion and the outer peripheral portion. The linear motor has one of a stator and a mover fixedly mounted to the support member and another of the stator and the mover fixedly mounted to the counterbalance mass. The linear motor is operative to axially reciprocate the counterbalance mass.
Magnetic suspension and balance systems (MSBSs)
NASA Technical Reports Server (NTRS)
Britcher, Colin P.; Kilgore, Robert A.
1987-01-01
The problems of wind tunnel testing are outlined, with attention given to the problems caused by mechanical support systems, such as support interference, dynamic-testing restrictions, and low productivity. The basic principles of magnetic suspension are highlighted, along with the history of magnetic suspension and balance systems. Roll control, size limitations, high angle of attack, reliability, position sensing, and calibration are discussed among the problems and limitations of the existing magnetic suspension and balance systems. Examples of the existing systems are presented, and design studies for future systems are outlined. Problems specific to large-scale magnetic suspension and balance systems, such as high model loads, requirements for high-power electromagnets, high-capacity power supplies, highly sophisticated control systems and position sensors, and high costs are assessed.
Wallace, B.
1991-01-01
This book discusses the radiation effects on Drosophila. It was originally thought that irradiating Drosophila would decrease the average fitness of the population, thereby leading to information about the detrimental effects of mutations. Surprisingly, the fitness of the irradiated population turned out to be higher than that of the control population. The original motivation for the experiment was as a test of genetic load theory. The average fitness of a population is depressed by deleterious alleles held in the population by the balance between mutation and natural selection. The depression is called the genetic load of the population. The load does not depend on the magnitude of the deleterious effect of alleles, but only on the mutation rate.
Fault-Tolerant Algorithms for Connectivity Restoration in Wireless Sensor Networks
Zeng, Yali; Xu, Li; Chen, Zhide
2015-01-01
As wireless sensor network (WSN) is often deployed in a hostile environment, nodes in the networks are prone to large-scale failures, resulting in the network not working normally. In this case, an effective restoration scheme is needed to restore the faulty network timely. Most of existing restoration schemes consider more about the number of deployed nodes or fault tolerance alone, but fail to take into account the fact that network coverage and topology quality are also important to a network. To address this issue, we present two algorithms named Full 2-Connectivity Restoration Algorithm (F2CRA) and Partial 3-Connectivity Restoration Algorithm (P3CRA), which restore a faulty WSN in different aspects. F2CRA constructs the fan-shaped topology structure to reduce the number of deployed nodes, while P3CRA constructs the dual-ring topology structure to improve the fault tolerance of the network. F2CRA is suitable when the restoration cost is given the priority, and P3CRA is suitable when the network quality is considered first. Compared with other algorithms, these two algorithms ensure that the network has stronger fault-tolerant function, larger coverage area and better balanced load after the restoration. PMID:26703616
Lober, R.R.; Tautges, T.J.; Vaughan, C.T.
1997-03-01
Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving, only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm and demonstrate its capabilities on both two-dimensional and three-dimensional surface geometries and compare the resulting parallel-produced meshes to conventionally paved meshes for mesh quality and algorithm performance. Sandia's "tiling" dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.
Analytical study of pressure balancing in gas film seals.
NASA Technical Reports Server (NTRS)
Zuk, J.
1973-01-01
Proper pressure balancing of gas film seals requires knowledge of the pressure profile load factor (load factor) values for a given set of design conditions. In this study, the load factor is investigated for subsonic and choked flow conditions, laminar and turbulent flows, and various seal entrance conditions. Both parallel sealing surfaces and surfaces with small linear deformation were investigated. The load factor for subsonic flow depends strongly on pressure ratio; under choked flow conditions, however, the load factor is found to depend more strongly on film thickness and flow entrance conditions rather than pressure ratio. The importance of generating hydrodynamic forces to keep the seal balanced under severe and multipoint operation is also discussed.
Load Leveling Battery System Costs
1994-10-12
SYSPLAN evaluates capital investment in customer-side-of-the-meter load leveling battery systems. Such systems reduce the customer's monthly electrical demand charge by reducing the maximum power load supplied by the utility during the customer's peak demand. System equipment consists of a large array of batteries, a current converter, and balance-of-plant equipment and facilities required to support the battery and converter system. The system is installed on the customer's side of the meter and controlled and operated by the customer. Its economic feasibility depends largely on the customer's load profile. Load shape requirements, utility rate structures, and battery equipment cost and performance data serve as bases for determining whether a load leveling battery system is economically feasible for a particular installation. Life-cycle costs for system hardware include all costs associated with the purchase, installation, and operation of battery, converter, and balance-of-plant facilities and equipment. The SYSPLAN spreadsheet software is specifically designed to evaluate these costs and the reduced demand charge benefits; it completes a 20-year life-cycle cost analysis based on the battery system description and cost data. A built-in sensitivity analysis routine is also included for key battery cost parameters. The life-cycle cost analysis spreadsheet is augmented by a system sizing routine to help users identify load leveling system size requirements for their facilities. The optional XSIZE system sizing spreadsheet which is included can be used to identify a range of battery system sizes that might be economically attractive. XSIZE output consisting of system operating requirements can then be passed via the temporary file SIZE to the main SYSPLAN spreadsheet.
Cook, G.; Brown, H.; Strawn, N.
1996-12-31
Nature seeks a balance. The global carbon cycle, in which carbon is exchanged between the atmosphere, biosphere, and oceans through natural processes such as absorption, photosynthesis, and respiration, is one of those balances. This constant exchange promotes an equilibrium in which atmospheric carbon dioxide is kept relatively steady over long periods of time. For the last 10,000 years, up to the 19th century, the global carbon cycle maintained atmospheric concentrations of carbon dioxide between 260 and 290 ppm. This article discusses the disturbance of this balance, how ethanol fuels address the carbon dioxide imbalance, and a bioethanol strategy.
Consideration of Dynamical Balances
NASA Technical Reports Server (NTRS)
Errico, Ronald M.
2015-01-01
The quasi-balance of extra-tropical tropospheric dynamics is a fundamental aspect of nature. If an atmospheric analysis does not reflect such balance sufficiently well, the subsequent forecast will exhibit unrealistic behavior associated with spurious fast-propagating gravity waves. Even if these eventually damp, they can create poor background fields for a subsequent analysis or interact with moist physics to create spurious precipitation. The nature of this problem will be described along with the reasons for atmospheric balance and techniques for mitigating imbalances. Attention will be focused on fundamental issues rather than on recipes for various techniques.
NASA Technical Reports Server (NTRS)
1996-01-01
NeuroCom's Balance Master is a system to assess and then retrain patients with balance and mobility problems and is used in several medical centers. NeuroCom received assistance in research and funding from NASA, and incorporated technology from testing mechanisms for astronauts after shuttle flights. The EquiTest and Balance Master Systems are computerized posturography machines that measure patient responses to movement of a platform on which the subject is standing or sitting, then provide assessments of the patient's postural alignment and stability.
Forbes, G.B.; Lantigua, R.; Amatruda, J.M.; Lockwood, D.H.
1981-01-01
Six overweight adult subjects given a low-calorie diet containing adequate amounts of nitrogen but subnormal amounts of potassium (K) were observed in the Clinical Research Center for periods of 29 to 40 days. Metabolic balance of potassium was measured together with frequent assays of total body K by ⁴⁰K counting. Metabolic K balance underestimated body K losses by 11 to 87% (average 43%); the intersubject variability is such as to preclude the use of a single correction value for unmeasured losses in K balance studies.
Makarov, Yuri V.; Huang, Zhenyu; Etingov, Pavel V.; Ma, Jian; Guttromson, Ross T.; Subbarao, Krishnappa; Chakrabarti, Bhujanga B.
2010-09-01
features make this work a significant step forward toward the objective of incorporating wind, solar, load, and other uncertainties into power system operations. In this report, a new methodology to predict the uncertainty ranges for the required balancing capacity, ramping capability and ramp duration is presented. Uncertainties created by system load forecast errors, wind and solar forecast errors, and generation forced outages are taken into account. The uncertainty ranges are evaluated for different confidence levels of having the actual generation requirements within the corresponding limits. The methodology helps to identify the system balancing reserve requirement based on desired system performance levels, identify system “breaking points”, where the generation system becomes unable to follow the generation requirement curve with the user-specified probability level, and determine the time remaining to these potential events. The approach includes three stages: statistical and actual data acquisition, statistical analysis of retrospective information, and prediction of future grid balancing requirements for specified time horizons and confidence intervals. Assessment of the capacity and ramping requirements is performed using a specially developed probabilistic algorithm based on a histogram analysis incorporating all sources of uncertainty and parameters of a continuous (wind forecast and load forecast errors) and discrete (forced generator outages and failures to start up) nature. Preliminary simulations using California Independent System Operator (California ISO) real-life data have shown the effectiveness of the proposed approach. A tool developed based on the new methodology described in this report will be integrated with the California ISO systems. Contractual work is currently in place to integrate the tool with the AREVA EMS system.
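The histogram-based uncertainty evaluation described above can be illustrated with a minimal percentile sketch (the report's full algorithm, which also folds in discrete outage events and ramping analysis, is not reproduced; the data here are synthetic):

```python
def balancing_range(errors, confidence=0.95):
    """Central-percentile uncertainty range from a sample of net
    forecast errors (e.g. load error minus wind/solar error, in MW).
    The central `confidence` interval of the empirical distribution
    bounds the balancing capacity needed at that confidence level."""
    combined = sorted(errors)
    lo = int((1 - confidence) / 2 * (len(combined) - 1))
    hi = int((1 + confidence) / 2 * (len(combined) - 1))
    return combined[lo], combined[hi]

# 100 synthetic net-error samples, -50..49 MW:
errs = list(range(-50, 50))
print(balancing_range(errs, confidence=0.90))  # -> (-46, 44)
```

Reading the result: with these samples, carrying roughly 46 MW of downward and 44 MW of upward balancing capacity would cover the net forecast error about 90% of the time; "breaking points" correspond to requirement excursions beyond such limits.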
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Lomax, Harvard
1987-01-01
The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.
The Challenge is to develop ideas for how NASA can turn available entry, descent, and landing balance mass on a future Mars mission into a scientific or technological payload. Proposed concepts sho...
ERIC Educational Resources Information Center
Willows, Dale
2002-01-01
Describes professional development program in Ontario school district to improve student reading and writing skills. Program used food-pyramid concepts to help teacher learn to provide a balanced and flexible approach to literacy instruction based on student needs. (PKP)
Fowler, Kimberly M.
2008-05-01
This essay is being proposed as part of a book titled: "Motherhood: The Elephant in the Laboratory." It offers professional and personal advice on how to balance working in the research field with a family life.
... They are in your blood, urine and body fluids. Maintaining the right balance of electrolytes helps your ... them from the foods you eat and the fluids you drink. Levels of electrolytes in your body ...
NASA Technical Reports Server (NTRS)
1991-01-01
Researchers at the Balance Function Laboratory and Clinic at the Minneapolis (MN) Neuroscience Institute on the Abbot Northwestern Hospital Campus are using a rotational chair (technically a "sinusoidal harmonic acceleration system") originally developed by NASA to investigate vestibular (inner ear) function in weightlessness to diagnose and treat patients with balance function disorders. Manufactured by ICS Medical Corporation, Schaumberg, IL, the chair system turns a patient and monitors his or her responses to rotational stimulation.
Exercise to Improve Your Balance
Having good balance is important for many everyday activities, such as ... fracture of the arm, hand, ankle, or hip. Balance exercises can help you prevent falls and avoid ...
Greenland Ice Sheet Mass Balance
NASA Technical Reports Server (NTRS)
Reeh, N.
1984-01-01
Mass balance equation for glaciers; areal distribution and ice volumes; estimates of actual mass balance; loss by calving of icebergs; hydrological budget for Greenland; and temporal variations of Greenland mass balance are examined.
Hill, James O.; Wyatt, Holly R.; Peters, John C.
2012-01-01
This paper describes the interplay among energy intake, energy expenditure and body energy stores and illustrates how an understanding of energy balance can help develop strategies to reduce obesity. First, reducing obesity will require modifying both energy intake and energy expenditure and not simply focusing on either alone. Food restriction alone will not be effective in reducing obesity if human physiology is biased toward achieving energy balance at a high energy flux (i.e. at a high level of energy intake and expenditure). In previous environments a high energy flux was achieved with a high level of physical activity, but in today's sedentary environment it is increasingly achieved through weight gain. Matching energy intake to a high level of energy expenditure will likely be a more feasible strategy for most people to maintain a healthy weight than restricting food intake to meet a low level of energy expenditure. Second, from an energy balance point of view we are likely to be more successful in preventing excessive weight gain than in treating obesity. This is because the energy balance system shows much stronger opposition to weight loss than to weight gain. While large behavior changes are needed to produce and maintain reductions in body weight, small behavior changes may be sufficient to prevent excessive weight gain. In conclusion, the concept of energy balance combined with an understanding of how the body achieves balance may be a useful framework in helping develop strategies to reduce obesity rates. PMID:22753534
Adaptive Vibration Reduction Controls for a Cryocooler With a Passive Balancer
NASA Technical Reports Server (NTRS)
Kopasakis, George; Cairelli, James E.; Traylor, Ryan M.
2001-01-01
In this paper an adaptive vibration reduction control (AVRC) design is described for a Stirling cryocooler combined with a passive balancer. The AVRC design was based on a mass-spring model of the cooler and balancer, and the AVRC algorithm described in this paper was based on an adaptive binary search. Results are shown comparing the baseline uncontrolled cooler with no balancer, the cooler with the balancer, and, finally, the cooler with the balancer and the AVRC. The comparison shows that it may be possible to meet stringent vibration reduction requirements without an active balancer.
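The paper's adaptive binary search is not specified in this abstract; as a hypothetical sketch of the idea, the closely related ternary-search form below narrows a drive-amplitude interval toward the setting that minimizes a measured residual-vibration signal, assumed unimodal over the search range. The plant model here is a stand-in, not the cooler/balancer dynamics.

```python
def tune_amplitude(vibration, lo=0.0, hi=1.0, iters=60):
    """Adaptive search tuner: shrink the drive-amplitude interval
    toward the amplitude minimizing the measured residual vibration.
    `vibration(a)` returns the vibration level at amplitude `a` and
    is assumed unimodal over [lo, hi]."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if vibration(m1) < vibration(m2):
            hi = m2  # minimum lies in [lo, m2]
        else:
            lo = m1  # minimum lies in [m1, hi]
    return (lo + hi) / 2

# Hypothetical plant: residual vibration minimized at amplitude 0.62.
best = tune_amplitude(lambda a: (a - 0.62) ** 2)
```

In a real controller, `vibration` would be an averaged accelerometer reading taken after each trial amplitude settles, so each iteration costs one measurement cycle; the interval shrinks by a factor of 2/3 per iteration.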
NASA Astrophysics Data System (ADS)
Robinson, Ian A.
2014-04-01
The time is fast approaching when the SI unit of mass will cease to be based on a single material artefact and will instead be based upon the defined value of a fundamental constant: the Planck constant, h. This change requires that techniques exist both to determine the appropriate value to be assigned to the constant, and to measure mass in terms of the redefined unit. It is important to ensure that these techniques are accurate and reliable to allow full advantage to be taken of the stability and universality provided by the new definition and to guarantee the continuity of the world's mass measurements, which can affect the measurement of many other quantities such as energy and force. Up to now, efforts to provide the basis for such a redefinition of the kilogram were mainly concerned with resolving the discrepancies between individual implementations of the two principal techniques: the x-ray crystal density (XRCD) method [1] and the watt and joule balance methods which are the subject of this special issue. The first three papers report results from the NRC and NIST watt balance groups and the NIM joule balance group. The result from the NRC (formerly the NPL Mk II) watt balance is the first to be reported with a relative standard uncertainty below 2 × 10⁻⁸, and the NIST result has a relative standard uncertainty below 5 × 10⁻⁸. Both results are shown in figure 1 along with some previous results; the result from the NIM group is not shown on the plot but has a relative uncertainty of 8.9 × 10⁻⁶ and is consistent with all the results shown. The Consultative Committee for Mass and Related Quantities (CCM) in its meeting in 2013 produced a resolution [2] which set out the requirements for the number, type and quality of results intended to support the redefinition of the kilogram and required that there should be agreement between them. These results from NRC, NIST and the IAC may be considered to meet these requirements and are likely to be widely debated.
Algorithms for builder guidelines
Balcomb, J.D.; Lekov, A.B.
1989-06-01
The Builder Guidelines are designed to make simple, appropriate guidelines available to builders for their specific localities. Builders may select from passive solar and conservation strategies with different performance potentials. They can then compare the calculated results for their particular house design with a typical house in the same location. Algorithms used to develop the Builder Guidelines are described. The main algorithms used are the monthly solar ratio (SLR) method for winter heating, the diurnal heat capacity (DHC) method for temperature swing, and a new simplified calculation method (McCool) for summer cooling. This paper applies the algorithms to estimate the performance potential of passive solar strategies, and the annual heating and cooling loads of various combinations of conservation and passive solar strategies. The basis of the McCool method is described. All three methods are implemented in a microcomputer program used to generate the guideline numbers. Guidelines for Denver, Colorado, are used to illustrate the results. The structure of the guidelines and worksheet booklets are also presented. 5 refs., 3 tabs.
NASA Technical Reports Server (NTRS)
Thompson, Bryan
2000-01-01
This is the final report for a project carried out to modify a manual commercial Cavendish balance for automated use in a cryostat. The scope of this project was to modify an off-the-shelf, manually operated Cavendish balance to allow automated operation for periods of hours or days in a cryostat. The purpose of this modification was to allow the balance to be used in the study of the effects of superconducting materials on the local gravitational field strength, to determine if the strength of gravitational fields can be reduced. A Cavendish balance was chosen because it is a fairly simple piece of equipment for measuring the gravitational constant, one of the least accurately known and least understood physical constants. The principal activities that occurred under this purchase order were: (1) All the components necessary to hold and automate the Cavendish balance in a cryostat were designed. Engineering drawings were made of custom parts to be fabricated, and other off-the-shelf parts were procured; (2) Software was written in LabView to control the automation process via a stepper motor controller and stepper motor, and to collect data from the balance during testing; (3) Software was written to take the data collected from the Cavendish balance and reduce it to give a value for the gravitational constant; (4) The components of the system were assembled and fitted to a cryostat. The LabView hardware, including the control computer, stepper motor driver, data collection boards, and necessary cabling, was also assembled; and (5) The system was operated for a number of periods, and the data were collected and reduced to give an average value for the gravitational constant.
A Root Zone Water Balance Algorithm for Educational Settings.
ERIC Educational Resources Information Center
Cahoon, Joel E.; Ferguson, Richard B.
1995-01-01
Describes a simple technique for monitoring root zone water status on demonstration project fields and incorporating the demonstration site results into workshop-type educational settings. Surveys indicate the presentation was well received by demonstration project cooperators and educators. (LZ)
Load Control System Reliability
Trudnowski, Daniel
2015-04-03
This report summarizes the results of the Load Control System Reliability project (DOE Award DE-FC26-06NT42750). The original grant was awarded to Montana Tech in April 2006. Follow-on DOE awards and expansions to the project scope occurred in August 2007, January 2009, April 2011, and April 2013. In addition to the DOE monies, the project also included matching funds from the states of Montana and Wyoming. Project participants included Montana Tech, the University of Wyoming, Montana State University, NorthWestern Energy, Inc., and MSE. Research focused on two areas: (1) real-time power-system load control methodologies, and (2) power-system measurement-based stability-assessment operation and control tools. The majority of effort was focused on area 2. Results from the research include: development of fundamental power-system dynamic concepts, control schemes, and signal-processing algorithms; many papers (including two prize papers) in leading journals and conferences, and leadership of IEEE activities; one patent; participation in major actual-system testing in the western North American power system; prototype power-system operation and control software installed and tested at three major North American control centers; and the incubation of a new commercial-grade operation and control software tool. Work under this grant certainly supported the DOE-OE goals in the area of “Real Time Grid Reliability Management.”
Hidden Connections between Regression Models of Strain-Gage Balance Calibration Data
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert
2013-01-01
Hidden connections between regression models of wind tunnel strain-gage balance calibration data are investigated. These connections become visible whenever balance calibration data is supplied in its design format and both the Iterative and the Non-Iterative Method are used to process the data. First, it is shown how the regression coefficients of the fitted balance loads of a force balance can be approximated by using the corresponding regression coefficients of the fitted strain-gage outputs. Then, data from the manual calibration of the Ames MK40 six-component force balance is chosen to illustrate how estimates of the regression coefficients of the fitted balance loads can be obtained from the regression coefficients of the fitted strain-gage outputs. The study illustrates that load predictions obtained by applying the Iterative or the Non-Iterative Method originate from two related regression solutions of the balance calibration data as long as balance loads are given in the design format of the balance, gage outputs are highly linear, strict statistical quality metrics are used to assess regression models of the data, and regression model term combinations of the fitted loads and gage outputs can be obtained by a simple variable exchange.
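The variable-exchange connection above can be illustrated with a toy linear model (synthetic data, not the Ames MK40 calibration): when gage outputs are an exactly linear function of the loads, the regression matrix fitted one way is the inverse of the matrix fitted the other way.

```python
import numpy as np

# Hypothetical illustration (synthetic, noise-free data): for a purely linear
# balance, gage outputs R relate to applied loads F through a sensitivity
# matrix C, R = F @ C.T. Fitting outputs from loads and loads from outputs
# gives two regression solutions that are exact inverses of one another,
# mirroring the "simple variable exchange" mentioned in the abstract.
rng = np.random.default_rng(0)
C = rng.normal(size=(3, 3)) + 3 * np.eye(3)   # assumed gage sensitivities
F = rng.uniform(-1, 1, size=(50, 3))          # synthetic calibration loads
R = F @ C.T                                   # noise-free gage outputs

# Fit outputs as a function of loads (one regression direction)
C_fit = np.linalg.lstsq(F, R, rcond=None)[0].T

# Fit loads as a function of outputs (the exchanged regression direction)
B_fit = np.linalg.lstsq(R, F, rcond=None)[0].T

# For exactly linear data the two solutions are inverses of each other
assert np.allclose(B_fit, np.linalg.inv(C_fit))
```

With real, slightly nonlinear calibration data the two fits would only approximate each other, which is the point of the comparison in the paper.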
14 CFR 23.427 - Unsymmetrical loads.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Balancing Surfaces § 23.427 Unsymmetrical loads. (a) Horizontal surfaces other than main wing and their..., wings, horizontal surfaces other than main wing, and fuselage shape: (1) 100 percent of the maximum... surfaces other than main wing having appreciable dihedral or supported by the vertical tail surfaces)...
14 CFR 23.427 - Unsymmetrical loads.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Balancing Surfaces § 23.427 Unsymmetrical loads. (a) Horizontal surfaces other than main wing and their..., wings, horizontal surfaces other than main wing, and fuselage shape: (1) 100 percent of the maximum... surfaces other than main wing having appreciable dihedral or supported by the vertical tail surfaces)...
14 CFR 23.427 - Unsymmetrical loads.
Code of Federal Regulations, 2011 CFR
2011-01-01
... Balancing Surfaces § 23.427 Unsymmetrical loads. (a) Horizontal surfaces other than main wing and their..., wings, horizontal surfaces other than main wing, and fuselage shape: (1) 100 percent of the maximum... surfaces other than main wing having appreciable dihedral or supported by the vertical tail surfaces)...
14 CFR 23.427 - Unsymmetrical loads.
Code of Federal Regulations, 2012 CFR
2012-01-01
... Balancing Surfaces § 23.427 Unsymmetrical loads. (a) Horizontal surfaces other than main wing and their..., wings, horizontal surfaces other than main wing, and fuselage shape: (1) 100 percent of the maximum... surfaces other than main wing having appreciable dihedral or supported by the vertical tail surfaces)...
14 CFR 23.427 - Unsymmetrical loads.
Code of Federal Regulations, 2014 CFR
2014-01-01
... Balancing Surfaces § 23.427 Unsymmetrical loads. (a) Horizontal surfaces other than main wing and their..., wings, horizontal surfaces other than main wing, and fuselage shape: (1) 100 percent of the maximum... surfaces other than main wing having appreciable dihedral or supported by the vertical tail surfaces)...
An efficient parallel termination detection algorithm
Baker, A. H.; Crivelli, S.; Jessup, E. R.
2004-05-27
Information local to any one processor is insufficient to monitor the overall progress of most distributed computations. Typically, a second distributed computation for detecting termination of the main computation is necessary. In order to be a useful computational tool, the termination detection routine must operate concurrently with the main computation, adding minimal overhead, and it must promptly and correctly detect termination when it occurs. In this paper, we present a new algorithm for detecting the termination of a parallel computation on distributed-memory MIMD computers that satisfies all of those criteria. A variety of termination detection algorithms have been devised. Of these, the algorithm presented by Sinha, Kale, and Ramkumar (henceforth, the SKR algorithm) is unique in its ability to adapt to the load conditions of the system on which it runs, thereby minimizing the impact of termination detection on performance. Because their algorithm also detects termination quickly, we consider it to be the most efficient practical algorithm presently available. The termination detection algorithm presented here was developed for use in the PMESC programming library for distributed-memory MIMD computers. Like the SKR algorithm, our algorithm adapts to system loads and imposes little overhead. Also like the SKR algorithm, ours is tree-based, and it does not depend on any assumptions about the physical interconnection topology of the processors or the specifics of the distributed computation. In addition, our algorithm is easier to implement and requires only half as many tree traversals as the SKR algorithm. This paper is organized as follows. In section 2, we define our computational model. In section 3, we review the SKR algorithm. We introduce our new algorithm in section 4, and prove its correctness in section 5. We discuss its efficiency and present experimental results in section 6.
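The criteria above can be illustrated with a toy, single-process sketch of a generic two-wave tree-reduction scheme (not the authors' algorithm or the SKR algorithm): termination is declared only when every worker is idle and no messages were sent between two consecutive reduction waves.

```python
# Toy, single-process sketch of tree-based termination detection (a generic
# two-wave scheme for illustration only): a reduction over the workers
# reports "terminated" only if every worker was idle on both waves and no
# messages were sent in between, so a late message cannot be missed.
from dataclasses import dataclass

@dataclass
class Worker:
    idle: bool = False
    sent: int = 0   # messages sent since the last wave

def wave(workers):
    """One reduction wave up the tree: returns (all_idle, total_sent)."""
    all_idle = all(w.idle for w in workers)
    total_sent = sum(w.sent for w in workers)
    for w in workers:
        w.sent = 0          # reset counters for the next wave
    return all_idle, total_sent

def detect_termination(workers):
    idle1, sent1 = wave(workers)
    if not idle1:
        return False
    # Second wave confirms no message re-activated a worker in between.
    idle2, sent2 = wave(workers)
    return idle2 and sent1 == 0 and sent2 == 0

workers = [Worker(idle=True) for _ in range(8)]
print(detect_termination(workers))   # all idle, no traffic
workers[3].idle = False
workers[3].sent = 1                  # a busy worker invalidates the snapshot
print(detect_termination(workers))
```

A real implementation would pipeline these waves over the processor tree concurrently with the main computation; the point here is only the double-check that makes a single snapshot safe.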
Ascent guidance algorithm using lidar wind measurements
NASA Technical Reports Server (NTRS)
Cramer, Evin J.; Bradt, Jerre E.; Hardtla, John W.
1990-01-01
The formulation of a general nonlinear programming guidance algorithm that incorporates wind measurements in the computation of ascent guidance steering commands is discussed. A nonlinear programming (NLP) algorithm that is designed to solve a very general problem has the potential to address the diversity demanded by future launch systems. Using B-splines for the command functional form allows the NLP algorithm to adjust the shape of the command profile to achieve optimal performance. The algorithm flexibility is demonstrated by simulation of ascent with dynamic loading constraints through a set of random wind profiles with and without wind sensing capability.
Accelerating Dust Storm Simulation by Balancing Task Allocation in Parallel Computing Environment
NASA Astrophysics Data System (ADS)
Gui, Z.; Yang, C.; XIA, J.; Huang, Q.; YU, M.
2013-12-01
A quadratic-programming-based modeling method is proposed. This algorithm performs well with a small number of computing tasks. However, its efficiency decreases significantly as the subdomain number and computing node number increase. 2) To compensate for this performance degradation on large-scale tasks, a K-Means clustering based algorithm is introduced. Instead of striving for optimal solutions, this method obtains relatively good feasible solutions within acceptable time. However, it may introduce imbalanced communication among nodes or node-isolated subdomains. This research shows that the two algorithms have their own strengths and weaknesses for task allocation. A combination of the two algorithms is under study to obtain better performance. Keywords: Scheduling; Parallel Computing; Load Balance; Optimization; Cost Model
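The K-Means idea can be sketched as follows (an illustrative, self-contained implementation on synthetic subdomain centroids; the paper's cost model and node constraints are not reproduced): clustering spatially adjacent subdomains onto the same node tends to reduce inter-node halo communication.

```python
import numpy as np

# Hedged sketch of K-Means-based task allocation (illustrative only):
# cluster subdomain centroids so that neighboring subdomains land on the
# same computing node, trading optimality for speed as the abstract notes.
def kmeans(points, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # assign each subdomain to its nearest node "center"
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([points[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels

# 64 subdomain centroids on an 8x8 grid, allocated to 4 nodes
xy = np.array([(i, j) for i in range(8) for j in range(8)], dtype=float)
labels = kmeans(xy, k=4)
print(np.bincount(labels))   # subdomains per node; balance is approximate
```

As the abstract observes, nothing in plain K-Means guarantees equal cluster sizes, which is exactly the load-imbalance weakness being traded against communication locality.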
The cryogenic balance design and balance calibration methods
NASA Astrophysics Data System (ADS)
Ewald, B.; Polanski, L.; Graewe, E.
1992-07-01
The current status of a program aimed at the development of a cryogenic balance for the European Transonic Wind Tunnel is reviewed. In particular, attention is given to the cryogenic balance design philosophy, mechanical balance design, reliability and accuracy, the cryogenic balance calibration concept, and the concept of an automatic calibration machine. It is shown that the use of the automatic calibration machine will improve the accuracy of calibration while reducing the manpower and time required for balance calibration.
Zhang, S. L.; Liu, Y.; Collins-McIntyre, L. J.; Hesjedal, T.; Zhang, J. Y.; Wang, S. G.; Yu, G. H.
2013-01-01
Magnetoresistance (MR) effects are at the heart of modern information technology. However, future progress of giant- and tunnelling-MR based storage and logic devices is limited by the usable MR ratios of currently about 200% at room temperature. Colossal MR structures, on the other hand, achieve their high MR ratios of up to 10⁶% only at low temperatures and high magnetic fields. We introduce the extraordinary Hall balance (EHB) and demonstrate room-temperature MR ratios in excess of 31,000%. The new device concept exploits the extraordinary Hall effect in two separated ferromagnetic layers with perpendicular anisotropy, in which the Hall voltages can be configured to be carefully balanced or tipped out of balance. Reprogrammable logic and memory are realised using a single EHB element. PACS numbers: 85.75.Nn, 85.70.Kh, 72.15.Gd, 75.60.Ej. PMID:23804036
Vibration balanced miniature loudspeaker
NASA Astrophysics Data System (ADS)
Schafer, David E.; Jiles, Mekell; Miller, Thomas E.; Thompson, Stephen C.
2002-11-01
The vibration that is generated by the receiver (loudspeaker) in a hearing aid can be a cause of feedback oscillation. Oscillation can occur if the microphone senses the receiver vibration at sufficient amplitude and appropriate phase. Feedback oscillation from this and other causes is a major problem for those who manufacture, prescribe, and use hearing aids. The receivers normally used in hearing aids are of the balanced-armature type, which has a significant moving mass. The reaction force from this moving mass is the source of the vibration. A modification of the balanced-armature transducer has been developed that balances the vibration of its internal parts in a way that significantly reduces the vibration force transmitted outside of the receiver case. This transducer design concept and some of its early prototype test data will be shown. The data indicate that it should be possible to manufacture transducers that generate 15-30 dB less vibration than equivalent present models.
NASA Astrophysics Data System (ADS)
Slattery, Stuart R.
2016-02-01
In this paper we analyze and extend mesh-free algorithms for three-dimensional data transfer problems in partitioned multiphysics simulations. We first provide a direct comparison between a mesh-based weighted residual method using the common-refinement scheme and two mesh-free algorithms leveraging compactly supported radial basis functions: one using a spline interpolation and one using a moving least square reconstruction. Through the comparison we assess both the conservation and accuracy of the data transfer obtained from each of the methods. We do so for a varying set of geometries with and without curvature and sharp features and for functions with and without smoothness and with varying gradients. Our results show that the mesh-based and mesh-free algorithms are complementary with cases where each was demonstrated to perform better than the other. We then focus on the mesh-free methods by developing a set of algorithms to parallelize them based on sparse linear algebra techniques. This includes a discussion of fast parallel radius searching in point clouds and restructuring the interpolation algorithms to leverage data structures and linear algebra services designed for large distributed computing environments. The scalability of our new algorithms is demonstrated on a leadership class computing facility using a set of basic scaling studies. These scaling studies show that for problems with reasonable load balance, our new algorithms for both spline interpolation and moving least square reconstruction demonstrate both strong and weak scalability using more than 100,000 MPI processes with billions of degrees of freedom in the data transfer operation.
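The spline-interpolation flavor of mesh-free transfer can be sketched in a few lines (a plain global Gaussian RBF on synthetic point clouds; the paper uses compactly supported radial basis functions and parallel sparse data structures, which are omitted here):

```python
import numpy as np

# Illustrative sketch of RBF-based mesh-free data transfer between two point
# clouds (a dense Gaussian RBF with a small regularization; an assumption for
# illustration, not the paper's compactly supported, parallel formulation):
def rbf_transfer(src, values, tgt, eps=10.0):
    def kernel(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-eps * d2)
    # solve for interpolation weights at the source points
    K = kernel(src, src) + 1e-8 * np.eye(len(src))
    weights = np.linalg.solve(K, values)
    # evaluate the interpolant at the target points
    return kernel(tgt, src) @ weights

rng = np.random.default_rng(1)
src = rng.uniform(0, 1, size=(300, 3))           # source point cloud
field = np.sin(np.pi * src[:, 0]) * src[:, 1]    # field known at source points
tgt = rng.uniform(0.1, 0.9, size=(100, 3))       # target (receiving) cloud
moved = rbf_transfer(src, field, tgt)
exact = np.sin(np.pi * tgt[:, 0]) * tgt[:, 1]
print(np.max(np.abs(moved - exact)))             # transfer error, small here
```

The dense solve above is exactly what the paper's sparse linear algebra restructuring avoids: compact support makes the kernel matrix sparse, so the same interpolation scales to distributed point clouds.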
Slattery, Stuart R.
2015-12-02
In this study we analyze and extend mesh-free algorithms for three-dimensional data transfer problems in partitioned multiphysics simulations. We first provide a direct comparison between a mesh-based weighted residual method using the common-refinement scheme and two mesh-free algorithms leveraging compactly supported radial basis functions: one using a spline interpolation and one using a moving least square reconstruction. Through the comparison we assess both the conservation and accuracy of the data transfer obtained from each of the methods. We do so for a varying set of geometries with and without curvature and sharp features and for functions with and without smoothness and with varying gradients. Our results show that the mesh-based and mesh-free algorithms are complementary with cases where each was demonstrated to perform better than the other. We then focus on the mesh-free methods by developing a set of algorithms to parallelize them based on sparse linear algebra techniques. This includes a discussion of fast parallel radius searching in point clouds and restructuring the interpolation algorithms to leverage data structures and linear algebra services designed for large distributed computing environments. The scalability of our new algorithms is demonstrated on a leadership class computing facility using a set of basic scaling studies. Finally, these scaling studies show that for problems with reasonable load balance, our new algorithms for both spline interpolation and moving least square reconstruction demonstrate both strong and weak scalability using more than 100,000 MPI processes with billions of degrees of freedom in the data transfer operation.
NASA Technical Reports Server (NTRS)
Holliday, Ezekiel S. (Inventor)
2014-01-01
Vibrations of a principal machine are reduced at the fundamental and harmonic frequencies by driving the drive motor of an active balancer with balancing signals at the fundamental and selected harmonics. Vibrations are sensed to provide a signal representing the mechanical vibrations. A balancing signal generator for the fundamental and for each selected harmonic processes the sensed vibration signal with adaptive filter algorithms of adaptive filters for each frequency to generate a balancing signal for each frequency. Reference inputs for each frequency are applied to the adaptive filter algorithms of each balancing signal generator at the frequency assigned to the generator. The harmonic balancing signals for all of the frequencies are summed and applied to drive the drive motor. The harmonic balancing signals drive the drive motor with a drive voltage component in opposition to the vibration at each frequency.
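The per-frequency balancing-signal generator described above can be sketched as a textbook two-weight LMS canceller (illustrative parameter values; not the patented implementation, which sums such generators across the fundamental and selected harmonics):

```python
import numpy as np

# Minimal sketch of one per-frequency adaptive balancing-signal generator
# (a textbook two-weight LMS notch; fs, f0, mu and amplitudes are assumed
# values for illustration): sine/cosine reference inputs at the target
# frequency are weighted adaptively so the balancing signal cancels the
# sensed vibration at that frequency.
fs, f0, mu, n = 8000, 50.0, 0.02, 40000
t = np.arange(n) / fs
vibration = np.sin(2 * np.pi * f0 * t + 0.7)      # sensed vibration signal

w = np.zeros(2)                                   # adaptive filter weights
ref = np.stack([np.sin(2 * np.pi * f0 * t),       # reference inputs at f0
                np.cos(2 * np.pi * f0 * t)])
residual = np.empty(n)
for k in range(n):
    balancing = w @ ref[:, k]       # balancing-signal sample
    e = vibration[k] - balancing    # residual vibration still sensed
    w += 2 * mu * e * ref[:, k]     # LMS update drives the residual to zero
    residual[k] = e

print(abs(residual[-1000:]).max())  # residual vibration after convergence
```

In the patent's arrangement, one such generator runs per selected harmonic and their outputs are summed to drive the balancer's motor; the single-frequency loop above is just the core adaptation step.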
NASA Technical Reports Server (NTRS)
Malcolm, G. N.
1981-01-01
Two wind tunnel techniques for determining part of the aerodynamic information required to describe the dynamic behavior of various types of vehicles in flight are described. Force and moment measurements are determined with a rotary-balance apparatus in a coning motion and with a Magnus balance in a high-speed spinning motion. Coning motion is pertinent to both aircraft and missiles, and spinning is important for spin-stabilized missiles. Basic principles of both techniques are described, and specific examples of each type of apparatus are presented. Typical experimental results are also discussed.
The Balanced Billing Cycle Vehicle Routing Problem
Groer, Christopher S; Golden, Bruce; Edward, Wasil
2009-01-01
Utility companies typically send their meter readers out each day of the billing cycle in order to determine each customer's usage for the period. Customer churn requires the utility company to periodically remove some customer locations from its meter-reading routes. On the other hand, the addition of new customers and locations requires the utility company to add new stops to the existing routes. A utility that does not adjust its meter-reading routes over time can find itself with inefficient routes and, subsequently, higher meter-reading costs. Furthermore, the utility can end up with certain billing days that require substantially larger meter-reading resources than others. However, remedying this problem is not as simple as it may initially seem. Certain regulatory and customer service considerations can prevent the utility from shifting a customer's billing day by more than a few days in either direction. Thus, the problem of reducing the meter-reading costs and balancing the workload can become quite difficult. We describe this Balanced Billing Cycle Vehicle Routing Problem in more detail and develop an algorithm for providing solutions to a slightly simplified version of the problem. Our algorithm uses a combination of heuristics and integer programming via a three-stage algorithm. We discuss the performance of our procedure on a real-world data set.
Stochastic solution of population balance equations for reactor networks
Menz, William J.; Akroyd, Jethro; Kraft, Markus
2014-01-01
This work presents a sequential modular approach to solve a generic network of reactors with a population balance model using a stochastic numerical method. Full coupling to the gas phase is achieved through operator splitting. The convergence of the stochastic particle algorithm in test networks is evaluated as a function of network size, recycle fraction and numerical parameters. These test cases are used to identify methods through which systematic and statistical error may be reduced, including by use of stochastic weighted algorithms. The optimal algorithm was subsequently used to solve a one-dimensional example of silicon nanoparticle synthesis using a multivariate particle model. This example demonstrated the power of stochastic methods in resolving particle structure by investigating the transient and spatial evolution of primary polydispersity, degree of sintering and TEM-style images. Highlights: • An algorithm is presented to solve reactor networks with a population balance model. • A stochastic method is used to solve the population balance equations. • The convergence and efficiency of the reported algorithms are evaluated. • The algorithm is applied to simulate silicon nanoparticle synthesis in a 1D reactor. • Particle structure is reported as a function of reactor length and time.
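The stochastic particle idea can be illustrated with a minimal constant-kernel coagulation simulation (a Gillespie-style toy, not the paper's stochastic weighted algorithm or operator-splitting scheme):

```python
import random

# Toy stochastic particle method for a population balance (constant-kernel
# coagulation; kernel value, particle count and end time are illustrative):
# pairs of particles merge at exponentially distributed waiting times, so the
# particle ensemble itself represents the evolving size distribution.
def coagulate(n0=1000, kernel=1.0, t_end=1.0, seed=4):
    rng = random.Random(seed)
    particles = [1.0] * n0                  # initial monomer "sizes"
    t = 0.0
    while len(particles) > 1:
        n = len(particles)
        rate = kernel * n * (n - 1) / 2     # total coagulation rate
        t += rng.expovariate(rate)          # time to the next event
        if t > t_end:
            break
        i, j = rng.sample(range(n), 2)      # pick a random pair to merge
        particles[i] += particles[j]
        particles.pop(j)
    return particles

p = coagulate()
print(len(p), sum(p))   # particle number shrinks; total mass is conserved
```

The same loop generalizes to multivariate particle models (size plus degree of sintering, etc.) by storing richer particle states, which is what makes the stochastic approach attractive for resolving particle structure.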
NASA LaRC Strain Gage Balance Design Concepts
NASA Technical Reports Server (NTRS)
Rhew, Ray D.
1999-01-01
The NASA Langley Research Center (LaRC) has been designing strain-gage balances for more than fifty years. These balances have been utilized in Langley's wind tunnels, which span a wide variety of aerodynamic test regimes, as well as in other ground-based test facilities and in space flight applications. As a result, the designs encompass a large array of sizes, loads, and environmental effects. Currently Langley has more than 300 balances available for its researchers. This paper will focus on the design concepts for internal sting-mounted strain-gage balances. However, these techniques can be applied to all force measurement design applications. Strain-gage balance concepts that have been developed over the years, including material selection, sting and model interfaces, measuring sections, fabrication, strain-gaging, and calibration, will be discussed.
Development of the NTF-117S Semi-Span Balance
NASA Technical Reports Server (NTRS)
Lynn, Keith C.
2010-01-01
A new high-capacity semi-span force and moment balance has recently been developed for use at the National Transonic Facility at the NASA Langley Research Center. This new semi-span balance provides the NTF a new measurement capability that will support testing of semi-span test models at transonic high-lift testing regimes. Future testing utilizing this new balance capability will include active circulation control and propulsion simulation testing of semi-span transonic wing models. The NTF has recently implemented a new high-pressure air delivery station that will provide both high and low mass flow pressure lines that are routed out to the semi-span models via a set of high/low-pressure bellows that are indirectly linked to the metric end of the NTF-117S balance. A new check-load stand is currently being developed to provide the NTF with an in-house capability for performing check-loads on the NTF-117S balance in order to determine the pressure tare effects on the overall performance of the balance. An experimental design is being developed that will allow the static pressure tare effects on balance performance to be assessed experimentally.
Calibration Variable Selection and Natural Zero Determination for Semispan and Canard Balances
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert M.
2013-01-01
Independent calibration variables for the characterization of semispan and canard wind tunnel balances are discussed. It is shown that the variable selection for a semispan balance is determined by the location of the resultant normal and axial forces that act on the balance. These two forces are the first and second calibration variable. The pitching moment becomes the third calibration variable after the normal and axial forces are shifted to the pitch axis of the balance. Two geometric distances, i.e., the rolling and yawing moment arms, are the fourth and fifth calibration variable. They are traditionally substituted by corresponding moments to simplify the use of calibration data during a wind tunnel test. A canard balance is related to a semispan balance. It also only measures loads on one half of a lifting surface. However, the axial force and yawing moment are of no interest to users of a canard balance. Therefore, its calibration variable set is reduced to the normal force, pitching moment, and rolling moment. The combined load diagrams of the rolling and yawing moment for a semispan balance are discussed. They may be used to illustrate connections between the wind tunnel model geometry, the test section size, and the calibration load schedule. Then, methods are reviewed that may be used to obtain the natural zeros of a semispan or canard balance. In addition, characteristics of three semispan balance calibration rigs are discussed. Finally, basic requirements for a full characterization of a semispan balance are reviewed.
ERIC Educational Resources Information Center
Our Children, 1997
1997-01-01
Changes in the workplace that would provide flexibility for working parents are slowly developing and receiving government, business, and societal attention. A sidebar, "Mother, Professional, Volunteer: One Woman's Balancing Act," presents an account of how one woman rearranged her professional life to enable her to do full-time parenting. (SM)
Maintaining an Environmental Balance
ERIC Educational Resources Information Center
Environmental Science and Technology, 1976
1976-01-01
A recent conference of the National Environmental Development Association focused on the concepts of environment, energy and economy and underscored the necessity for balancing the critical needs embodied in these issues. Topics discussed included: nuclear energy and wastes, water pollution control, federal regulations, environmental technology…
ERIC Educational Resources Information Center
Savoy, L. G.
1988-01-01
Describes a study of students' ability to balance equations. Answers to a test on this topic were analyzed to determine the level of understanding and processes used by the students. Presented is a method to teach this skill to high school chemistry students. (CW)
NASA Astrophysics Data System (ADS)
Kułakowski, Krzysztof; Gawroński, Przemysław; Gronek, Piotr
The Heider balance (HB) is investigated in a fully connected graph of N nodes. The links are described by a real symmetric array r(i,j), i, j = 1, …, N. In a social group, nodes represent group members and links represent relations between them, positive (friendly) or negative (hostile). At the balanced state, r(i,j) r(j,k) r(k,i) > 0 for all triads (i,j,k). As follows from the structure theorem of Cartwright and Harary, at this state the group is divided into two subgroups, with friendly internal relations and hostile relations between the subgroups. Here the system dynamics is proposed to be determined by a set of differential equations, ṙ = r·r, where the dot denotes the time derivative and r·r the matrix product. The form of the equations guarantees that once HB is reached, it persists. Also, for N = 3 the dynamics properly reproduces the tendency of the system toward the balanced state. The equations are solved numerically. Initially, the r(i,j) are random numbers distributed around zero with a symmetric uniform distribution of unit width. Calculations up to N = 500 show that HB is always reached. The time τ(N) to reach the balanced state varies with the system size N as N⁻¹/². The spectrum of relations, initially narrow, gets very wide near HB. This means that the relations are strongly polarized. In our calculations, the relations are limited to a given range around zero. With this limitation, our results can be helpful in the interpretation of some statistical data.
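The dynamics and triad condition above can be reproduced numerically in a few lines (Euler integration with the relations clipped to an assumed range, mirroring the limitation mentioned in the text; N, the bound, and the step size are illustrative):

```python
import numpy as np

# Numerical sketch of the described dynamics: Euler steps of r' = r·r on a
# symmetric relation matrix, with entries clipped to [-R, R] (parameter
# values here are assumptions for illustration, not the paper's).
rng = np.random.default_rng(3)
N, R, dt = 20, 5.0, 1e-3
r = rng.uniform(-0.5, 0.5, size=(N, N))
r = np.triu(r, 1)
r = r + r.T                                # symmetric, zero diagonal

for _ in range(20000):
    r = np.clip(r + dt * (r @ r), -R, R)   # r' = r·r, limited range
    np.fill_diagonal(r, 0)

# Heider balance check: every triad (i,j,k) satisfies r_ij r_jk r_ki > 0
s = np.sign(r)
balanced = all(s[i, j] * s[j, k] * s[k, i] > 0
               for i in range(N)
               for j in range(i + 1, N)
               for k in range(j + 1, N))
print(balanced)
```

Because r stays symmetric under the update ((r·r)ᵀ = r·r), the clipped trajectory saturates at polarized relations, and the final sign pattern can be tested directly against the triad condition.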
ERIC Educational Resources Information Center
Bray, George A.
1985-01-01
Explains relationships between energy intake and expenditure focusing on the cellular, chemical and neural mechanisms involved in regulation of energy balance. Information is referenced specifically to conditions of obesity. (Physicians may earn continuing education credit by completing an appended test). (ML)
ERIC Educational Resources Information Center
Lewis, Tamika; Mobley, Mary; Huttenlock, Daniel
2013-01-01
It's the season for the job hunt, whether one is looking for their first job or taking the next step along their career path. This article presents first-person accounts to see how teachers balance the rewards and challenges of working in different types of schools. Tamica Lewis, a third-grade teacher, states that faculty at her school is…
Toward Balance in Translation.
ERIC Educational Resources Information Center
Costello, Nancy A.
A study compared translations of biblical passages into different languages in Papua New Guinea. The study looked for evidence of balance between literal and free interpretation in translation style in the gospel of Mark, which is narrative and didactic material, in 12 languages, and the mainly hortatory genre in translations of 4 epistles:…