Load-balancing algorithms for climate models
NASA Astrophysics Data System (ADS)
Foster, I. T.; Toonen, B. R.
Implementations of climate models on scalable parallel computer systems can suffer from load imbalances due to temporal and spatial variations in the amount of computation required for physical parameterizations such as solar radiation and convective adjustment. We have developed specialized techniques for correcting such imbalances. These techniques are incorporated in a general-purpose, programmable load-balancing library that allows the mapping of computation to processors to be specified as a series of maps generated by a programmer-supplied load-balancing module. The communication required to move from one map to another is performed automatically by the library, without programmer intervention. In this paper, we describe the load-balancing problem and the techniques that we have developed to solve it. We also describe specific load-balancing algorithms that we have developed for PCCM2, a scalable parallel implementation of the Community Climate Model, and present experimental results that demonstrate the effectiveness of these algorithms on parallel computers.
Adaptive Load-Balancing Algorithms Using Symmetric Broadcast Networks
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Biswas, Rupak; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
In a distributed-computing environment, it is important to ensure that the processor workloads are adequately balanced. Among numerous load-balancing algorithms, a unique approach due to Das and Prasad defines a symmetric broadcast network (SBN) that provides a robust communication pattern among the processors in a topology-independent manner. In this paper, we propose and analyze three novel SBN-based load-balancing algorithms, and implement them on an SP2. A thorough experimental study with Poisson-distributed synthetic loads demonstrates that these algorithms are very effective in balancing system load while minimizing processor idle time. They also compare favorably with several other existing load-balancing techniques. Additional experiments performed with real data demonstrate that the SBN approach is effective in adaptive computational science and engineering applications where dynamic load balancing is extremely crucial.
Adaptive Load-Balancing Algorithms using Symmetric Broadcast Networks
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Biegel, Bryan A. (Technical Monitor)
2002-01-01
In a distributed computing environment, it is important to ensure that the processor workloads are adequately balanced. Among numerous load-balancing algorithms, a unique approach due to Das and Prasad defines a symmetric broadcast network (SBN) that provides a robust communication pattern among the processors in a topology-independent manner. In this paper, we propose and analyze three efficient SBN-based dynamic load-balancing algorithms, and implement them on an SGI Origin2000. A thorough experimental study with Poisson-distributed synthetic loads demonstrates that our algorithms are effective in balancing system load. By optimizing completion time and idle time, the proposed algorithms are shown to compare favorably with several existing approaches.
Dynamic load balance scheme for the DSMC algorithm
Li, Jin; Geng, Xiangren; Jiang, Dingwu; Chen, Jianqiang
2014-12-09
The direct simulation Monte Carlo (DSMC) algorithm, devised by Bird, has been applied to a wide range of rarefied flow problems over the past 40 years. While DSMC is well suited to parallel implementation on powerful multi-processor architectures, it also introduces a large load imbalance across the processor array, even for small examples. The load imposed on a processor by a DSMC calculation is determined largely by the total number of simulator particles it holds. Since most flows are started impulsively from an initial particle distribution that is quite different from the steady state, the number of simulator particles changes dramatically, and a load balance based on the initial distribution breaks down as the flow approaches steady state. This load imbalance, together with the large computational cost of DSMC, has limited its application to rarefied or simple transitional flows. In this paper, by taking advantage of METIS, a software package for partitioning unstructured graphs, and using the number of simulator particles in each cell as the weight information, the domain is repartitioned so that each processor handles approximately the same total number of simulator particles. The computation pauses several times to update the particle counts on each processor and repartition the whole domain, so load balance across the processor array holds for the duration of the computation and parallel efficiency is improved effectively. The benchmark case of a cylinder submerged in hypersonic flow has been simulated numerically, as has hypersonic flow past a complex wing-body configuration. The results show that, in both cases, the computational time can be reduced by about 50%.
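The repartitioning principle above (assign cells so that every processor holds about the same number of simulator particles) can be sketched without METIS itself. The greedy assignment below is a hypothetical stand-in: it balances particle counts but, unlike a METIS repartition, ignores cell adjacency and hence communication cost between subdomains.

```python
import heapq

def repartition(cell_particle_counts, num_procs):
    """Assign cells to processors so each holds roughly the same total
    number of simulator particles (greedy, heaviest cells first).
    Ignores cell adjacency, so it balances load but not communication."""
    heap = [(0, p) for p in range(num_procs)]     # (current load, processor)
    heapq.heapify(heap)
    assignment = {}
    for cell in sorted(cell_particle_counts,
                       key=cell_particle_counts.get, reverse=True):
        load, proc = heapq.heappop(heap)          # least-loaded processor
        assignment[cell] = proc
        heapq.heappush(heap, (load + cell_particle_counts[cell], proc))
    return assignment
```

Rerunning this whenever the particle distribution has drifted from the last partition mirrors the paper's periodic pause-and-repartition step.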
A Universal Tare Load Prediction Algorithm for Strain-Gage Balance Calibration Data Analysis
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2011-01-01
An algorithm is discussed that may be used to estimate tare loads of wind tunnel strain-gage balance calibration data. The algorithm was originally developed by R. Galway of IAR/NRC Canada and has been described in the literature for the iterative analysis technique. The basic ideas of Galway's algorithm, however, are universally applicable and work for both the iterative and the non-iterative analysis techniques. A recent modification of Galway's algorithm is presented that improves the convergence behavior of the tare load prediction process when it is used in combination with the non-iterative analysis technique. The modified algorithm allows an analyst to use an alternate method for the calculation of intermediate non-linear tare load estimates whenever Galway's original approach does not lead to convergence of the tare load iterations. It is also shown in detail how Galway's algorithm may be applied to the non-iterative analysis technique. Hand load data from the calibration of a six-component force balance are used to illustrate the application of the original and modified tare load prediction methods. During the analysis of the data, both the iterative and the non-iterative analysis techniques were applied. Overall, predicted tare loads for combinations of the two tare load prediction methods and the two balance data analysis techniques showed excellent agreement as long as the tare load iterations converged. The modified algorithm, however, appears to have an advantage over the original algorithm when absolute voltage measurements of gage outputs are processed using the non-iterative analysis technique. In these situations only the modified algorithm converged, because it uses an exact solution of the intermediate non-linear tare load estimate for the tare load iteration.
Dynamic load balancing algorithm for molecular dynamics based on Voronoi cells domain decompositions
Fattebert, J.-L.; Richards, D.F.; Glosli, J.N.
2012-12-01
We present a new algorithm for automatic parallel load balancing in classical molecular dynamics. It assumes a spatial domain decomposition of particles into Voronoi cells. It is a gradient method which attempts to minimize a cost function by displacing Voronoi sites associated with each processor/sub-domain along steepest descent directions. Excellent load balance has been obtained for quasi-2D and 3D practical applications, with up to 440·10^{6} particles on 65,536 MPI tasks.
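The cell-ownership and cost-evaluation step of such a scheme can be sketched in a few lines of NumPy: particles belong to the nearest Voronoi site, and a scalar imbalance cost is computed over the resulting per-domain counts. The variance cost used here is an assumed stand-in (the abstract does not give the paper's actual cost function); the full method displaces the sites along descent directions of such a cost.

```python
import numpy as np

def domain_counts(sites, particles):
    """Number of particles owned by each Voronoi cell (nearest-site rule)."""
    dists = np.linalg.norm(particles[:, None, :] - sites[None, :, :], axis=2)
    owner = dists.argmin(axis=1)          # index of the closest site per particle
    return np.bincount(owner, minlength=len(sites))

def imbalance(sites, particles):
    """Scalar cost: variance of per-domain particle counts (0 = perfect balance)."""
    return float(np.var(domain_counts(sites, particles)))
```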
Load Balancing Scientific Applications
Pearce, Olga Tkachyshyn
2014-12-01
The largest supercomputers have millions of independent processors, and concurrency levels are rapidly increasing. For ideal efficiency, developers of the simulations that run on these machines must ensure that computational work is evenly balanced among processors. Assigning work evenly is challenging because many large modern parallel codes simulate behavior of physical systems that evolve over time, and their workloads change over time. Furthermore, the cost of imbalanced load increases with scale because most large-scale scientific simulations today use a Single Program Multiple Data (SPMD) parallel programming model, and an increasing number of processors will wait for the slowest one at the synchronization points. To address load imbalance, many large-scale parallel applications use dynamic load balance algorithms to redistribute work evenly. The research objective of this dissertation is to develop methods to decide when and how to load balance the application, and to balance it effectively and affordably. We measure and evaluate the computational load of the application, and develop strategies to decide when and how to correct the imbalance. Depending on the simulation, a fast, local load balance algorithm may be suitable, or a more sophisticated and expensive algorithm may be required. We developed a model for comparison of load balance algorithms for a specific state of the simulation that enables the selection of a balancing algorithm that will minimize overall runtime.
A Multiconstrained Grid Scheduling Algorithm with Load Balancing and Fault Tolerance.
Keerthika, P; Suresh, P
2015-01-01
A grid environment consists of millions of dynamic and heterogeneous resources. A grid that deals with computing resources is a computational grid, intended for applications involving large computations. A scheduling algorithm is efficient only if it performs good resource allocation even in the case of resource failure. Allocation of resources is a challenging issue, since it must consider several requirements such as system load, processing cost and time, the user's deadline, and resource failure. This work designs a resource allocation algorithm that is budget constrained and also targets load balancing, fault tolerance, and user satisfaction by considering the above requirements. The proposed Multiconstrained Load Balancing Fault Tolerant algorithm (MLFT) reduces the schedule makespan, schedule cost, and task failure rate, and improves resource utilization. The proposed MLFT algorithm is evaluated using the GridSim toolkit, and the results are compared with recent algorithms that concentrate on these factors separately. The comparison results show that the proposed algorithm outperforms its counterparts.
A novel load-balanced fixed routing (LBFR) algorithm for wavelength routed optical networks
NASA Astrophysics Data System (ADS)
Shen, Gangxiang; Li, Yongcheng; Peng, Limei
2011-11-01
In wavelength-routed optical transport networks, fixed shortest path routing is one of the major lightpath service provisioning strategies, valued for its simplicity in network control and operation. Once a shortest route is found for a node pair, that route is used for all future lightpath service provisioning, so the network control and management system need not maintain an active network-wide link state database. On the other hand, fixed shortest path routing suffers from unbalanced network traffic load distribution and network congestion because it always employs the same fixed shortest route between each pair of nodes. To avoid network congestion while retaining operational simplicity, in this study we develop a Load-Balanced Fixed Routing (LBFR) algorithm. Through a training process based on a forecasted network traffic load matrix, the proposed algorithm finds one (or a few) fixed route(s) for each node pair and employs them for lightpath service provisioning. Unlike the fixed shortest path routes, these routes balance traffic load well within the network when used for lightpath service provisioning. Compared to the traditional fixed shortest path routing algorithm, LBFR achieves much better lightpath blocking performance according to our simulation and analytical studies, and the performance improvement grows with network nodal degree.
Load Balancing and Data Locality in the Parallelization of the Fast Multipole Algorithm
NASA Astrophysics Data System (ADS)
Banicescu, Ioana
Scientific problems are often irregular, large and computationally intensive. Efficient parallel implementations of algorithms that are employed in finding solutions to these problems play an important role in the development of science. This thesis studies the parallelization of a certain class of irregular scientific problems, the N-body problem, using a classical hierarchical algorithm: the Fast Multipole Algorithm (FMA). Hierarchical N-body algorithms in general, and the FMA in particular, are amenable to parallel execution. However, performance gains are difficult to obtain, due to load imbalances that are primarily caused by the irregular distribution of bodies and of computation domains. Understanding application characteristics is essential for obtaining high performance implementations on parallel machines. After surveying the available parallelism in the FMA, we address the problem of exploiting this parallelism with partitioning and scheduling techniques that optimally map it onto a parallel machine, the KSR1. The KSR1 is a parallel shared address-space machine with a hierarchical cache-only architecture. The tension between maintaining data locality and balancing processor loads requires a scheduling scheme that combines static techniques (that exploit data locality) with dynamic techniques (that improve load balancing). An effective combined scheduling scheme that balances processor loads and maintains locality, by exploiting self-similarity properties of fractals, is Fractiling. Fractiling is based on a probabilistic analysis. It thus accommodates load imbalances caused by predictable events (such as irregular data) as well as unpredictable events (such as data access latency). Fractiling adapts to algorithmic and system induced load imbalances while maximizing data locality. We used Fractiling to schedule a parallel FMA on the KSR1. Our parallel 2-d and 3-d FMA implementations were run using uniform and nonuniform data set distributions under a
Multidimensional spectral load balancing
Hendrickson, B.; Leland, R.
1993-01-01
We describe an algorithm for the static load balancing of scientific computations that generalizes and improves upon spectral bisection. Through a novel use of multiple eigenvectors, our new spectral algorithm can divide a computation into 4 or 8 pieces at once. These multidimensional spectral partitioning algorithms generate balanced partitions that have lower communication overhead and are less expensive to compute than those produced by spectral bisection. In addition, they automatically work to minimize message contention on a hypercube or mesh architecture. These spectral partitions are further improved by a multidimensional generalization of the Kernighan-Lin graph partitioning algorithm. Results on several computational grids are given and compared with other popular methods.
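The core idea (label vertices by the sign pattern of two Laplacian eigenvectors to cut the graph into four parts at once) can be sketched in a few lines of Python. Sign-based splitting is a simplification of the paper's method: the actual algorithm additionally balances part sizes and applies the multidimensional Kernighan-Lin refinement, and it assumes a connected graph.

```python
import numpy as np

def spectral_partition4(adj):
    """Divide a graph into 4 parts at once using the two Laplacian
    eigenvectors with the smallest nonzero eigenvalues: each vertex is
    labeled by the sign pattern (quadrant) of its two coordinates."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj                       # graph Laplacian L = D - A
    _, vecs = np.linalg.eigh(lap)         # eigenvalues in ascending order
    u, v = vecs[:, 1], vecs[:, 2]         # skip the constant eigenvector
    return [int(2 * (ui >= 0) + (vi >= 0)) for ui, vi in zip(u, v)]
```

On a path graph the two eigenvectors are low-frequency cosines, so the quadrant labels split the path into four contiguous, equal-sized runs.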
A De-centralized Scheduling and Load Balancing Algorithm for Heterogeneous Grid Environments
NASA Technical Reports Server (NTRS)
Arora, Manish; Das, Sajal K.; Biswas, Rupak
2002-01-01
In the past two decades, numerous scheduling and load balancing techniques have been proposed for locally distributed multiprocessor systems. However, they all suffer from significant deficiencies when extended to a Grid environment: some use a centralized approach that renders the algorithm unscalable, while others assume the overhead involved in searching for appropriate resources to be negligible. Furthermore, classical scheduling algorithms do not consider a Grid node to be N-resource rich and merely work towards maximizing the utilization of one of the resources. In this paper, we propose a new scheduling and load balancing algorithm for a generalized Grid model of N-resource nodes that not only takes into account the node and network heterogeneity, but also considers the overhead involved in coordinating among the nodes. Our algorithm is decentralized, scalable, and overlaps the node coordination time with that of the actual processing of ready jobs, thus saving valuable clock cycles needed for making decisions. The proposed algorithm is studied by conducting simulations using the Message Passing Interface (MPI) paradigm.
A De-Centralized Scheduling and Load Balancing Algorithm for Heterogeneous Grid Environments
NASA Technical Reports Server (NTRS)
Arora, Manish; Das, Sajal K.; Biswas, Rupak; Biegel, Bryan (Technical Monitor)
2002-01-01
In the past two decades, numerous scheduling and load balancing techniques have been proposed for locally distributed multiprocessor systems. However, they all suffer from significant deficiencies when extended to a Grid environment: some use a centralized approach that renders the algorithm unscalable, while others assume the overhead involved in searching for appropriate resources to be negligible. Furthermore, classical scheduling algorithms do not consider a Grid node to be N-resource rich and merely work towards maximizing the utilization of one of the resources. In this paper we propose a new scheduling and load balancing algorithm for a generalized Grid model of N-resource nodes that not only takes into account the node and network heterogeneity, but also considers the overhead involved in coordinating among the nodes. Our algorithm is de-centralized, scalable, and overlaps the node coordination time with that of the actual processing of ready jobs, thus saving valuable clock cycles needed for making decisions. The proposed algorithm is studied by conducting simulations using the Message Passing Interface (MPI) paradigm.
Devi, D Chitra; Uthariaraj, V Rhymend
2016-01-01
Cloud computing uses the concepts of scheduling and load balancing to migrate tasks to underutilized VMs for effectively sharing the resources. The scheduling of nonpreemptive tasks in the cloud computing environment is an irrecoverable constraint, and hence each task has to be assigned to the most appropriate VM at the initial placement itself. In practice, arriving jobs consist of multiple interdependent tasks, and their independent tasks may execute on multiple VMs or on multiple cores of the same VM. Also, jobs arrive during the run time of the server at varying random intervals under various load conditions. The participating heterogeneous resources are managed by allocating the tasks to appropriate resources through static or dynamic scheduling, which makes cloud computing more efficient and thus improves user satisfaction. The objective of this work is to introduce and evaluate the proposed scheduling and load balancing algorithm by considering the capabilities of each virtual machine (VM), the task length of each requested job, and the interdependency of multiple tasks. The performance of the proposed algorithm is studied by comparing it with the existing methods.
Zhou, Xiuze; Lin, Fan; Yang, Lvqing; Nie, Jing; Tan, Qian; Zeng, Wenhua; Zhang, Nian
2016-01-01
With the continuous expansion of cloud computing platforms and the rapid growth of users and applications, how to use system resources efficiently to improve the overall performance of cloud computing has become a crucial issue. To address this issue, this paper proposes a method that uses an analytic hierarchy process group decision (AHPGD) to evaluate the load state of server nodes. Training was carried out using a hybrid hierarchical genetic algorithm (HHGA) to optimize a radial basis function neural network (RBFNN). The AHPGD produces an aggregate load indicator for the virtual machines in the cloud, which serves as the input to the RBFNN predictor. This paper also proposes a new dynamic load balancing scheduling algorithm combined with a weighted round-robin algorithm: it uses the periodically predicted load values of the nodes, based on AHPGD and the HHGA-optimized RBFNN, to calculate the corresponding node weights and update them continuously. The scheme thereby keeps the advantages of the static weighted round-robin algorithm while avoiding its shortcomings.
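The scheduling side of such a scheme (predicted node loads mapped to round-robin weights that are refreshed each period) can be sketched as follows. The RBFNN/AHPGD predictor is replaced by a plain dict of predicted loads, and the weight formula is an illustrative assumption, not the paper's.

```python
def update_weights(predicted_loads, base=100):
    """Map each node's predicted load (0 = idle .. 1 = saturated) to a
    round-robin weight: lightly loaded nodes receive more slots."""
    return {node: max(1, int(base * (1.0 - load)))
            for node, load in predicted_loads.items()}

def weighted_round_robin(weights):
    """Yield node ids in proportion to their weights (plain expansion;
    production schedulers interleave more smoothly)."""
    while True:
        for node, w in weights.items():
            for _ in range(w):
                yield node
```

Calling update_weights again with fresh predictions, and restarting the generator, gives the "constant updates" behavior the abstract describes.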
Dynamic localized load balancing
NASA Astrophysics Data System (ADS)
Balandin, Sergey I.; Heiner, Andreas P.
2003-08-01
Traditionally, dynamic load balancing is applied in resource-reserved, connection-oriented networks with a large degree of managed control. Load balancing in connectionless networks is rather rudimentary: it is either static or requires network-wide load information. This paper presents a fully automated, traffic-driven dynamic load balancing mechanism that uses only local load information. The proposed mechanism is easily deployed in a multi-vendor environment in which only a subset of routers supports the function. The Dynamic Localized Load Balancing (DLLB) mechanism distributes traffic based on two sets of weights. The first set is fixed and inversely proportional to the path cost, typically the sum of reciprocal bandwidths along the path. The second weight reflects the utilization of the link to the first next hop along the path, and is therefore variable. The ratio of the static weights defines the ideal load distribution; the ratio of the variable weights gives the node-local estimate of the actual distribution. By minimizing the difference between the variable and fixed ratios, the traffic distribution becomes optimal given the available node-local knowledge. This mechanism significantly increases throughput and decreases delay from a network-wide perspective. Optionally, the variable weight can include load information from downstream nodes to prevent congestion on those nodes; this further improves network performance and is easily implemented on top of standard OSPF signaling. The mechanism requires few node resources and can be implemented on existing router platforms.
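The per-packet decision can be sketched as follows: fixed weights inversely proportional to path cost define the ideal split, locally observed byte counts stand in for the variable (utilization) weights, and traffic goes to the path furthest below its ideal share. The names and the byte-counter interface are illustrative assumptions, not the paper's exact formulation.

```python
def pick_path(path_costs, bytes_sent):
    """DLLB-style next-hop choice: fixed weights (inverse path cost)
    define the ideal traffic split, node-local byte counters give the
    observed split, and the next packet goes to the path whose observed
    share lags its ideal share the most."""
    fixed = {p: 1.0 / c for p, c in path_costs.items()}
    total_fixed = sum(fixed.values())
    total_sent = sum(bytes_sent.values()) or 1   # avoid divide-by-zero at start
    deficit = {p: fixed[p] / total_fixed - bytes_sent[p] / total_sent
               for p in path_costs}
    return max(deficit, key=deficit.get)
```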
Multidimensional spectral load balancing
Hendrickson, Bruce A.; Leland, Robert W.
1996-12-24
A method of and apparatus for graph partitioning involving the use of a plurality of eigenvectors of the Laplacian matrix of the graph of the problem for which load balancing is desired. The invention is particularly useful for optimizing parallel computer processing of a problem and for minimizing total pathway lengths of integrated circuits in the design stage.
NASA Astrophysics Data System (ADS)
Rossi, Francesco; Londrillo, Pasquale; Sgattoni, Andrea; Sinigardi, Stefano; Turchetti, Giorgio
2012-12-01
We present `jasmine', an implementation of a fully relativistic, 3D, electromagnetic Particle-In-Cell (PIC) code capable of running simulations of various laser plasma acceleration regimes on Graphics Processing Unit (GPU) HPC clusters. Standard energy/charge-preserving FDTD-based algorithms have been implemented using double precision and quadratic (or arbitrary-order) shape functions for the particle weighting. When porting a PIC scheme to the GPU architecture (or, in general, a shared memory environment), the particle-to-grid operations (e.g. the evaluation of the current density) require special care to avoid memory inconsistencies and conflicts. Here we present a robust implementation of this operation that is efficient for any number of particles per cell and any particle shape function order. Our algorithm exploits the exposed GPU memory hierarchy and avoids the use of atomic operations, which can hurt performance, especially when many particles lie in the same cell. We show multi-GPU scalability results for the code and present a dynamic load-balancing algorithm. The code is written using a Python-based C++ meta-programming technique, which translates into a high level of modularity and allows for easy performance tuning and simple extension of the core algorithms to various simulation schemes.
Control Allocation with Load Balancing
NASA Technical Reports Server (NTRS)
Bodson, Marc; Frost, Susan A.
2009-01-01
Next generation aircraft with a large number of actuators will require advanced control allocation methods to compute the actuator commands needed to follow desired trajectories while respecting system constraints. Previously, algorithms were proposed to minimize the l1 or l2 norms of the tracking error and of the actuator deflections. The paper discusses the alternative choice of the l(infinity) norm, or sup norm. Minimization of the control effort translates into minimization of the maximum actuator deflection (min-max optimization). The paper shows how the problem can be solved effectively by converting it into a linear program and solving it with a simplex algorithm. Properties of the algorithm are also investigated through examples. In particular, the min-max criterion results in a type of load balancing, where the load is the desired command and the algorithm balances this load among the various actuators. The solution using the l(infinity) norm also results in better robustness to failures and lower sensitivity to nonlinearities in illustrative examples.
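For a single controlled axis the min-max allocation has a closed form that makes the load-balancing interpretation explicit: every effective actuator is driven at the same magnitude t = |d| / Σ|b_i|, signed so that each contributes toward the command d. The general multi-axis case requires the linear program the paper describes; this closed form is a special-case illustration, not the paper's algorithm.

```python
import numpy as np

def minmax_alloc_single_axis(b, d):
    """l-infinity optimal allocation for one axis: minimize max|u_i|
    subject to b @ u = d.  Every effective actuator runs at the same
    magnitude t = |d| / sum|b_i|, signed to push toward the command,
    so the 'load' d is balanced evenly across the actuators."""
    b = np.asarray(b, dtype=float)
    t = abs(d) / np.abs(b).sum()
    return np.sign(d) * np.sign(b) * t
```

With effectiveness b = [1, 2, 1] and command d = 4, every actuator deflects by exactly 1: no smaller maximum deflection can produce the command, since Σ|b_i| = 4.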
Balancing Loads Among Parallel Data Processors
NASA Technical Reports Server (NTRS)
Baffes, Paul Thomas
1990-01-01
Heuristic algorithm minimizes amount of memory used by multiprocessor system. Distributes load of many identical, short computations among multiple parallel digital data processors, each of which has its own (local) memory. Each processor operates on distinct and independent set of data in larger shared memory. As integral part of load-balancing scheme, total amount of space used in shared memory is minimized. Possible applications include artificial neural networks or image processors for which "pipeline" and vector methods of load balancing are inappropriate.
Dynamic load balancing of applications
Wheat, S.R.
1997-05-13
An application-level method for dynamically maintaining global load balance on a parallel computer, particularly on massively parallel MIMD computers is disclosed. Global load balancing is achieved by overlapping neighborhoods of processors, where each neighborhood performs local load balancing. The method supports a large class of finite element and finite difference based applications and provides an automatic element management system to which applications are easily integrated. 13 figs.
Dynamic load balancing of applications
Wheat, Stephen R.
1997-01-01
An application-level method for dynamically maintaining global load balance on a parallel computer, particularly on massively parallel MIMD computers. Global load balancing is achieved by overlapping neighborhoods of processors, where each neighborhood performs local load balancing. The method supports a large class of finite element and finite difference based applications and provides an automatic element management system to which applications are easily integrated.
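The overlapping-neighborhood idea can be reduced to a toy averaging rule: each neighborhood evens out the load of its members, and because neighborhoods share processors, repeated local rounds spread load globally without any processor needing global information. Real applications migrate discrete finite elements rather than continuous load, so this is only an illustrative sketch.

```python
def local_balance_step(loads, neighborhoods):
    """One round of neighborhood-local balancing: each neighborhood
    redistributes its total load evenly among its member processors.
    Overlap between neighborhoods propagates load across the machine."""
    new = list(loads)
    for nbhd in neighborhoods:
        avg = sum(new[p] for p in nbhd) / len(nbhd)
        for p in nbhd:
            new[p] = avg
    return new
```

With neighborhoods [[0,1], [1,2], [2,3]] on four processors, an initial load of [8, 0, 0, 0] converges toward the global balance [2, 2, 2, 2] over repeated rounds, while total load is conserved at every step.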
14 CFR 23.421 - Balancing loads.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Balancing loads. 23.421 Section 23.421... Balancing Surfaces § 23.421 Balancing loads. (a) A horizontal surface balancing load is a load necessary to... balancing surfaces must be designed for the balancing loads occurring at any point on the limit...
14 CFR 23.421 - Balancing loads.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Balancing loads. 23.421 Section 23.421... Balancing Surfaces § 23.421 Balancing loads. (a) A horizontal surface balancing load is a load necessary to... balancing surfaces must be designed for the balancing loads occurring at any point on the limit...
14 CFR 23.421 - Balancing loads.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Balancing loads. 23.421 Section 23.421... Balancing Surfaces § 23.421 Balancing loads. (a) A horizontal surface balancing load is a load necessary to... balancing surfaces must be designed for the balancing loads occurring at any point on the limit...
14 CFR 23.421 - Balancing loads.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Balancing loads. 23.421 Section 23.421... Balancing Surfaces § 23.421 Balancing loads. (a) A horizontal surface balancing load is a load necessary to... balancing surfaces must be designed for the balancing loads occurring at any point on the limit...
14 CFR 23.421 - Balancing loads.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Balancing loads. 23.421 Section 23.421... Balancing Surfaces § 23.421 Balancing loads. (a) A horizontal surface balancing load is a load necessary to... balancing surfaces must be designed for the balancing loads occurring at any point on the limit...
Design of dynamic load-balancing tools for parallel applications
Devine, K.D.; Hendrickson, B.A.; Boman, E.G.; St. John, M.; Vaughan, C.T.
2000-01-03
The design of general-purpose dynamic load-balancing tools for parallel applications is more challenging than the design of static partitioning tools. Both algorithmic and software engineering issues arise. The authors have addressed many of these issues in the design of the Zoltan dynamic load-balancing library. Zoltan has an object-oriented interface that makes it easy to use and provides separation between the application and the load-balancing algorithms. It contains a suite of dynamic load-balancing algorithms, including both geometric and graph-based algorithms. Its design makes it valuable both as a partitioning tool for a variety of applications and as a research test-bed for new algorithmic development. In this paper, the authors describe Zoltan's design and demonstrate its use in an unstructured-mesh finite element application.
Static load balancing for CFD distributed simulations
Chronopoulos, A T; Grosu, D; Wissink, A; Benche, M
2001-01-26
The cost/performance ratio of networks of workstations has been constantly improving, a trend expected to continue in the near future. The aggregate peak rate of such systems often matches or exceeds the peak rate offered by the fastest parallel computers. This has motivated research into using a network of computers, interconnected via a fast network (cluster system) or a simple Local Area Network (LAN) (distributed system), for high performance concurrent computations. Important research issues arise, such as (1) optimal problem partitioning and virtual interconnection topology mapping; (2) optimal execution scheduling and load balancing. CFD codes have been efficiently implemented on homogeneous parallel systems in the past. In particular, the helicopter aerodynamics CFD code TURNS has been implemented with MPI on the IBM SP, with parallel relaxation and Krylov iterative methods used in place of more traditional recursive algorithms to enhance performance. In that implementation the space domain is divided into equal subdomains, which are mapped to the processors. We consider the implementation of TURNS on a LAN of heterogeneous workstations. To deal with the load imbalance caused by the different processor speeds, we propose a suboptimal algorithm that divides the space domain into unequal subdomains and assigns them to the different computers. The algorithm can be applied to other CFD applications. We used our algorithm to schedule TURNS on a network of workstations and obtained significantly better results.
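The unequal-subdomain idea (give each workstation a slice of the grid proportional to its relative speed, so all machines finish at roughly the same time) can be sketched for a 1-D decomposition. The largest-remainder rounding is an assumed detail; the paper's actual partitioning algorithm may differ.

```python
def split_domain(n_points, speeds):
    """Divide n_points grid points among heterogeneous workstations in
    proportion to their relative speeds, so each machine (ideally)
    finishes its subdomain in the same wall-clock time.  Largest-
    remainder rounding keeps the total exactly n_points."""
    total = sum(speeds)
    shares = [n_points * s / total for s in speeds]
    counts = [int(x) for x in shares]
    leftovers = sorted(range(len(speeds)),
                       key=lambda i: shares[i] - counts[i], reverse=True)
    for i in leftovers[: n_points - sum(counts)]:
        counts[i] += 1
    return counts
```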
Detection of Unexpected High Correlations between Balance Calibration Loads and Load Residuals
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Volden, T.
2014-01-01
An algorithm was developed for the assessment of strain-gage balance calibration data that makes it possible to systematically investigate potential sources of unexpected high correlations between calibration load residuals and applied calibration loads. The algorithm investigates correlations on a load-series-by-load-series basis. The linear correlation coefficient is used to quantify the correlations. It is computed for all possible pairs of calibration load residuals and applied calibration loads that can be constructed for the given balance calibration data set. An unexpected high correlation between a load residual and a load is detected if three conditions are met: (i) the absolute value of the correlation coefficient of a residual/load pair exceeds 0.95; (ii) the maximum of the absolute values of the residuals of a load series exceeds 0.25 % of the load capacity; (iii) the load component of the load series is intentionally applied. Data from a baseline calibration of a six-component force balance is used to illustrate the application of the detection algorithm to a real-world data set. This analysis also showed that the detection algorithm can identify load alignment errors as long as repeat load series are contained in the balance calibration data set that do not suffer from load alignment problems.
Dynamic Load Balancing of Parallel Monte Carlo Transport Calculations
O'Brien, M; Taylor, J; Procassini, R
2004-12-22
The performance of parallel Monte Carlo transport calculations which use both spatial and particle parallelism is increased by dynamically assigning processors to the most worked domains. Since the particle work load varies over the course of the simulation, this algorithm determines each cycle whether dynamic load balancing would speed up the calculation. If load balancing is required, a small number of particle communications are initiated in order to achieve load balance. This method has decreased the parallel run time by more than a factor of three for certain criticality calculations.
Improving load balance with flexibly assignable tasks
Pinar, Ali; Hendrickson, Bruce
2003-09-09
In many applications of parallel computing, distribution of the data unambiguously implies distribution of work among processors. But there are exceptions where some tasks can be assigned to one of several processors without altering the total volume of communication. In this paper, we study the problem of exploiting this flexibility in assignment of tasks to improve load balance. We first model the problem in terms of network flow and use combinatorial techniques for its solution. Our parametric search algorithms use maximum flow algorithms for probing on a candidate optimal solution value. We describe two algorithms to solve the assignment problem with log W_T and |P| probe calls, where W_T and |P|, respectively, denote the total workload and number of processors. We also define augmenting paths and cuts for this problem, and show that any algorithm based on augmenting paths can be used to find an optimal solution for the task assignment problem. We then consider a continuous version of the problem, and formulate it as a linearly constrained optimization problem, i.e., min ||Ax||_inf s.t. Bx = d. To avoid solving an intractable inf-norm optimization problem, we show that in this case minimizing the 2-norm is sufficient to minimize the inf-norm, which reduces the problem to the well-studied linearly constrained least squares problem. The continuous version of the problem has the advantage of being easily amenable to parallelization. Our experiments with molecular dynamics and overlapped domain decomposition applications proved the effectiveness of our methods, with significant improvements in load balance. We also discuss how our techniques can be enhanced for heterogeneous systems.
Load Balancing in Structured P2P Networks
NASA Astrophysics Data System (ADS)
Zhu, Yingwu
In this chapter we start by addressing the importance and necessity of load balancing in structured P2P networks, due to three main reasons. First, structured P2P networks assume uniform peer capacities while peer capacities are heterogeneous in deployed P2P networks. Second, resorting to pseudo-uniformity of the hash function used to generate node IDs and data item keys leads to imbalanced overlay address space and item distribution. Lastly, placement of data items cannot be randomized in some applications (e.g., range searching). We then present an overview of load aggregation and dissemination techniques that are required by many load balancing algorithms. Two techniques are discussed including tree structure-based approach and gossip-based approach. They make different tradeoffs between estimate/aggregate accuracy and failure resilience. To address the issue of load imbalance, three main solutions are described: virtual server-based approach, power of two choices, and address-space and item balancing. While different in their designs, they all aim to improve balance on the address space and data item distribution. As a case study, the chapter discusses a virtual server-based load balancing algorithm that strives to ensure fair load distribution among nodes and minimize load balancing cost in bandwidth. Finally, the chapter concludes with future research and a summary.
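Of the three solutions surveyed, the power of two choices is the simplest to sketch: probe two random nodes and place the item on the less loaded one (the function below is an illustration of the general technique, not code from the chapter):

```python
import random

def place_item(node_loads, rng):
    """Power-of-two-choices placement: sample two candidate nodes and
    put the item on the one currently holding fewer items."""
    a = rng.randrange(len(node_loads))
    b = rng.randrange(len(node_loads))
    chosen = a if node_loads[a] <= node_loads[b] else b
    node_loads[chosen] += 1
    return chosen
```

Compared with a single random choice, the second probe shrinks the gap between the most and least loaded nodes dramatically, which is why the technique is attractive when hashing alone distributes items unevenly.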
NASA Technical Reports Server (NTRS)
Hailperin, M.
1993-01-01
This thesis provides design and analysis of techniques for global load balancing on ensemble architectures running soft-real-time object-oriented applications with statistically periodic loads. It focuses on estimating the instantaneous average load over all the processing elements. The major contribution is the use of explicit stochastic process models for both the loading and the averaging itself. These models are exploited via statistical time-series analysis and Bayesian inference to provide improved average load estimates, and thus to facilitate global load balancing. This thesis explains the distributed algorithms used and provides some optimality results. It also describes the algorithms' implementation and gives performance results from simulation. These results show that the authors' techniques allow more accurate estimation of the global system loading, resulting in fewer object migrations than local methods. The authors' method is shown to provide superior performance, relative not only to static load-balancing schemes but also to many adaptive load-balancing methods. Results from a preliminary analysis of another system and from simulation with a synthetic load provide some evidence of more general applicability.
An Evaluation of the HVAC Load Potential for Providing Load Balancing Service
Lu, Ning
2012-09-30
This paper investigates the potential of providing aggregated intra-hour load balancing services using heating, ventilating, and air-conditioning (HVAC) systems. A direct-load control algorithm is presented. A temperature-priority-list method is used to dispatch the HVAC loads optimally to maintain consumer-desired indoor temperatures and load diversity. Realistic intra-hour load balancing signals were used to evaluate the operational characteristics of the HVAC load under different outdoor temperature profiles and different indoor temperature settings. The number of HVAC units needed is also investigated. Modeling results suggest that the number of HVACs needed to provide a ±1-MW load balancing service 24 hours a day varies significantly with baseline settings, high and low temperature settings, and the outdoor temperatures. The results demonstrate that the intra-hour load balancing service provided by HVAC loads meets the performance requirements and can become a major source of revenue for load-serving entities where the smart grid infrastructure enables direct load control over the HVAC loads.
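The temperature-priority-list idea can be illustrated in a few lines; a deliberately simplified sketch assuming a cooling scenario with identical per-unit power draw (the field names and parameters are ours, not the paper's):

```python
def dispatch_hvac(units, power_needed, unit_kw):
    """Temperature-priority-list dispatch (simplified): to absorb
    `power_needed` kW of balancing signal, switch on the cooling units
    whose indoor temperature is furthest above its setpoint first."""
    # Rank units by how far each is above its desired temperature.
    ranked = sorted(units, key=lambda u: u["temp"] - u["setpoint"], reverse=True)
    on, drawn = [], 0.0
    for u in ranked:
        if drawn >= power_needed:
            break
        on.append(u["id"])
        drawn += unit_kw
    return on
```

Serving the hottest rooms first keeps indoor temperatures near the consumer's setpoints while the aggregate follows the balancing signal.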
Internet traffic load balancing using dynamic hashing with flow volume
NASA Astrophysics Data System (ADS)
Jo, Ju-Yeon; Kim, Yoohwan; Chao, H. Jonathan; Merat, Francis L.
2002-07-01
Sending IP packets over multiple parallel links is in extensive use in today's Internet and its use is growing due to its scalability, reliability and cost-effectiveness. To maximize the efficiency of parallel links, load balancing is necessary among the links, but it may cause the problem of packet reordering. Since packet reordering impairs TCP performance, it is important to reduce the amount of reordering. Hashing offers a simple solution to keep the packet order by sending a flow over a unique link, but static hashing does not guarantee an even distribution of the traffic amount among the links, which could lead to packet loss under heavy load. Dynamic hashing offers some degree of load balancing but suffers from load fluctuations and excessive packet reordering. To overcome these shortcomings, we have enhanced the dynamic hashing algorithm to utilize the flow volume information in order to reassign only the appropriate flows. This new method, called dynamic hashing with flow volume (DHFV), eliminates unnecessary flow reassignments of small flows and achieves load balancing very quickly without load fluctuation by accurately predicting the amount of transferred load between the links. In this paper we provide the general framework of DHFV and address the challenges in implementing DHFV. We then introduce two algorithms of DHFV with different flow selection strategies and show their performances through simulation.
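The core idea, reassigning only flows whose volume matches the imbalance so small flows stay put, can be sketched with a toy model (this is our illustration of the principle; the paper's DHFV algorithms track per-flow volume online and use different selection strategies):

```python
def assign_links(flow_sizes, n_links):
    """Static hashing baseline: each flow ID is hashed to a fixed link,
    which preserves packet order within a flow."""
    links = [[] for _ in range(n_links)]
    for fid in flow_sizes:
        links[hash(fid) % n_links].append(fid)
    return links

def rebalance(links, flow_sizes):
    """One DHFV-style step (sketch): move a single, suitably sized flow
    from the most loaded link to the least loaded one, so reordering is
    limited to the one flow actually moved."""
    load = [sum(flow_sizes[f] for f in link) for link in links]
    src, dst = load.index(max(load)), load.index(min(load))
    if src == dst:
        return links
    excess = (load[src] - load[dst]) / 2.0
    # Reassign the flow whose volume is closest to the excess.
    best = min(links[src], key=lambda f: abs(flow_sizes[f] - excess))
    links[src].remove(best)
    links[dst].append(best)
    return links
```

Choosing the flow closest to half the load difference balances the links in one move instead of oscillating, which is the fluctuation problem the paper attributes to plain dynamic hashing.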
Performance tradeoffs in static and dynamic load balancing strategies
NASA Technical Reports Server (NTRS)
Iqbal, M. A.; Saltz, J. H.; Bokhari, S. H.
1986-01-01
The problem of uniformly distributing the load of a parallel program over a multiprocessor system was considered. A program was analyzed whose structure permits the computation of the optimal static solution. Then four strategies for load balancing were described and their performance compared. The strategies are: (1) the optimal static assignment algorithm which is guaranteed to yield the best static solution, (2) the static binary dissection method which is very fast but sub-optimal, (3) the greedy algorithm, a static fully polynomial time approximation scheme, which estimates the optimal solution to arbitrary accuracy, and (4) the predictive dynamic load balancing heuristic which uses information on the precedence relationships within the program and outperforms any of the static methods. It is also shown that the overhead incurred by the dynamic heuristic is reduced considerably if it is started off with a static assignment provided by either of the other three strategies.
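For flavor of static greedy balancing, here is the classic longest-processing-time heuristic, a simpler relative of the fully polynomial approximation scheme the paper evaluates (this sketch is not the paper's algorithm):

```python
import heapq

def greedy_assign(task_costs, n_procs):
    """Longest-processing-time greedy: place each task, most expensive
    first, on the currently least-loaded processor (min-heap of loads)."""
    heap = [(0.0, p) for p in range(n_procs)]
    heapq.heapify(heap)
    assignment = {}
    for cost, task in sorted(((c, t) for t, c in enumerate(task_costs)),
                             reverse=True):
        load, p = heapq.heappop(heap)
        assignment[task] = p
        heapq.heappush(heap, (load + cost, p))
    return assignment
```

Sorting tasks by decreasing cost before the greedy pass is what gives this family of heuristics its guaranteed closeness to the optimal static solution.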
A comparative analysis of static and dynamic load balancing strategies
NASA Technical Reports Server (NTRS)
Iqbal, M. Ashraf; Bokhari, Shahid H.; Saltz, Joel H.
1986-01-01
The problem of uniformly distributing the load of a parallel program over a multiprocessor system was considered. A program was analyzed whose structure permits the computation of the optimal static solution. Then four strategies for load balancing were described and their performance compared. The strategies are: (1) the optimal static assignment algorithm which is guaranteed to yield the best static solution, (2) the static binary dissection method which is very fast but suboptimal, (3) the greedy algorithm, a static fully polynomial time approximation scheme, which estimates the optimal solution to arbitrary accuracy, and (4) the predictive dynamic load balancing heuristic which uses information on the precedence relationships within the program and outperforms any of the static methods. It is also shown that the overhead incurred by the dynamic heuristic is reduced considerably if it is started off with a static assignment provided by either of the three strategies.
Multithreaded Model for Dynamic Load Balancing Parallel Adaptive PDE Computations
NASA Technical Reports Server (NTRS)
Chrisochoides, Nikos
1995-01-01
We present a multithreaded model for the dynamic load-balancing of numerical, adaptive computations required for the solution of Partial Differential Equations (PDE's) on multiprocessors. Multithreading is used as a means of exploring concurrency at the processor level in order to tolerate synchronization costs inherent to traditional (non-threaded) parallel adaptive PDE solvers. Our preliminary analysis for parallel, adaptive PDE solvers indicates that multithreading can be used as a mechanism to mask overheads required for the dynamic balancing of processor workloads with computations required for the actual numerical solution of the PDE's. Also, multithreading can simplify the implementation of dynamic load-balancing algorithms, a task that is very difficult for traditional data parallel adaptive PDE computations. Unfortunately, multithreading does not always simplify program complexity, often makes code reusability difficult, and increases software complexity.
Scalable load balancing for massively parallel distributed Monte Carlo particle transport
O'Brien, M. J.; Brantley, P. S.; Joy, K. I.
2013-07-01
In order to run computer simulations efficiently on massively parallel computers with hundreds of thousands or millions of processors, care must be taken that the calculation is load balanced across the processors. Examining the workload of every processor leads to an unscalable algorithm, with run time at least as large as O(N), where N is the number of processors. We present a scalable load balancing algorithm, with run time O(log(N)), that involves iterated processor-pair-wise balancing steps, ultimately leading to a globally balanced workload. We demonstrate scalability of the algorithm up to 2 million processors on the Sequoia supercomputer at Lawrence Livermore National Laboratory.
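The O(log N) behavior of iterated pair-wise balancing is easy to see on a hypercube exchange pattern; a serial simulation of the idea (our illustration: the paper's algorithm exchanges Monte Carlo particles, whereas this sketch averages real-valued loads):

```python
def pairwise_balance(work):
    """Iterated processor-pair-wise balancing: in round k, each processor
    averages its load with the partner whose index differs in bit k.
    After log2(N) rounds every processor holds the global average."""
    n = len(work)                  # assumed to be a power of two
    work = list(work)
    k = 0
    while (1 << k) < n:
        for i in range(n):
            j = i ^ (1 << k)       # hypercube partner for round k
            if i < j:
                avg = (work[i] + work[j]) / 2.0
                work[i] = work[j] = avg
        k += 1
    return work
```

Each round halves the remaining imbalance along one dimension, so the whole schedule needs only log2(N) communication steps rather than a global gather.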
Valiant load-balanced robust routing under hose model for WDM mesh networks
NASA Astrophysics Data System (ADS)
Zhang, Xiaoning; Li, Lemin; Wang, Sheng
2006-09-01
In this paper, we propose a Valiant Load-Balanced robust routing scheme for WDM mesh networks under the model of polyhedral uncertainty (i.e., the hose model); the proposed routing scheme is implemented with a traffic grooming approach. Our objective is to maximize the hose-model throughput. A mathematical formulation of Valiant Load-Balanced robust routing is presented and three fast heuristic algorithms are also proposed. When applying the Valiant Load-Balanced robust routing scheme to WDM mesh networks, a novel traffic-grooming algorithm called MHF (minimizing hop first) is proposed. We compare the three heuristic algorithms with the VPN tree under the hose model. Finally we demonstrate in the simulation results that MHF with the Valiant Load-Balanced robust routing scheme outperforms the traditional traffic-grooming algorithm in terms of throughput for uniform/non-uniform traffic matrices under the hose model.
Simulation model of load balancing in distributed computing systems
NASA Astrophysics Data System (ADS)
Botygin, I. A.; Popov, V. N.; Frolov, S. G.
2017-02-01
The availability of high-performance computing, high-speed data transfer over the network, and the widespread use of software for design and pre-production in mechanical engineering mean that, at present, both large industrial enterprises and small engineering companies deploy complex computer systems for the efficient solution of production and management tasks. Such computer systems are generally built on the basis of distributed heterogeneous computer systems. The analytical problems solved by such systems are the key models of research, but the system-wide problems of efficiently distributing (balancing) the computational load and placing input, intermediate, and output databases are no less important. The main tasks of this balancing system are load and condition monitoring of compute nodes and the selection of a node to which a user's request is routed in accordance with a predetermined algorithm. Load balancing is one of the most widely used methods of increasing the productivity of distributed computing systems through the optimal allocation of tasks between the computer system nodes. Therefore, the development of methods and algorithms for computing an optimal schedule in a distributed system that dynamically changes its infrastructure is an important task.
Scalable load-balance measurement for SPMD codes
Gamblin, T; de Supinski, B R; Schulz, M; Fowler, R; Reed, D
2008-08-05
Good load balance is crucial on very large parallel systems, but the most sophisticated algorithms introduce dynamic imbalances through adaptation in domain decomposition or use of adaptive solvers. To observe and diagnose imbalance, developers need system-wide, temporally-ordered measurements from full-scale runs. This potentially requires data collection from multiple code regions on all processors over the entire execution. Doing this instrumentation naively can, in combination with the application itself, exceed available I/O bandwidth and storage capacity, and can induce severe behavioral perturbations. We present and evaluate a novel technique for scalable, low-error load balance measurement. This uses a parallel wavelet transform and other parallel encoding methods. We show that our technique collects and reconstructs system-wide measurements with low error. Compression time scales sublinearly with system size and data volume is several orders of magnitude smaller than the raw data. The overhead is low enough for online use in a production environment.
Parallel tetrahedral mesh adaptation with dynamic load balancing
Oliker, Leonid; Biswas, Rupak; Gabow, Harold N.
2000-06-28
The ability to dynamically adapt an unstructured grid is a powerful tool for efficiently solving computational problems with evolving physical features. In this paper, we report on our experience parallelizing an edge-based adaptation scheme, called 3D-TAG, using message passing. Results show excellent speedup when a realistic helicopter rotor mesh is randomly refined. However, performance deteriorates when the mesh is refined using a solution-based error indicator since mesh adaptation for practical problems occurs in a localized region, creating a severe load imbalance. To address this problem, we have developed PLUM, a global dynamic load balancing framework for adaptive numerical computations. Even though PLUM primarily balances processor workloads for the solution phase, it reduces the load imbalance problem within mesh adaptation by repartitioning the mesh after targeting edges for refinement but before the actual subdivision. This dramatically improves the performance of parallel 3D-TAG since refinement occurs in a more load balanced fashion. We also present optimal and heuristic algorithms that, when applied to the default mapping of a parallel repartitioner, significantly reduce the data redistribution overhead. Finally, portability is examined by comparing performance on three state-of-the-art parallel machines.
Parallel Tetrahedral Mesh Adaptation with Dynamic Load Balancing
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Biswas, Rupak; Gabow, Harold N.
1999-01-01
The ability to dynamically adapt an unstructured grid is a powerful tool for efficiently solving computational problems with evolving physical features. In this paper, we report on our experience parallelizing an edge-based adaptation scheme, called 3D_TAG, using message passing. Results show excellent speedup when a realistic helicopter rotor mesh is randomly refined. However, performance deteriorates when the mesh is refined using a solution-based error indicator since mesh adaptation for practical problems occurs in a localized region, creating a severe load imbalance. To address this problem, we have developed PLUM, a global dynamic load balancing framework for adaptive numerical computations. Even though PLUM primarily balances processor workloads for the solution phase, it reduces the load imbalance problem within mesh adaptation by repartitioning the mesh after targeting edges for refinement but before the actual subdivision. This dramatically improves the performance of parallel 3D_TAG since refinement occurs in a more load balanced fashion. We also present optimal and heuristic algorithms that, when applied to the default mapping of a parallel repartitioner, significantly reduce the data redistribution overhead. Finally, portability is examined by comparing performance on three state-of-the-art parallel machines.
Dynamic load balancing for petascale quantum Monte Carlo applications: The Alias method
Sudheer, C. D.; Krishnan, S.; Srinivasan, A.; Kent, P. R. C.
2013-02-01
Diffusion Monte Carlo is the most accurate widely used Quantum Monte Carlo method for the electronic structure of materials, but it requires frequent load balancing or population redistribution steps to maintain efficiency and avoid accumulation of systematic errors on parallel machines. The load balancing step can be a significant factor affecting performance, and will become more important as the number of processing elements increases. We propose a new dynamic load balancing algorithm, the Alias Method, and evaluate it theoretically and empirically. An important feature of the new algorithm is that the load can be perfectly balanced with each process receiving at most one message. It is also optimal in the maximum size of messages received by any process. We also optimize its implementation to reduce network contention, a process facilitated by the low messaging requirement of the algorithm. Empirical results on the petaflop Cray XT Jaguar supercomputer at ORNL show up to 30% improvement in performance on 120,000 cores. The load balancing algorithm may be straightforwardly implemented in existing codes. The algorithm may also be employed by any method with many near-identical computational tasks that requires load balancing.
Load balancing fictions, falsehoods and fallacies
HENDRICKSON,BRUCE A.
2000-05-30
Effective use of a parallel computer requires that a calculation be carefully divided among the processors. This load balancing problem appears in many guises and has been a fervent area of research for the past decade or more. Although great progress has been made, and useful software tools developed, a number of challenges remain. It is the conviction of the author that these challenges will be easier to address if programmers first come to terms with some significant shortcomings in their current perspectives. This paper tries to identify several areas in which the prevailing point of view is either mistaken or insufficient. The goal is to motivate new ideas and directions for this important field.
Performance Analysis and Portability of the PLUM Load Balancing System
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Biswas, Rupak; Gabow, Harold N.
1998-01-01
The ability to dynamically adapt an unstructured mesh is a powerful tool for solving computational problems with evolving physical features; however, an efficient parallel implementation is rather difficult. To address this problem, we have developed PLUM, an automatic portable framework for performing adaptive numerical computations in a message-passing environment. PLUM requires that all data be globally redistributed after each mesh adaption to achieve load balance. We present an algorithm for minimizing this remapping overhead by guaranteeing an optimal processor reassignment. We also show that the data redistribution cost can be significantly reduced by applying our heuristic processor reassignment algorithm to the default mapping of the parallel partitioner. Portability is examined by comparing performance on a SP2, an Origin2000, and a T3E. Results show that PLUM can be successfully ported to different platforms without any code modifications.
Evaluating Zoltan for Static Load Balancing on BlueGene Architectures
Kumfert, G
2007-11-15
The purpose of this TechBase was to evaluate the Zoltan load-balancing library from Sandia National Laboratories as a possible replacement for ParMetis, which had been the load balancer of choice for nearly a decade but does not scale to the full 64,000 processors of BlueGene/L. This evaluation was successful in producing a clear result, but the result was unfortunately negative. Although Zoltan presents a collection of load-balancing algorithms, none were able to meet or exceed the combined scalability and quality of ParMetis on representative datasets.
Dynamic load balancing for petascale quantum Monte Carlo applications: The Alias method
NASA Astrophysics Data System (ADS)
Sudheer, C. D.; Krishnan, S.; Srinivasan, A.; Kent, P. R. C.
2013-02-01
Diffusion Monte Carlo is a highly accurate Quantum Monte Carlo method for electronic structure calculations of materials, but it requires frequent load balancing or population redistribution steps to maintain efficiency on parallel machines. This step can be a significant factor affecting performance, and will become more important as the number of processing elements increases. We propose a new dynamic load balancing algorithm, the Alias Method, and evaluate it theoretically and empirically. An important feature of the new algorithm is that the load can be perfectly balanced with each process receiving at most one message. It is also optimal in the maximum size of messages received by any process. We also optimize its implementation to reduce network contention, a process facilitated by the low messaging requirement of the algorithm: a simple renumbering of the MPI ranks based on proximity and a space filling curve significantly improves the MPI Allgather performance. Empirical results on the petaflop Cray XT Jaguar supercomputer at ORNL show up to 30% improvement in performance on 120,000 cores. The load balancing algorithm may be straightforwardly implemented in existing codes. The algorithm may also be employed by any method with many near identical computational tasks that require load balancing.
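The algorithm takes its name from Walker's alias method for sampling discrete distributions; as background, a minimal sketch of the classic table construction whose donor/receiver pairing the paper adapts to walker redistribution (this is the textbook construction, not the paper's code):

```python
def build_alias(probs):
    """Walker's alias construction: O(n) preprocessing so that sampling
    from a discrete distribution takes O(1) -- two table lookups."""
    n = len(probs)
    scaled = [p * n for p in probs]
    alias, prob = [0] * n, [0.0] * n
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l       # bucket s is topped up by donor l
        scaled[l] -= 1.0 - scaled[s]           # donor gives away its excess
        (small if scaled[l] < 1.0 else large).append(l)
    for i in large + small:                    # leftovers hold exactly their mean
        prob[i] = 1.0
    return prob, alias
```

Each under-full bucket is completed by exactly one donor, which is the property that lets the load-balancing variant guarantee at most one incoming message per process.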
Parallel Processing of Adaptive Meshes with Load Balancing
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Biegel, Bryan (Technical Monitor)
2001-01-01
Many scientific applications involve grids that lack a uniform underlying structure. These applications are often also dynamic in nature in that the grid structure significantly changes between successive phases of execution. In parallel computing environments, mesh adaptation of unstructured grids through selective refinement/coarsening has proven to be an effective approach. However, achieving load balance while minimizing interprocessor communication and redistribution costs is a difficult problem. Traditional dynamic load balancers are mostly inadequate because they lack a global view of system loads across processors. In this paper, we propose a novel and general-purpose load balancer that utilizes symmetric broadcast networks (SBN) as the underlying communication topology, and compare its performance with a successful global load balancing environment, called PLUM, specifically created to handle adaptive unstructured applications. Our experimental results on an IBM SP2 demonstrate that the SBN-based load balancer achieves lower redistribution costs than those under PLUM by overlapping processing and data migration.
Energy-balanced algorithm for RFID estimation
NASA Astrophysics Data System (ADS)
Zhao, Jumin; Wang, Fangyuan; Li, Dengao; Yan, Lijuan
2016-10-01
RFID has been widely used in various commercial applications, ranging from inventory control and supply chain management to object tracking. It is necessary to estimate the number of RFID tags deployed in a large area periodically and automatically. Most of the prior works use passive tags for estimation and focus on designing time-efficient algorithms that can estimate tens of thousands of tags in seconds. But for an RFID reader to access tags in a large area, active tags are likely to be used due to their longer operational ranges. These tags use their own batteries as the energy source, so conserving energy for active tags becomes critical. Some prior works have studied how to reduce the energy expenditure of an RFID reader when it reads tag IDs. In this paper, we study how to reduce the amount of energy consumed by active tags during the process of estimating the number of tags in a system, and how to keep the energy consumed by each tag approximately balanced. We design an energy-balanced estimation algorithm that achieves this goal.
Dynamic Load-Balancing for Distributed Heterogeneous Computing of Parallel CFD Problems
NASA Technical Reports Server (NTRS)
Ecer, A.; Chien, Y. P.; Boenisch, T.; Akay, H. U.
2000-01-01
The developed methodology is aimed at improving the efficiency of executing block-structured algorithms on parallel, distributed, heterogeneous computers. The basic approach of these algorithms is to divide the flow domain into many subdomains called blocks, and solve the governing equations over these blocks. The dynamic load balancing problem is defined as the efficient distribution of the blocks among the available processors over a period of several hours of computations. In environments with computers of different architecture, operating systems, CPU speed, memory size, load, and network speed, balancing the loads and managing the communication between processors becomes crucial. Load balancing software tools for mutually dependent parallel processes have been created to efficiently utilize an advanced computation environment and algorithms. These tools are dynamic in nature because of the changes in the computer environment during execution time. More recently, these tools were extended to a second operating system: NT. In this paper, the problems associated with this application will be discussed. Also, the developed algorithms were combined with the load sharing capability of LSF to efficiently utilize workstation clusters for parallel computing. Finally, results will be presented on running a NASA-based code, ADPAC, to demonstrate the developed tools for dynamic load balancing.
Dynamic Load Balancing for Adaptive Meshes using Symmetric Broadcast Networks
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Saini, Subhash (Technical Monitor)
1998-01-01
Many scientific applications involve grids that lack a uniform underlying structure. These applications are often dynamic in the sense that the grid structure significantly changes between successive phases of execution. In parallel computing environments, mesh adaptation of grids through selective refinement/coarsening has proven to be an effective approach. However, achieving load balance while minimizing inter-processor communication and redistribution costs is a difficult problem. Traditional dynamic load balancers are mostly inadequate because they lack a global view across processors. In this paper, we compare a novel load balancer that utilizes symmetric broadcast networks (SBN) to a successful global load balancing environment (PLUM) created to handle adaptive unstructured applications. Our experimental results on the IBM SP2 demonstrate that performance of the proposed SBN load balancer is comparable to results achieved under PLUM.
Neural Network Algorithm for Particle Loading
J. L. V. Lewandowski
2003-04-25
An artificial neural network algorithm for continuous minimization is developed and applied to the case of numerical particle loading. It is shown that higher-order moments of the probability distribution function can be efficiently renormalized using this technique. A general neural network for the renormalization of an arbitrary number of moments is given.
Static and dynamic load-balancing strategies for parallel reservoir simulation
Anguille, L.; Killough, J.E.; Li, T.M.C.; Toepfer, J.L.
1995-12-31
Accurate simulation of the complex phenomena that occur in flow in porous media can tax even the most powerful serial computers. Emergence of new parallel computer architectures as a future efficient tool in reservoir simulation may overcome this difficulty. Unfortunately, major problems remain to be solved before using parallel computers commercially: production serial programs must be rewritten to be efficient in parallel environments, and load balancing methods must be explored to evenly distribute the workload on each processor during the simulation. This study implements both a static load-balancing algorithm and a receiver-initiated dynamic load-sharing algorithm to achieve high parallel efficiencies on both the IBM SP2 and Intel iPSC/860 parallel computers. Significant speedup improvement was recorded for both methods. Further optimization of these algorithms yielded a technique with efficiencies as high as 90% and 70% on 8 and 32 nodes, respectively. The increased performance was the result of the minimization of message-passing overhead.
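The receiver-initiated scheme mentioned above can be sketched as follows: an under-loaded processor (the receiver) asks a busy peer for work. The watermark value and queue layout are assumptions for illustration, not details from the paper.

```python
# Hedged sketch of receiver-initiated load sharing: the under-loaded
# processor pulls a task from the most heavily loaded peer. The
# threshold below is illustrative.

LOW_WATERMARK = 2  # request work when fewer tasks than this remain

def receiver_initiated_step(queues, me):
    """If processor `me` is under-loaded, transfer one task from the
    busiest peer. Mutates `queues` in place; returns True on transfer."""
    if len(queues[me]) >= LOW_WATERMARK:
        return False
    donor = max((p for p in range(len(queues)) if p != me),
                key=lambda p: len(queues[p]))
    if len(queues[donor]) <= LOW_WATERMARK:
        return False  # nobody has surplus work to give
    queues[me].append(queues[donor].pop())
    return True
```

The receiver-initiated variant only pays communication cost when some processor is actually starved, which is one reason it pairs well with a static initial distribution.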
Global Load Balancing with Parallel Mesh Adaption on Distributed-Memory Systems
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Oliker, Leonid; Sohn, Andrew
1996-01-01
Dynamic mesh adaption on unstructured grids is a powerful tool for efficiently computing unsteady problems to resolve solution features of interest. Unfortunately, this causes load imbalance among processors on a parallel machine. This paper describes the parallel implementation of a tetrahedral mesh adaption scheme and a new global load balancing method. A heuristic remapping algorithm is presented that assigns partitions to processors such that the redistribution cost is minimized. Results indicate that the parallel performance of the mesh adaption code depends on the nature of the adaption region and show a 35.5X speedup on 64 processors of an SP2 when 35% of the mesh is randomly adapted. For large-scale scientific computations, our load balancing strategy gives almost a sixfold reduction in solver execution times over non-balanced loads. Furthermore, our heuristic remapper yields processor assignments that are less than 3% off the optimal solutions but requires only 1% of the computational time.
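The heuristic remapper described above assigns new partitions to processors so that data movement is minimized. A minimal greedy stand-in, assuming we know how much of each partition's data already resides on each processor, looks like this (the paper's actual heuristic and cost model are more refined):

```python
# Hedged sketch: keep each new partition on the processor that already
# holds most of its data, taking partition/processor pairs in order of
# decreasing overlap. The overlap matrix is illustrative.

def greedy_remap(overlap):
    """overlap[i][j] = amount of partition i's data already on processor j.
    Returns a one-to-one partition -> processor assignment."""
    n = len(overlap)
    pairs = sorted(((overlap[i][j], i, j) for i in range(n)
                    for j in range(n)), reverse=True)
    assigned, used, mapping = set(), set(), {}
    for amount, i, j in pairs:
        if i not in assigned and j not in used:
            mapping[i] = j
            assigned.add(i)
            used.add(j)
    return mapping
```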
A High Performance Load Balance Strategy for Real-Time Multicore Systems
Cho, Keng-Mao; Tsai, Chun-Wei; Chiu, Yi-Shiuan; Yang, Chu-Sing
2014-01-01
Finding ways to distribute workloads to each processor core and efficiently reduce power consumption is of vital importance, especially for real-time systems. In this paper, a novel scheduling algorithm is proposed for real-time multicore systems to balance the computation loads and save power. The developed algorithm, called power and deadline-aware multicore scheduling (PDAMS), simultaneously considers multiple criteria, including power consumption and task deadlines. Experiment results show that the proposed algorithm can reduce energy consumption by up to 54.2% and greatly reduce the number of missed deadlines, as compared to the other scheduling algorithms outlined in this paper. PMID:24955382
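The general power/deadline trade-off underlying schedulers of this kind can be illustrated with a toy frequency-selection rule (this is a generic DVFS-style heuristic, not the paper's PDAMS algorithm): since dynamic power grows roughly with the cube of frequency, pick the lowest frequency that still meets the deadline.

```python
# Hedged illustration of a power/deadline-aware choice: run each task
# at the slowest speed that still finishes on time.

def pick_frequency(cycles, deadline, freqs):
    """Return the lowest frequency (cycles/sec) meeting the deadline,
    or None if even the fastest setting would miss it."""
    for f in sorted(freqs):
        if cycles / f <= deadline:
            return f
    return None
```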
PLUM: Parallel Load Balancing for Adaptive Unstructured Meshes
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Biswas, Rupak; Saini, Subhash (Technical Monitor)
1998-01-01
Mesh adaption is a powerful tool for efficient unstructured-grid computations but causes load imbalance among processors on a parallel machine. We present a novel method called PLUM to dynamically balance the processor workloads with a global view. This paper presents the implementation and integration of all major components within our dynamic load balancing strategy for adaptive grid calculations. Mesh adaption, repartitioning, processor assignment, and remapping are critical components of the framework that must be accomplished rapidly and efficiently so as not to cause a significant overhead to the numerical simulation. A data redistribution model is also presented that predicts the remapping cost on the SP2. This model is required to determine whether the gain from a balanced workload distribution offsets the cost of data movement. Results presented in this paper demonstrate that PLUM is an effective dynamic load balancing strategy which remains viable on a large number of processors.
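PLUM's central accept/reject decision, repartition only when the predicted gain from a better balance offsets the data-movement cost, can be sketched in a few lines. The cost model here is a stand-in; the paper derives a machine-specific redistribution model for the SP2.

```python
# Hedged sketch of the remap decision: compare the solver time saved by
# the new (better balanced) partitioning against the predicted cost of
# moving the data. All quantities are in the same time units.

def should_remap(cur_max_load, new_max_load, iters_until_next_adaption,
                 predicted_remap_cost):
    """True if the cumulative saving over the coming iterations exceeds
    the one-time redistribution cost."""
    gain = (cur_max_load - new_max_load) * iters_until_next_adaption
    return gain > predicted_remap_cost
```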
A dynamic ant colony optimization for load balancing in MRN/MLN
NASA Astrophysics Data System (ADS)
Lu, Le; Huang, Shanguo; Gu, Wanyi
2011-12-01
Ant Colony Optimization (ACO) has been a popular research field in recent years. Ants choose paths where the pheromone concentration is higher and modify the environment they visit. However, in the context of multi-service traffic in multi-level and multi-domain optical networks, the capacity of inter-domain links is limited, and congestion may occur at these links. In this paper, an ant colony optimization algorithm based on load balancing is proposed: ants follow paths based not on pheromone alone but also on the available resources of each link. Simulations show that the proposed method reduces the traffic blocking probability and achieves load balancing within the network.
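The selection rule described above, pheromone weighted by available link resources, can be sketched with a conventional ACO transition probability. The exponents alpha and beta are standard ACO parameters, not values from the paper.

```python
# Hedged sketch: probability of an ant choosing each candidate link,
# biased by pheromone AND free capacity so congested links are avoided.

def link_probabilities(pheromone, free_capacity, alpha=1.0, beta=2.0):
    weights = [(t ** alpha) * (c ** beta)
               for t, c in zip(pheromone, free_capacity)]
    total = sum(weights)
    return [w / total for w in weights]
```

With equal pheromone, a link with twice the free capacity is chosen four times as often under beta = 2.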
Dynamic Load Balancing for Adaptive Computations on Distributed-Memory Machines
NASA Technical Reports Server (NTRS)
1999-01-01
Dynamic load balancing is central to adaptive mesh-based computations on large-scale parallel computers. The principal investigator has investigated various issues of the dynamic load balancing problem under NASA JOVE and JAG grants. The major accomplishments of the project are two graph partitioning algorithms and a load balancing framework. The S-HARP dynamic graph partitioner is the fastest dynamic graph partitioner known to date. It can partition a graph of over 100,000 vertices in 0.25 seconds on a 64-processor Cray T3E distributed-memory multiprocessor while maintaining scalability of over 16-fold speedup. Other known and widely used dynamic graph partitioners take a second or two while giving low scalability of a few-fold speedup on 64 processors. These results have been published in journals and peer-reviewed flagship conferences.
Load Balancing Unstructured Adaptive Grids for CFD Problems
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Oliker, Leonid
1996-01-01
Mesh adaption is a powerful tool for efficient unstructured-grid computations but causes load imbalance among processors on a parallel machine. A dynamic load balancing method is presented that balances the workload across all processors with a global view. After each parallel tetrahedral mesh adaption, the method first determines if the new mesh is sufficiently unbalanced to warrant a repartitioning. If so, the adapted mesh is repartitioned, with new partitions assigned to processors so that the redistribution cost is minimized. The new partitions are accepted only if the remapping cost is compensated by the improved load balance. Results indicate that this strategy is effective for large-scale scientific computations on distributed-memory multiprocessors.
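The first step of the method above, deciding whether the adapted mesh is sufficiently unbalanced to warrant repartitioning, reduces to comparing the heaviest processor's load against the ideal average. The threshold value below is illustrative.

```python
# Hedged sketch of the "is repartitioning warranted?" test: repartition
# only when the load imbalance exceeds a tolerance above perfect balance.

def needs_repartition(loads, threshold=1.1):
    avg = sum(loads) / len(loads)
    return max(loads) / avg > threshold
```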
BALANCING THE LOAD: A VORONOI BASED SCHEME FOR PARALLEL COMPUTATIONS
Steinberg, Elad; Yalinewich, Almog; Sari, Re'em; Duffell, Paul
2015-01-01
One of the key issues when running a simulation on multiple CPUs is maintaining a proper load balance throughout the run and minimizing communication between CPUs. We propose a novel method of utilizing a Voronoi diagram to achieve a nearly perfect load balance without the need for any global redistribution of data. As a showcase, we implement our method in RICH, a two-dimensional moving-mesh hydrodynamical code, but it can be extended trivially to other codes in two or three dimensions. Our tests show that this method is indeed efficient and can be used in a large variety of existing hydrodynamical codes.
A novel load balancing method for hierarchical federation simulation system
NASA Astrophysics Data System (ADS)
Bin, Xiao; Xiao, Tian-yuan
2013-07-01
In contrast with a single-federation HLA framework, a hierarchical federation framework can improve the performance of a large-scale simulation system to a certain degree by distributing the load over several RTIs. However, in a hierarchical federation framework, the RTI is still the center of message exchange and thus remains the performance bottleneck: the data explosion in a large-scale HLA federation may overload the RTI, degrading performance or even causing fatal errors. To address this problem, this paper proposes a load balancing method for hierarchical federation simulation systems based on queuing theory, comprising three main modules: queue length prediction, a load control policy, and a controller. The method improves the resource utilization of federate nodes and the performance of the HLA simulation system by balancing the load on the RTIG and the federates. Finally, experimental results are presented to demonstrate the effectiveness of the method.
Dynamic Load Balancing Strategies for Parallel Reacting Flow Simulations
NASA Astrophysics Data System (ADS)
Pisciuneri, Patrick; Meneses, Esteban; Givi, Peyman
2014-11-01
Load balancing in parallel computing aims at distributing the work as evenly as possible among the processors. This is a critical issue in the performance of parallel, time-accurate flow simulators. The constraint of time accuracy requires that all processes finish their calculation for a given time step before any process can begin calculation of the next time step. Thus, an irregularly balanced compute load results in idle time for many processes at each iteration and therefore increased walltimes. Two existing dynamic load balancing approaches are applied to the simplified case of a partially stirred reactor for methane combustion. The first is Zoltan, a parallel partitioning, load balancing, and data management library developed at Sandia National Laboratories. The second is Charm++, a machine-independent parallel programming system developed at the University of Illinois at Urbana-Champaign. The performance of these two approaches is compared, and the prospects for their application to full 3D reacting flow solvers are assessed.
Work Stealing and Persistence-based Load Balancers for Iterative Overdecomposed Applications
Lifflander, Jonathan; Krishnamoorthy, Sriram; Kale, Laxmikant
2012-06-18
Applications often involve iterative execution of identical or slowly evolving calculations. Such applications require good initial load balance coupled with efficient periodic rebalancing. In this paper, we consider the design and evaluation of two distinct approaches to addressing this challenge: persistence-based load balancing and work stealing. The work to be performed is overdecomposed into tasks, enabling automatic rebalancing by the middleware. We present a hierarchical persistence-based rebalancing algorithm that performs localized incremental rebalancing. We also present an active-message-based retentive work stealing algorithm optimized for iterative applications on distributed memory machines. These are shown to incur low overheads and achieve over 90% efficiency on 76,800 cores.
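The work-stealing side of the comparison above can be illustrated with a single-threaded simulation: each worker owns a deque, executes from its own tail, and steals from a random victim's head when idle. Real implementations use concurrent deques and active messages; this sketch only shows the control flow.

```python
# Hedged, sequential sketch of work stealing over per-worker deques.
import random
from collections import deque

def run_work_stealing(task_lists, seed=0):
    """Simulate rounds in which each worker runs one local task or,
    if idle, steals one task. Returns tasks completed per worker."""
    rng = random.Random(seed)
    deques = [deque(tasks) for tasks in task_lists]
    done = [0] * len(deques)
    while any(deques):
        for w, dq in enumerate(deques):
            if dq:
                dq.pop()          # run a local task (own tail)
                done[w] += 1
            else:
                victims = [v for v in range(len(deques))
                           if v != w and deques[v]]
                if victims:
                    v = rng.choice(victims)
                    dq.append(deques[v].popleft())  # steal from head
    return done
```

Stealing from the head while working from the tail tends to move the largest, oldest chunks of work, which keeps steal traffic low.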
Heuristic procedure for the assembly line balancing problem with postural load smoothness.
Jaturanonda, Chorkaew; Nanthavanij, Suebsak; Das, Sanchoy K
2013-01-01
This paper presents a heuristic procedure for assigning assembly tasks to workstations where both productivity and ergonomics issues are considered concurrently. The procedure uses Kilbridge and Wester's algorithm to obtain an initial task-workstation assignment that minimizes the balance delay of an assembly line. A task reassignment algorithm is then applied to improve the initial solution by exchanging assembly tasks between workstations so as to smooth the postural load among workers. A composite index of variation is used to measure the effectiveness of the task-workstation assignment. In a case study of clothing assembly, it was found that the task-workstation assignment with a minimum composite index of variation can be obtained with relatively equal weights on balance delay and postural load.
Fuzzy Pool Balance: An algorithm to achieve a two dimensional balance in distribute storage systems
NASA Astrophysics Data System (ADS)
Wu, Wenjing; Chen, Gang
2014-06-01
The limitation of scheduling modules and the gradual addition of disk pools in distributed storage systems often result in imbalances among their disk pools in terms of both disk usage and file count. This can cause various problems to the storage system such as single point of failure, low system throughput and imbalanced resource utilization and system loads. An algorithm named Fuzzy Pool Balance (FPB) is proposed here to solve this problem. The input of FPB is the current file distribution among disk pools and the output is a file migration plan indicating what files are to be migrated to which pools. FPB uses an array to classify the files by their sizes. The file classification array is dynamically calculated with a defined threshold named Tmax that defines the allowed pool disk usage deviations. File classification is the basis of file migration. FPB also defines the Immigration Pool (IP) and Emigration Pool (EP) according to the pool disk usage and File Quantity Ratio (FQR) that indicates the percentage of each category of files in each disk pool, so files with higher FQR in an EP will be migrated to IP(s) with a lower FQR of this file category. To verify this algorithm, we implemented FPB on an ATLAS Tier2 dCache production system. The results show that FPB can achieve a very good balance in both free space and file counts, and adjusting the threshold value Tmax and the correction factor to the average FQR can achieve a tradeoff between free space and file count.
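Two ingredients of the scheme above, classifying files by size and computing each pool's File Quantity Ratio (FQR) per class, can be sketched briefly. The size boundaries below are assumptions for illustration; FPB derives its classification array dynamically from the Tmax threshold.

```python
# Hedged sketch: size classification and per-pool FQR computation.

SIZE_BOUNDS = [2**20, 2**30]  # <1 MiB, 1 MiB-1 GiB, >=1 GiB (assumed)

def size_class(nbytes):
    """Index of the size class a file falls into."""
    return sum(nbytes >= b for b in SIZE_BOUNDS)

def fqr(pool_files):
    """pool_files: file sizes in one pool -> fraction of the pool's
    files in each size class."""
    counts = [0] * (len(SIZE_BOUNDS) + 1)
    for n in pool_files:
        counts[size_class(n)] += 1
    return [c / len(pool_files) for c in counts]
```

Comparing per-class FQR values between pools is what lets the algorithm pick emigration and immigration pools for each file category.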
Channel allocation and load balancing in totally mobile wireless networks
NASA Astrophysics Data System (ADS)
Cui, Wei; Bassiouni, Mostafa A.
2000-07-01
Previous studies on totally mobile wireless networks (TMWN) have been limited to non-hierarchical architectures. In this paper, we study a two-tier cellular architecture for TMWN. Under the constraints of equal power consumption, the two tier system achieves improvement over the one-tier system, especially at light and medium load levels. Performance tests have also shown that handoff prioritization can be achieved by restricting the use of the umbrella channels. Further improvement for the two-tier system was obtained by load balancing strategies with respect to the allocation of channels to the different cells.
Efficient Load Balancing and Data Remapping for Adaptive Grid Calculations
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Biswas, Rupak
1997-01-01
Mesh adaption is a powerful tool for efficient unstructured- grid computations but causes load imbalance among processors on a parallel machine. We present a novel method to dynamically balance the processor workloads with a global view. This paper presents, for the first time, the implementation and integration of all major components within our dynamic load balancing strategy for adaptive grid calculations. Mesh adaption, repartitioning, processor assignment, and remapping are critical components of the framework that must be accomplished rapidly and efficiently so as not to cause a significant overhead to the numerical simulation. Previous results indicated that mesh repartitioning and data remapping are potential bottlenecks for performing large-scale scientific calculations. We resolve these issues and demonstrate that our framework remains viable on a large number of processors.
Load Balancing Strategies for Multi-Block Overset Grid Applications
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Biswas, Rupak; Lopez-Benitez, Noe; Biegel, Bryan (Technical Monitor)
2002-01-01
The multi-block overset grid method is a powerful technique for high-fidelity computational fluid dynamics (CFD) simulations about complex aerospace configurations. The solution process uses a grid system that discretizes the problem domain by using separately generated but overlapping structured grids that periodically update and exchange boundary information through interpolation. For efficient high performance computations of large-scale realistic applications using this methodology, the individual grids must be properly partitioned among the parallel processors. Overall performance, therefore, largely depends on the quality of load balancing. In this paper, we present three different load balancing strategies for overset grids and analyze their effects on the parallel efficiency of a Navier-Stokes CFD application running on an SGI Origin2000 machine.
Monitoring dynamic loads on wind tunnel force balances
NASA Technical Reports Server (NTRS)
Ferris, Alice T.; White, William C.
1989-01-01
Two devices have been developed at NASA Langley to monitor the dynamic loads incurred during wind-tunnel testing. The Balance Dynamic Display Unit (BDDU) displays and monitors the combined static and dynamic forces and moments in the orthogonal axes. The Balance Critical Point Analyzer scales and sums each normalized signal from the BDDU to obtain combined dynamic and static signals that represent the dynamic loads at predefined high-stress points. The display of each instrument multiplexes six analog signals such that each channel is displayed sequentially as one-sixth of the horizontal axis on a single oscilloscope trace. This display format permits the operator to quickly and easily monitor the combined static and dynamic levels of up to six channels at the same time.
Data Partitioning and Load Balancing in Parallel Disk Systems
NASA Technical Reports Server (NTRS)
Scheuermann, Peter; Weikum, Gerhard; Zabback, Peter
1997-01-01
Parallel disk systems provide opportunities for exploiting I/O parallelism in two possible ways, namely via inter-request and intra-request parallelism. In this paper we discuss the main issues in performance tuning of such systems, namely striping and load balancing, and show their relationship to response time and throughput. We outline the main components of an intelligent, self-reliant file system that aims to optimize striping by taking into account the requirements of the applications, and performs load balancing by judicious file allocation and dynamic redistribution of the data when access patterns change. Our system uses simple but effective heuristics that incur little overhead. We present performance experiments based on synthetic workloads and real-life traces.
Dual strain gage balance system for measuring light loads
NASA Technical Reports Server (NTRS)
Roberts, Paul W. (Inventor)
1991-01-01
A dual strain gage balance system for measuring normal and axial forces and pitching moment of a metric airfoil model imparted by aerodynamic loads applied to the airfoil model during wind tunnel testing includes a pair of non-metric panels being rigidly connected to and extending towards each other from opposite sides of the wind tunnel, and a pair of strain gage balances, each connected to one of the non-metric panels and to one of the opposite ends of the metric airfoil model for mounting the metric airfoil model between the pair of non-metric panels. Each strain gage balance has a first measuring section for mounting a first strain gage bridge for measuring normal force and pitching moment and a second measuring section for mounting a second strain gage bridge for measuring axial force.
Selective randomized load balancing and mesh networks with changing demands
NASA Astrophysics Data System (ADS)
Shepherd, F. B.; Winzer, P. J.
2006-05-01
We consider the problem of building cost-effective networks that are robust to dynamic changes in demand patterns. We compare several architectures using demand-oblivious routing strategies. Traditional approaches include single-hop architectures based on a (static or dynamic) circuit-switched core infrastructure and multihop (packet-switched) architectures based on point-to-point circuits in the core. To address demand uncertainty, we seek minimum cost networks that can carry the class of hose demand matrices. Apart from shortest-path routing, Valiant's randomized load balancing (RLB), and virtual private network (VPN) tree routing, we propose a third, highly attractive approach: selective randomized load balancing (SRLB). This is a blend of dual-hop hub routing and randomized load balancing that combines the advantages of both architectures in terms of network cost, delay, and delay jitter. In particular, we give empirical analyses for the cost (in terms of transport and switching equipment) for the discussed architectures, based on three representative carrier networks. Of these three networks, SRLB maintains the resilience properties of RLB while achieving significant cost reduction over all other architectures, including RLB and multihop Internet protocol/multiprotocol label switching (IP/MPLS) networks using VPN-tree routing.
Economic load dispatch using improved gravitational search algorithm
NASA Astrophysics Data System (ADS)
Huang, Yu; Wang, Jia-rong; Guo, Feng
2016-03-01
This paper presents an improved gravitational search algorithm (IGSA) to solve the economic load dispatch (ELD) problem. In order to avoid premature convergence to a local optimum, mutation processing is applied to the GSA. The IGSA is applied to an economic load dispatch problem with valve-point effects, comprising 13 generators and a load demand of 2520 MW. Calculation results show that the algorithm can deal with ELD problems with high stability.
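The valve-point effect mentioned above makes each generator's fuel-cost curve non-smooth: a quadratic term plus a rectified sine. A minimal sketch of the objective such an algorithm minimizes follows; the coefficients are illustrative, not the 13-generator test data.

```python
# Hedged sketch of the valve-point-loaded fuel cost model:
#   F(P) = a + b*P + c*P^2 + |e * sin(f * (Pmin - P))|
import math

def fuel_cost(P, a, b, c, e, f, Pmin):
    return a + b * P + c * P**2 + abs(e * math.sin(f * (Pmin - P)))

def total_cost(powers, coeffs):
    """Sum of per-generator costs; coeffs is a list of
    (a, b, c, e, f, Pmin) tuples, one per generator."""
    return sum(fuel_cost(P, *k) for P, k in zip(powers, coeffs))
```

The absolute-value sine term introduces many local minima, which is exactly why the paper adds mutation to the gravitational search to escape them.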
An efficient algorithm using matrix methods to solve wind tunnel force-balance equations
NASA Technical Reports Server (NTRS)
Smith, D. L.
1972-01-01
An iterative procedure applying matrix methods to accomplish an efficient algorithm for automatic computer reduction of wind-tunnel force-balance data has been developed. Balance equations are expressed in a matrix form that is convenient for storing balance sensitivities and interaction coefficient values for online or offline batch data reduction. The convergence of the iterative values to a unique solution of this system of equations is investigated, and it is shown that for balances which satisfy the criteria discussed, this type of solution does occur. Methods for making sensitivity adjustments and initial load effect considerations in wind-tunnel applications are also discussed, and the logic for determining the convergence accuracy limits for the iterative solution is given. This more efficient data reduction program is compared with the technique presently in use at the NASA Langley Research Center, and computational times on the order of one-third or less are demonstrated by use of this new program.
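The iteration described above can be sketched for a toy two-component balance: readings R relate to loads F through primary sensitivities S plus smaller interaction terms, so one solves the fixed point F = (R - interactions(F)) / S. The interaction model below is an invented illustration, not the Langley coefficient matrices.

```python
# Hedged sketch of iterative force-balance data reduction.

def solve_loads(R, S, interact, tol=1e-9, max_iter=100):
    """R: gage readings; S: diagonal primary sensitivities;
    interact(F): per-channel interaction corrections.
    Iterates F_{k+1} = (R - interact(F_k)) / S until converged."""
    F = [r / s for r, s in zip(R, S)]  # first guess ignores interactions
    for _ in range(max_iter):
        corr = interact(F)
        F_new = [(r - c) / s for r, c, s in zip(R, corr, S)]
        if max(abs(a - b) for a, b in zip(F_new, F)) < tol:
            return F_new
        F = F_new
    return F
```

Because interaction terms are small relative to the primary sensitivities, the iteration is a contraction and converges in a handful of steps, which is consistent with the convergence criteria the paper investigates.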
Population-based learning of load balancing policies for a distributed computer system
NASA Technical Reports Server (NTRS)
Mehra, Pankaj; Wah, Benjamin W.
1993-01-01
Effective load-balancing policies use dynamic resource information to schedule tasks in a distributed computer system. We present a novel method for automatically learning such policies. At each site in our system, we use a comparator neural network to predict the relative speedup of an incoming task using only the resource-utilization patterns obtained prior to the task's arrival. Outputs of these comparator networks are broadcast periodically over the distributed system, and the resource schedulers at each site use these values to determine the best site for executing an incoming task. The delays incurred in propagating workload information and tasks from one site to another, as well as the dynamic and unpredictable nature of workloads in multiprogrammed multiprocessors, may cause the workload pattern at the time of execution to differ from patterns prevailing at the times of load-index computation and decision making. Our load-balancing policy accommodates this uncertainty by using certain tunable parameters. We present a population-based machine-learning algorithm that adjusts these parameters in order to achieve high average speedups with respect to local execution. Our results show that our load-balancing policy, when combined with the comparator neural network for workload characterization, is effective in exploiting idle resources in a distributed computer system.
Priority-rotating DBA with adaptive load balance for reconfigurable WDM/TDM PON
NASA Astrophysics Data System (ADS)
Xia, Weidong; Gan, Chaoqin; Xie, Weilun; Ni, Cuiping
2015-12-01
To the wavelength-division multiplexing/time-division multiplexing passive optical network (WDM/TDM PON) architecture that implements wavelength sharing and traffic redirection, a priority-rotating dynamic bandwidth allocation (DBA) algorithm is proposed in this paper. The priority of each ONU is set and rotated to meet the bandwidth demand and guarantee the fairness among optical network units (ONUs). The bandwidth allocation for priority queues is employed to avoid bandwidth monopolization and over-allocation. The bandwidth allocation for high-loaded situation and redirected traffic are discussed to achieve adaptive load balance over wavelengths and among ONUs. The simulation results show a good performance of the proposed algorithm in throughput rate and average packet delay.
NASA Technical Reports Server (NTRS)
Richardson, J.; Labbe, M.; Belala, Y.; Leduc, Vincent
1994-01-01
The requirement for improving aircraft utilization and responsiveness in airlift operations has been recognized for quite some time by the Canadian Forces. To date, the utilization of scarce airlift resources has been planned mainly through the employment of manpower-intensive manual methods in combination with the expertise of highly qualified personnel. In this paper, we address the problem of facilitating the load planning process for military cargo aircraft through the development of a computer-based system. We introduce TALBAS (Transport Aircraft Loading and BAlancing System), a knowledge-based system designed to assist personnel involved in preparing valid load plans for the C-130 Hercules aircraft. The main features of this system, accessible through a convivial graphical user interface, consist of the automatic generation of valid cargo arrangements given a list of items to be transported, the user definition of load plans, and the automatic validation of such load plans.
NASA Astrophysics Data System (ADS)
Nagiyev, A. E.; Sherstnyova, A. I.; Botygin, I. A.; Galanova, N. Y.
2016-06-01
The results of statistical model experiments on various load balancing algorithms in distributed computing systems are presented. Software tools were developed that allow a virtual infrastructure of a distributed computing system to be created in accordance with the intended objective of the research, which is focused on multi-agent and multithreaded data processing. A scheme for controlling the processing of requests from terminal devices, providing effective dynamic horizontal scaling of computing power at peak loads, is proposed.
Biologically inspired load balancing mechanism in neocortical competitive learning
Tal, Amir; Peled, Noam; Siegelmann, Hava T.
2014-01-01
A unique delayed self-inhibitory pathway mediated by layer 5 Martinotti Cells was studied in a biologically inspired neural network simulation. Inclusion of this pathway along with layer 5 basket cell lateral inhibition caused balanced competitive learning, which led to the formation of neuronal clusters as were indeed reported in the same region. Martinotti pathway proves to act as a learning “conscience,” causing overly successful regions in the network to restrict themselves and let others fire. It thus spreads connectivity more evenly throughout the net and solves the “dead unit” problem of clustering algorithms in a local and biologically plausible manner. PMID:24653679
Balanced 0, + or - Matrices. Part 2. Recognition Algorithm
1994-01-22
Conforti, Michele; Cornuéjols, Gérard; Kapoor, Ajai; Vušković, Kristina (Dipartimento di Matematica Pura ed Applicata, Università di Padova, Via Belzoni 7, 35131 Padova, Italy)
This paper presents a recognition algorithm for balanced 0, ±1 matrices. The algorithm is based on a decomposition theorem proved in a companion paper.
Estimating nutrient loadings using chemical mass balance approach.
Jain, C K; Singhal, D C; Sharma, M K
2007-11-01
The river Hindon is one of the important tributaries of the river Yamuna in western Uttar Pradesh (India) and carries pollution loads from various municipal and industrial units and surrounding agricultural areas. The main sources of pollution in the river include municipal wastes from the Saharanpur, Muzaffarnagar and Ghaziabad urban areas and industrial effluents of sugar, pulp and paper, distilleries and other miscellaneous industries, through tributaries as well as direct inputs. In this paper, a chemical mass balance approach has been used to assess the contribution from non-point sources of pollution to the river. The river system has been divided into three stretches depending on the land use pattern. The contribution of point sources in the upper and lower stretches is 95 and 81%, respectively, of the total flow of the river, while there is no point source input in the middle stretch. Mass balance calculations indicate that the contributions of nitrate and phosphate from non-point sources amount to 15.5 and 6.9% in the upper stretch and 13.1 and 16.6% in the lower stretch, respectively. Observed differences in the load along the river may be attributed to uncharacterized sources of pollution due to agricultural activities, remobilization from or entrainment of contaminated bottom sediments, ground water contribution, or a combination of these sources.
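The mass-balance arithmetic behind this approach is simple: whatever load appears downstream that is not explained by the upstream load and the known point sources is attributed to non-point sources. The numbers below are illustrative, not the Hindon data.

```python
# Hedged arithmetic sketch of the chemical mass balance approach.

def nonpoint_load(downstream, upstream, point_sources):
    """All loads in consistent units (e.g. kg/day). The residual is
    attributed to non-point (diffuse) sources in the stretch."""
    return downstream - upstream - sum(point_sources)
```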
A network flow model for load balancing in circuit-switched multicomputers
NASA Technical Reports Server (NTRS)
Bokhari, Shahid H.
1993-01-01
In multicomputers that utilize circuit switching or wormhole routing, communication overhead depends largely on link contention - the variation due to distance between nodes is negligible. This has a major impact on the load balancing problem. In this case, there are some nodes with excess load (sources) and others with deficit load (sinks) and it is required to find a matching of sources to sinks that avoids contention. The problem is made complex by the hardwired routing on currently available machines: the user can control only which nodes communicate but not how the messages are routed. Network flow models of message flow in the mesh and the hypercube were developed to solve this problem. The crucial property of these models is the correspondence between minimum cost flows and correctly routed messages. To solve a given load balancing problem, a minimum cost flow algorithm is applied to the network. This permits one to determine efficiently a maximum contention free matching of sources to sinks which, in turn, tells one how much of the given imbalance can be eliminated without contention.
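The formulation above maps directly to a minimum cost flow computation: sources with excess load, sinks with deficits, and arc costs/capacities encoding contention. A compact successive-shortest-path solver (Bellman-Ford based, fine for toy sizes) illustrates the machinery; the example graph is invented, not a mesh or hypercube model from the paper.

```python
# Hedged sketch: unit-augmenting min-cost max-flow via Bellman-Ford
# successive shortest paths. edges: [u, v, capacity, cost].

def min_cost_flow(n, edges, s, t, maxf):
    graph = [[] for _ in range(n)]
    for u, v, cap, cost in edges:
        graph[u].append([v, cap, cost, len(graph[v])])
        graph[v].append([u, 0, -cost, len(graph[u]) - 1])  # residual arc
    flow = total_cost = 0
    while flow < maxf:
        dist = [float('inf')] * n
        dist[s] = 0
        prev = [None] * n
        updated = True
        while updated:                     # Bellman-Ford relaxation
            updated = False
            for u in range(n):
                if dist[u] == float('inf'):
                    continue
                for i, (v, cap, cost, _) in enumerate(graph[u]):
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v] = dist[u] + cost
                        prev[v] = (u, i)
                        updated = True
        if dist[t] == float('inf'):
            break                          # no augmenting path remains
        v = t                              # push one unit along the path
        while v != s:
            u, i = prev[v]
            graph[u][i][1] -= 1
            graph[v][graph[u][i][3]][1] += 1
            v = u
        flow += 1
        total_cost += dist[t]
    return flow, total_cost
```

In the load balancing setting, the minimum cost flow's correspondence with correctly routed messages is what guarantees the resulting source-to-sink matching is contention free.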
A Baseline Load Schedule for the Manual Calibration of a Force Balance
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Gisler, R.
2013-01-01
A baseline load schedule for the manual calibration of a force balance was developed that takes current capabilities at the NASA Ames Balance Calibration Laboratory into account. The load schedule consists of 18 load series with a total of 194 data points. It was designed to satisfy six requirements: (i) positive and negative loadings should be applied for each load component; (ii) at least three loadings should be applied between 0 % and 100 % load capacity; (iii) normal and side force loadings should be applied at the forward gage location, the aft gage location, and the balance moment center; (iv) the balance should be used in UP and DOWN orientation to get axial force loadings; (v) the constant normal and side force approaches should be used to get the rolling moment loadings; (vi) rolling moment loadings should be obtained for 0, 90, 180, and 270 degrees balance orientation. Three different approaches are also reviewed that may be used to independently estimate the natural zeros of the balance. These three approaches provide gage output differences that may be used to estimate the weight of both the metric and non-metric part of the balance. Manual calibration data of NASA's MK29A balance and machine calibration data of NASA's MC60D balance are used to illustrate and evaluate different aspects of the proposed baseline load schedule design.
A Baseline Load Schedule for the Manual Calibration of a Force Balance
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Gisler, R.
2013-01-01
A baseline load schedule for the manual calibration of a force balance is defined that takes current capabilities at the NASA Ames Balance Calibration Laboratory into account. The chosen load schedule consists of 18 load series with a total of 194 data points. It was designed to satisfy six requirements: (i) positive and negative loadings should be applied for each load component; (ii) at least three loadings should be applied between 0 % and 100 % load capacity; (iii) normal and side force loadings should be applied at the forward gage location, aft gage location, and the balance moment center; (iv) the balance should be used in "up" and "down" orientation to get positive and negative axial force loadings; (v) the constant normal and side force approaches should be used to get the rolling moment loadings; (vi) rolling moment loadings should be obtained for 0, 90, 180, and 270 degrees balance orientation. In addition, three different approaches are discussed in the paper that may be used to independently estimate the natural zeros, i.e., the gage outputs of the absolute load datum of the balance. These three approaches provide gage output differences that can be used to estimate the weight of both the metric and non-metric part of the balance. Data from the calibration of a six-component force balance will be used in the final manuscript of the paper to illustrate characteristics of the proposed baseline load schedule.
Strain gage selection in loads equations using a genetic algorithm
NASA Technical Reports Server (NTRS)
1994-01-01
Traditionally, structural loads are measured using strain gages. A loads calibration test must be done before loads can be accurately measured. In one measurement method, a series of point loads is applied to the structure, and loads equations are derived via the least squares curve fitting algorithm using the strain gage responses to the applied point loads. However, many research structures are highly instrumented with strain gages, and the number and selection of gages used in a loads equation can be problematic. This paper presents an improved technique using a genetic algorithm to choose the strain gages used in the loads equations. Also presented is a comparison of the genetic algorithm's performance with the current T-value technique and a variant known as the Best Step-down technique. Examples are shown using aerospace vehicle wings of high and low aspect ratio. In addition, a significant limitation in the current methods is revealed. The genetic algorithm arrived at a comparable or superior set of gages with significantly less human effort, and could be applied in instances when the current methods could not.
Dynamic load balancing in a concurrent plasma PIC code on the JPL/Caltech Mark III hypercube
Liewer, P.C.; Leaver, E.W.; Decyk, V.K.; Dawson, J.M.
1990-12-31
Dynamic load balancing has been implemented in a concurrent one-dimensional electromagnetic plasma particle-in-cell (PIC) simulation code using a method which adds very little overhead to the parallel code. In PIC codes, the orbits of many interacting plasma electrons and ions are followed as an initial value problem as the particles move in electromagnetic fields calculated self-consistently from the particle motions. The code was implemented using the GCPIC algorithm in which the particles are divided among processors by partitioning the spatial domain of the simulation. The problem is load-balanced by partitioning the spatial domain so that each partition has approximately the same number of particles. During the simulation, the partitions are dynamically recreated as the spatial distribution of the particles changes in order to maintain processor load balance.
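The repartitioning step described above (spatial partitions holding roughly equal particle counts) can be sketched with a cumulative histogram. This is an illustrative stand-in for the GCPIC partitioner; the grid resolution and function names are invented here.

```python
import numpy as np

def balance_partitions(positions, n_proc, length=1.0, n_cells=64):
    """Place 1-D partition boundaries so each processor gets roughly
    the same number of particles."""
    counts, edges = np.histogram(positions, bins=n_cells, range=(0.0, length))
    cum = np.cumsum(counts)
    bounds = [0.0]
    for k in range(1, n_proc):
        # First cell whose cumulative count reaches k/n_proc of the particles.
        idx = np.searchsorted(cum, k * cum[-1] / n_proc)
        bounds.append(edges[idx + 1])
    bounds.append(length)
    return np.array(bounds)
```

In a PIC time loop, this would be re-evaluated whenever the particle distribution drifts; the balance achieved is only as fine as the histogram cells.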
Carmichael, H.
1953-01-01
A torsional-type analytical balance designed to arrive at its equilibrium point more quickly than previous balances is described. In order to prevent external heat sources creating air currents inside the balance casing that would retard the attainment of equilibrium conditions, a relatively thick casing shaped as an inverted U is placed over the load support arms and the balance beam. This casing is of a metal of good thermal conductivity characteristics, such as copper or aluminum, in order that heat applied to one portion of the balance is quickly conducted to all other sensitive areas, thus effectively preventing the formation of air currents caused by unequal heating of the balance.
A new fitting algorithm for petrological mass-balance problems
NASA Astrophysics Data System (ADS)
Krawczynski, M. J.; Olive, J. L.
2011-12-01
We present a suite of Matlab programs aimed at solving linear mixing problems in which a composition must be assessed as the convex linear mixture of a known number of end-member compositions (e.g. mineral and melt chemical analyses). It is often the case in experimental petrology that answering a geochemical question involves solving a system of linear mass balance equations. Calculating phase proportions of an experimental charge to determine crystallinity, comparing experimental phase compositions to determine melting/crystallization reactions, and checking the chemical closure of the experimental system are a few examples of these types of problems. Our algorithm is based on the isometric log-ratio transform, a one-to-one mapping between composition space and an "unconstrained" Euclidean space where standard inversion procedures apply (Egozcue et al., 2003). It allows the consideration of a-priori knowledge and uncertainties on end-member and bulk compositions as well as phase proportions. It offers an improvement over the typical compositional space algorithms (Bryan et al., 1969; Albarede and Provost, 1977). We have tested our method on synthetic and experimental data sets, and report the uncertainties on phase abundances. The algorithm presented here eliminates the common problem of calculated phase proportions that produce negative mass balance coefficients. In addition, we show how the method can be used to estimate uncertainties on the coefficients for experimentally determined mantle melting equations.
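The paper works in ilr-transformed space; as a simpler illustration of the same underlying mass-balance problem, the sketch below recovers non-negative phase proportions by projected gradient descent, which likewise cannot produce negative coefficients. The end-member compositions in the test are invented for the example.

```python
import numpy as np

def phase_proportions(endmembers, bulk, iters=20000):
    """Solve endmembers @ f ~= bulk for non-negative phase fractions f.
    endmembers: (n_oxides, n_phases) matrix; bulk: (n_oxides,) vector."""
    A = np.asarray(endmembers, float)
    b = np.asarray(bulk, float)
    lr = 1.0 / np.linalg.norm(A.T @ A, 2)      # step size below the Lipschitz bound
    f = np.full(A.shape[1], 1.0 / A.shape[1])  # start from equal proportions
    for _ in range(iters):
        grad = A.T @ (A @ f - b)
        f = np.clip(f - lr * grad, 0.0, None)  # project back onto f >= 0
    return f
```

Because every iterate is projected onto the non-negative orthant, the returned proportions can never go negative, which is the failure mode of unconstrained least squares that the abstract highlights.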
A Hybrid Ant Colony Algorithm for Loading Pattern Optimization
NASA Astrophysics Data System (ADS)
Hoareau, F.
2014-06-01
Electricité de France (EDF) operates 58 nuclear power plants (NPPs) of the Pressurized Water Reactor (PWR) type. The loading pattern (LP) optimization of these NPPs is currently done by EDF expert engineers. Within this framework, EDF R&D has developed automatic optimization tools that assist the experts. The latter can resort, for instance, to a loading pattern optimization software based on an ant colony algorithm. This paper presents an analysis of the search space of a few realistic loading pattern optimization problems. This analysis leads us to introduce a hybrid algorithm based on ant colony optimization and a local search method. We then show that this new algorithm is able to generate loading patterns of good quality.
GRACOS: Scalable and Load Balanced P3M Cosmological N-body Code
NASA Astrophysics Data System (ADS)
Shirokov, Alexander; Bertschinger, Edmund
2010-10-01
The GRACOS (GRAvitational COSmology) code, a parallel implementation of the particle-particle/particle-mesh (P3M) algorithm for distributed memory clusters, uses a hybrid method for both computation and domain decomposition. Long-range forces are computed using a Fourier transform gravity solver on a regular mesh; the mesh is distributed across parallel processes using a static one-dimensional slab domain decomposition. Short-range forces are computed by direct summation of close pairs; particles are distributed using a dynamic domain decomposition based on a space-filling Hilbert curve. A nearly-optimal method was devised to dynamically repartition the particle distribution so as to maintain load balance even for extremely inhomogeneous mass distributions. Tests using 800^3-particle simulations on a 40-processor beowulf cluster showed good load balance and scalability up to 80 processes. There are limits on scalability imposed by communication and extreme clustering which may be removed by extending the algorithm to include adaptive mesh refinement.
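The Hilbert-curve decomposition can be illustrated with the standard iterative cell-to-curve-index mapping: compute each particle's key, sort, and cut the sorted list into chunks of equal particle count. This is a 2-D sketch with invented names, not the GRACOS implementation (which works in 3-D and repartitions incrementally).

```python
import numpy as np

def hilbert_key(n, x, y):
    """Map cell (x, y) on an n-by-n grid (n a power of two) to its
    position along the Hilbert curve (standard iterative xy-to-d form)."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                      # rotate/flip the quadrant
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d

def repartition(cells_x, cells_y, n, n_proc):
    """Sort particles along the curve and cut into n_proc chunks of
    near-equal particle count; returns the owning process of each particle."""
    keys = np.array([hilbert_key(n, int(x), int(y)) for x, y in zip(cells_x, cells_y)])
    order = np.argsort(keys, kind="stable")
    owner = np.empty(len(keys), dtype=int)
    for p, chunk in enumerate(np.array_split(order, n_proc)):
        owner[chunk] = p
    return owner
```

Because consecutive curve positions are spatially adjacent cells, each process ends up owning a compact region, which keeps short-range pair summation mostly local.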
Combined Load Diagram for a Wind Tunnel Strain-Gage Balance
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2010-01-01
Combined Load Diagrams for Direct-Read, Force, and Moment Balances are discussed in great detail in the paper. The diagrams, if compared with a corresponding combined load plot of a balance calibration data set, may be used to visualize and interpret basic relationships between the applied balance calibration loads and the load components at the forward and aft gage of a strain-gage balance. Lines of constant total force and moment are identified in the diagrams. In addition, the lines of pure force and pure moment are highlighted. Finally, lines of constant moment arm are depicted. It is also demonstrated that each quadrant of a Combined Load Diagram has specific regions where the applied total calibration force is at, between, or outside of the balance gage locations. Data from the manual calibration of a Force Balance is used to illustrate the application of a Combined Load Diagram to a realistic data set.
A single-stage optical load-balanced switch for data centers.
Huang, Qirui; Yeo, Yong-Kee; Zhou, Luying
2012-10-22
Load balancing is an attractive technique to achieve maximum throughput and optimal resource utilization in large-scale switching systems. However, current electronic load-balanced switches suffer from severe problems in implementation cost, power consumption and scaling. To overcome these problems, in this paper we propose a single-stage optical load-balanced switch architecture based on an arrayed waveguide grating router (AWGR) in conjunction with fast tunable lasers. By reuse of the fast tunable lasers, the switch achieves both functions of load balancing and switching through the AWGR. With this architecture, proof-of-concept experiments have been conducted to investigate the feasibility of the optical load-balanced switch and to examine its physical performance. Compared to three-stage load-balanced switches, the reported switch needs only half of the optical devices such as tunable lasers and AWGRs, which can provide a cost-effective solution for future data centers.
The Power of Slightly More than One Sample in Randomized Load Balancing
2015-04-26
Approved for public release; distribution is unlimited. Keywords: Load Balancing, Mean-Field Analysis, Cloud Computing. In many computing and networking applications
Electricity Load Forecasting Using Support Vector Regression with Memetic Algorithms
Hu, Zhongyi; Xiong, Tao
2013-01-01
Electricity load forecasting is an important issue that is widely explored and examined in the power systems operation literature and in commercial transactions in electricity markets. Among the existing forecasting models, support vector regression (SVR) has gained much attention. Considering that the performance of SVR highly depends on its parameters, this study proposed a firefly algorithm (FA) based memetic algorithm (FA-MA) to appropriately determine the parameters of the SVR forecasting model. In the proposed FA-MA algorithm, the FA is applied to explore the solution space, and pattern search is used to conduct individual learning and thus enhance the exploitation of FA. Experimental results confirm that the proposed FA-MA based SVR model can not only yield more accurate forecasting results than the other four evolutionary algorithm based SVR models and three well-known forecasting models but also outperform the hybrid algorithms in the related existing literature. PMID:24459425
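The memetic structure described above (firefly global moves refined by pattern-search local learning) can be sketched on a toy objective standing in for the SVR cross-validation error; all parameter values here are arbitrary, and this is not the authors' FA-MA code.

```python
import numpy as np

def pattern_search(f, x, lo, hi, step=0.1, tol=1e-4):
    """Local refinement: probe +/- step along each coordinate and halve
    the step whenever no move improves the objective."""
    x = np.asarray(x, float).copy()
    fx = f(x)
    while step > tol:
        improved = False
        for k in range(len(x)):
            for s in (step, -step):
                y = x.copy()
                y[k] = np.clip(x[k] + s, lo[k], hi[k])
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
                    break
        if not improved:
            step *= 0.5
    return x, fx

def firefly_memetic(f, bounds, n=15, iters=60, beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
    """Firefly global search followed by pattern-search refinement of the best point."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    X = rng.uniform(lo, hi, (n, len(lo)))
    for _ in range(iters):
        F = np.array([f(x) for x in X])
        for i in range(n):
            for j in range(n):
                if F[j] < F[i]:  # move firefly i toward brighter (lower-cost) j
                    beta = beta0 * np.exp(-gamma * np.sum((X[i] - X[j]) ** 2))
                    X[i] = np.clip(X[i] + beta * (X[j] - X[i])
                                   + alpha * rng.uniform(-0.5, 0.5, len(lo)), lo, hi)
                    F[i] = f(X[i])
        alpha *= 0.97  # gradually damp the random walk
    best = X[np.argmin([f(x) for x in X])]
    return pattern_search(f, best, lo, hi)
```

In the real FA-MA, `f` would be a cross-validation error evaluated over the SVR parameters (e.g. cost and kernel width); the division of labor is the same: the swarm explores, the pattern search exploits.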
The Plasma Simulation Code: A modern particle-in-cell code with patch-based load-balancing
NASA Astrophysics Data System (ADS)
Germaschewski, Kai; Fox, William; Abbott, Stephen; Ahmadi, Narges; Maynard, Kristofor; Wang, Liang; Ruhl, Hartmut; Bhattacharjee, Amitava
2016-08-01
This work describes the Plasma Simulation Code (PSC), an explicit, electromagnetic particle-in-cell code with support for different order particle shape functions. We review the basic components of the particle-in-cell method as well as the computational architecture of the PSC code that allows support for modular algorithms and data structures in the code. We then describe and analyze in detail a distinguishing feature of PSC: patch-based load balancing using space-filling curves, which is shown to lead to major efficiency gains over unbalanced methods and a previously used simpler balancing method.
Self-balancing beam permits safe, easy load handling under overhang
NASA Technical Reports Server (NTRS)
Edwards, O. H.
1964-01-01
The use of a self-balancing I-beam with a counterweight and motor simplifies moving heavy loads that are inaccessible for cranes. The beam cannot be overloaded: if the load exceeds the rating, the counterweight will not balance it, so the beam acts as an automatic safety device.
Dynamic load balancing of matrix-vector multiplications on roadrunner compute nodes
Sancho Pitarch, Jose Carlos
2009-01-01
Hybrid architectures that combine general purpose processors with accelerators are being adopted in several large-scale systems such as the petaflop Roadrunner supercomputer at Los Alamos. In this system, dual-core Opteron host processors are tightly coupled with PowerXCell 8i processors within each compute node. In this kind of hybrid architecture, an accelerated mode of operation is typically used to offload performance hotspots in the computation to the accelerators. In this paper we explore the suitability of a variant of this acceleration mode in which the performance hotspots are actually shared between the host and the accelerators. To achieve this we have designed a new load balancing algorithm, which is optimized for the Roadrunner compute nodes, to dynamically distribute computation and associated data between the host and the accelerators at runtime. Results are presented using this approach for sparse and dense matrix-vector multiplications that show load-balancing can improve performance by up to 24% over solely using the accelerators.
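The runtime split between host and accelerator can be sketched as a simple feedback rule: measure both sides' times for the current row split and move the split toward equal completion times. This is a hedged illustration of the general idea, not the paper's Roadrunner-specific algorithm; the damping factor is invented.

```python
def rebalance_split(frac, t_host, t_acc, damping=0.5):
    """Given the fraction of rows on the host and the measured times for
    the last multiplication, move the split toward equal runtimes."""
    r_host = frac / t_host          # observed host throughput (rows/sec)
    r_acc = (1.0 - frac) / t_acc    # observed accelerator throughput
    target = r_host / (r_host + r_acc)  # split that balances these throughputs
    return frac + damping * (target - frac)
```

Applied once per iteration of a solver, the split converges to the ratio of the two throughputs, so neither side idles waiting for the other.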
Heyland, Mark; Trepczynski, Adam; Duda, Georg N; Zehn, Manfred; Schaser, Klaus-Dieter; Märdian, Sven
2015-12-01
Selection of boundary constraints may influence the amount and distribution of loads. The purpose of this study is to analyze the potential of inertia relief and follower load to maintain the effects of musculoskeletal loads even under large deflections in patient-specific finite element models of intact or fractured bone, compared to empiric boundary constraints which have been shown to lead to physiological displacements and surface strains. The goal is to elucidate the use of boundary conditions in strain analyses of bones. Finite element models of the intact femur and a model of clinically relevant fracture stabilization by locking plate fixation were analyzed with normal walking loading conditions for different boundary conditions, specifically re-balanced loading, inertia relief and follower load. Peak principal cortex surface strains for different boundary conditions are consistent (maximum deviation 13.7%) except for inertia relief without force balancing (maximum deviation 108.4%). The influence of follower load on displacements increases with higher deflection in the fracture model (from 3% to 7% for the force-balanced model). For load-balanced models, follower load had only minor influence, though the effect increases strongly with higher deflection. Conventional constraints of fixed nodes in space should be carefully reconsidered because their type and position are challenging to justify and for their potential to introduce relevant non-physiological reaction forces. Inertia relief provides an alternative method which yields physiological strain results.
Tseng, Chinyang Henry
2016-01-01
In wireless networks, low-power Zigbee is an excellent network solution for wireless medical monitoring systems. Medical monitoring generally involves transmission of a large amount of data and easily causes bottleneck problems. Although Zigbee's AODV mesh routing provides extensible multi-hop data transmission to extend network coverage, it does not natively support load balancing and needs such a mechanism to avoid bottlenecks. To guarantee a more reliable multi-hop data transmission for life-critical medical applications, we have developed a multipath solution, called Load-Balanced Multipath Routing (LBMR), to replace Zigbee's routing mechanism. LBMR consists of three main parts: Layer Routing Construction (LRC), a Load Estimation Algorithm (LEA), and a Route Maintenance (RM) mechanism. LRC assigns nodes into different layers based on the node's distance to the medical data gateway. Nodes can have multiple next-hops delivering medical data toward the gateway. All neighboring layer-nodes exchange flow information containing the current load, which is then used by the LEA to estimate the future load of next-hops to the gateway. With LBMR, nodes can choose the neighbors with the least load as the next-hops and thus can achieve load balancing and avoid bottlenecks. Furthermore, RM can detect route failures in real-time and perform route redirection to ensure routing robustness. Since LRC and LEA prevent bottlenecks while RM ensures routing fault tolerance, LBMR provides a highly reliable routing service for medical monitoring. To evaluate these accomplishments, we compare LBMR with Zigbee's AODV and another multipath protocol, AOMDV. The simulation results demonstrate LBMR achieves better load balancing, fewer unreachable nodes, and a better packet delivery ratio than either AODV or AOMDV. PMID:27258297
Experience with automatic, dynamic load balancing and adaptive finite element computation
Wheat, S.R.; Devine, K.D.; Maccabe, A.B.
1993-10-01
Distributed memory, Massively Parallel (MP), MIMD technology has enabled the development of applications requiring computational resources previously unobtainable. Structural mechanics and fluid dynamics applications, for example, are often solved by finite element methods (FEMs) requiring millions of degrees of freedom to accurately simulate physical phenomena. Adaptive methods, which automatically refine or coarsen meshes and vary the order of accuracy of the numerical solution, offer greater robustness and computational efficiency than traditional FEMs by reducing the amount of computation required away from physical structures such as shock waves and boundary layers. On MP computers, FEMs frequently result in distributed processor load imbalances. To overcome load imbalance, many MP FEMs use static load balancing as a preprocessor to the finite element calculation. Adaptive methods complicate the load imbalance problem since the work per element is not uniform across the solution domain and changes as the computation proceeds. Therefore, dynamic load balancing is required to maintain global load balance. We describe a dynamic, fine-grained, element-based data migration system that maintains global load balance and is effective in the presence of changing work loads. Global load balance is achieved by overlapping neighborhoods of processors, where each neighborhood performs local load balancing. The method utilizes an automatic element management system library to which a programmer integrates the application's computational description. The library's flexibility supports a large class of finite element and finite difference based applications.
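Neighborhood-local balancing in the spirit described above can be sketched as first-order diffusion: each processor trades a fraction of its load difference with each neighbor, and repeated local exchanges drive the whole system toward global balance while conserving total load. This is an illustrative scheme, not the authors' element-migration library.

```python
import numpy as np

def diffuse(load, neighbors, alpha=0.25, sweeps=200):
    """First-order diffusion load balancing: every sweep, node i gains
    alpha * (w_j - w_i) from each neighbor j. Total load is conserved
    because every pairwise exchange is antisymmetric."""
    w = np.asarray(load, float).copy()
    for _ in range(sweeps):
        delta = np.zeros_like(w)
        for i, nbrs in enumerate(neighbors):
            for j in nbrs:
                delta[i] += alpha * (w[j] - w[i])
        w += delta
    return w
```

On a ring of processors with all load initially on one node, the load spreads out and converges to the uniform average using only neighbor-to-neighbor information, which is the key property of neighborhood-based schemes.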
Comparison of Iterative and Non-Iterative Strain-Gage Balance Load Calculation Methods
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2010-01-01
The accuracy of iterative and non-iterative strain-gage balance load calculation methods was compared using data from the calibration of a force balance. Two iterative and one non-iterative method were investigated. In addition, transformations were applied to balance loads in order to process the calibration data in both direct read and force balance format. NASA's regression model optimization tool BALFIT was used to generate optimized regression models of the calibration data for each of the three load calculation methods. This approach made sure that the selected regression models met strict statistical quality requirements. The comparison of the standard deviation of the load residuals showed that the first iterative method may be applied to data in both the direct read and force balance format. The second iterative method, on the other hand, implicitly assumes that the primary gage sensitivities of all balance gages exist. Therefore, the second iterative method only works if the given balance data is processed in force balance format. The calibration data set was also processed using the non-iterative method. Standard deviations of the load residuals for the three load calculation methods were compared. Overall, the standard deviations show very good agreement. The load prediction accuracies of the three methods appear to be compatible as long as regression models used to analyze the calibration data meet strict statistical quality requirements. Recent improvements of the regression model optimization tool BALFIT are also discussed in the paper.
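A toy version of an iterative load calculation illustrates the idea: bridge outputs are modeled as a linear term in the loads plus small nonlinear terms, and the loads are recovered by fixed-point iteration starting from the linear solution. The matrices and the quadratic-only nonlinearity below are invented for the example, not taken from BALFIT or any NASA balance.

```python
import numpy as np

def iterate_loads(R, C1, C2, tol=1e-10, max_iter=100):
    """Recover loads F from outputs R for the toy model
    R = C1 @ F + C2 @ (F**2), via F_{k+1} = C1^{-1} (R - C2 @ F_k**2)."""
    C1inv = np.linalg.inv(C1)
    F = C1inv @ R                       # linear-terms-only first guess
    for _ in range(max_iter):
        F_new = C1inv @ (R - C2 @ (F ** 2))
        if np.max(np.abs(F_new - F)) < tol:
            return F_new
        F = F_new
    return F
```

The iteration converges quickly when the nonlinear coefficients in C2 are small relative to the linear sensitivities in C1, which mirrors the abstract's point that convergence hinges on the primary gage sensitivities (the diagonal of the linear term) being present.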
Mori, Yoshiharu; Okumura, Hisashi
2015-12-05
Simulated tempering (ST) is a useful method to enhance sampling in molecular simulations. When ST is used, the Metropolis algorithm, which satisfies the detailed balance condition, is usually applied to calculate the transition probability. Recently, an alternative method that satisfies the global balance condition instead of the detailed balance condition was proposed by Suwa and Todo. In this study, an ST method using the Suwa-Todo algorithm is proposed. Molecular dynamics simulations with ST are performed with three algorithms (the Metropolis, heat bath, and Suwa-Todo algorithms) to calculate the transition probability. Among the three algorithms, the Suwa-Todo algorithm yields the highest acceptance ratio and the shortest autocorrelation time. These results suggest that sampling by an ST simulation with the Suwa-Todo algorithm is most efficient. In addition, because the acceptance ratio of the Suwa-Todo algorithm is higher than that of the Metropolis algorithm, the number of temperature states can be reduced by 25% for the Suwa-Todo algorithm when compared with the Metropolis algorithm.
Influence of Primary Gage Sensitivities on the Convergence of Balance Load Iterations
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert Manfred
2012-01-01
The connection between the convergence of wind tunnel balance load iterations and the existence of the primary gage sensitivities of a balance is discussed. First, basic elements of two load iteration equations that the iterative method uses in combination with results of a calibration data analysis for the prediction of balance loads are reviewed. Then, the connection between the primary gage sensitivities, the load format, the gage output format, and the convergence characteristics of the load iteration equation choices is investigated. A new criterion is also introduced that may be used to objectively determine if the primary gage sensitivity of a balance gage exists. Then, it is shown that both load iteration equations will converge as long as a suitable regression model is used for the analysis of the balance calibration data, the combined influence of nonlinear terms of the regression model is very small, and the primary gage sensitivities of all balance gages exist. The last requirement is fulfilled, e.g., if force balance calibration data is analyzed in force balance format. Finally, it is demonstrated that only one of the two load iteration equation choices, i.e., the iteration equation used by the primary load iteration method, converges if one or more primary gage sensitivities are missing. This situation may occur, e.g., if force balance calibration data is analyzed in direct read format using the original gage outputs. Data from the calibration of a six-component force balance is used to illustrate the connection between the convergence of the load iteration equation choices and the existence of the primary gage sensitivities.
Algorithms for Determining Physical Responses of Structures Under Load
NASA Technical Reports Server (NTRS)
Richards, W. Lance; Ko, William L.
2012-01-01
Ultra-efficient real-time structural monitoring algorithms have been developed to provide extensive information about the physical response of structures under load. These algorithms are driven by actual strain data to measure accurately local strains at multiple locations on the surface of a structure. Through a single point load calibration test, these structural strains are then used to calculate key physical properties of the structure at each measurement location. Such properties include the structure's flexural rigidity (the product of the structure's modulus of elasticity and its moment of inertia) and the section modulus (the moment of inertia divided by the structure's half-depth). The resulting structural properties at each location can be used to determine the structure's bending moment, shear, and structural loads in real time while the structure is in service. The amount of structural information can be maximized through the use of highly multiplexed fiber Bragg grating technology using optical time domain reflectometry and optical frequency domain reflectometry, which can provide a local strain measurement every 10 mm on a single hair-sized optical fiber. Since local strain is used as input to the algorithms, this system serves multiple purposes of measuring strains and displacements, as well as determining structural bending moment, shear, and loads for assessing real-time structural health. The first step is to install a series of strain sensors on the structure's surface in such a way as to measure bending strains at desired locations. The next step is to perform a simple ground test calibration. For a beam of length l (see example), discretized into n sections and subjected to a tip load of P that places the beam in bending, the flexural rigidity of the beam can be experimentally determined at each measurement location x. The bending moment at each station can then be determined for any general set of loads applied during operation.
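The single-point-load calibration described above can be sketched for a cantilever: a tip load P produces a known bending moment M(x) = P(l - x), so each gage's measured strain yields the local stiffness ratio EI/c, which then converts in-service strains back into moments. Names and numbers are illustrative.

```python
import numpy as np

def calibrate(strains, x, l, P):
    """Ground-test calibration of a cantilever of length l under tip load P:
    M(x) = P * (l - x) and strain = M * c / (EI), so EI/c = M(x) / strain
    at each gage station x."""
    x = np.asarray(x, float)
    strains = np.asarray(strains, float)
    return P * (l - x) / strains

def moments_in_service(strains, k):
    """Bending moment at each gage from measured strain: M = (EI/c) * strain."""
    return np.asarray(strains, float) * k
```

Once the per-station ratios k = EI/c are known, no further knowledge of the applied loads is needed: any operational strain reading converts directly to a local bending moment.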
Assessment of the Uniqueness of Wind Tunnel Strain-Gage Balance Load Predictions
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2016-01-01
A new test was developed to assess the uniqueness of wind tunnel strain-gage balance load predictions that are obtained from regression models of calibration data. The test helps balance users to gain confidence in load predictions of non-traditional balance designs. It also makes it possible to better evaluate load predictions of traditional balances that are not used as originally intended. The test works for both the Iterative and Non-Iterative Methods that are used in the aerospace testing community for the prediction of balance loads. It is based on the hypothesis that the total number of independently applied balance load components must always match the total number of independently measured bridge outputs or bridge output combinations. This hypothesis is supported by a control volume analysis of the inputs and outputs of a strain-gage balance. It is concluded from the control volume analysis that the loads and bridge outputs of a balance calibration data set must separately be tested for linear independence because it cannot always be guaranteed that a linearly independent load component set will result in linearly independent bridge output measurements. Simple linear math models for the loads and bridge outputs in combination with the variance inflation factor are used to test for linear independence. A unique and reversible mapping between the applied load component set and the measured bridge output set is guaranteed to exist if the maximum variance inflation factor of both sets is less than the literature-recommended threshold of five. Data from the calibration of a six-component force balance is used to illustrate the application of the new test to real-world data.
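The variance-inflation-factor check can be sketched directly: regress each column (a load component or bridge output) on the remaining columns and compute VIF = 1/(1 - R^2); values below five indicate an acceptably independent set. The implementation below is a generic sketch, not NASA's tool.

```python
import numpy as np

def vif(X):
    """Variance inflation factor of each column of X:
    VIF_j = 1 / (1 - R_j^2), with R_j^2 from regressing column j
    (with intercept) on all the other columns."""
    X = np.asarray(X, float)
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        A = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(len(A)), A])     # add intercept column
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # ordinary least squares
        resid = y - A @ coef
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
        out.append(1.0 / (1.0 - r2))
    return np.array(out)
```

Run separately on the applied-load matrix and on the bridge-output matrix of a calibration data set, a maximum VIF below the threshold on both sides supports the unique, reversible load/output mapping the abstract describes.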
Genetic Algorithm based Decentralized PI Type Controller: Load Frequency Control
NASA Astrophysics Data System (ADS)
Dwivedi, Atul; Ray, Goshaidas; Sharma, Arun Kumar
2016-12-01
This work presents the design of a decentralized PI-type Linear Quadratic (LQ) controller based on a genetic algorithm (GA). The proposed design technique allows considerable flexibility in defining the control objectives; it requires no knowledge of the system matrices and, moreover, avoids the solution of the algebraic Riccati equation. To illustrate the results of this work, a load-frequency control problem is considered. Simulation results reveal that the proposed GA-based scheme is an alternative and attractive approach to the load-frequency control problem from both performance and design points of view.
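As a rough illustration of the GA-based tuning idea (not the paper's controller: the one-area frequency model, gain bounds, cost function, and GA settings below are all simplifying assumptions), a tiny GA can search PI gains that reduce a quadratic cost on the frequency deviation after a step load disturbance:

```python
import random

def simulate(kp, ki, m=10.0, d=1.0, dpl=0.1, dt=0.01, t_end=10.0):
    """Euler simulation of a one-area frequency-deviation model with a PI
    controller u = -kp*f - ki*z (z = integral of f). Returns the quadratic
    cost sum(f^2)*dt that the GA minimizes."""
    f = z = cost = 0.0
    for _ in range(int(t_end / dt)):
        u = -kp * f - ki * z
        f += dt * (-d * f + u - dpl) / m
        z += dt * f
        cost += f * f * dt
    return cost

def ga_tune(pop_size=20, gens=15, seed=1):
    """Tiny genetic algorithm over (kp, ki) in [0, 10]^2: truncation
    selection from the best half, blend crossover, Gaussian mutation,
    and elitism of one."""
    rng = random.Random(seed)
    pop = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=lambda g: simulate(*g))
        nxt = [scored[0]]                       # keep the elite
        while len(nxt) < pop_size:
            a, b = rng.sample(scored[:pop_size // 2], 2)
            w = rng.random()
            child = tuple(min(10.0, max(0.0,
                          w * x + (1 - w) * y + rng.gauss(0, 0.3)))
                          for x, y in zip(a, b))
            nxt.append(child)
        pop = nxt
    return min(pop, key=lambda g: simulate(*g))

kp, ki = ga_tune()
```

The appeal mirrored here is the one the abstract claims: the search uses only simulated closed-loop cost, so no system matrices or Riccati solution are needed.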
Load Balancing in Stochastic Networks: Algorithms, Analysis, and Game Theory
2014-04-16
expansion. How do these results change with general service times or heterogeneous service times? Is it possible to provide an incentive that will move... opinions and/or findings contained in this report are those of the author(s) and should not be construed as an official Department of the Army position... geometric with parameter ?, and thus has an exponential decay rate. When d ? 2, the model is not exactly solvable, but asymptotic results show that as n
Assessment of New Load Schedules for the Machine Calibration of a Force Balance
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Gisler, R.; Kew, R.
2015-01-01
New load schedules for the machine calibration of a six-component force balance are currently being developed and evaluated at the NASA Ames Balance Calibration Laboratory. One of the proposed load schedules is discussed in the paper. It has a total of 2082 points that are distributed across 16 load series. Several criteria were applied to define the load schedule. It was decided, for example, to specify the calibration load set in force balance format, as this approach greatly simplifies the definition of the lower and upper bounds of the load schedule. In addition, all loads are assumed to be applied in a calibration machine by using the one-factor-at-a-time approach. First, all single-component loads are applied in six load series. Then, three two-component load series are applied. They consist of the load pairs (N1, N2), (S1, S2), and (RM, AF). Afterwards, four three-component load series are applied. They consist of the combinations (N1, N2, AF), (S1, S2, AF), (N1, N2, RM), and (S1, S2, RM). In the next step, one four-component load series is applied. It is the load combination (N1, N2, S1, S2). Finally, two five-component load series are applied. They are the load combinations (N1, N2, S1, S2, AF) and (N1, N2, S1, S2, RM). The maximum difference between loads of two subsequent data points of the load schedule is limited to 33% of capacity. This constraint helps avoid unwanted load "jumps" in the load schedule that can have a negative impact on the performance of a calibration machine. Only loadings of the single- and two-component load series are loaded to 100% of capacity. This approach was selected because it keeps the total number of calibration points to a reasonable limit while still allowing for the application of some of the more complex load combinations. Data from two of NASA's force balances are used to illustrate important characteristics of the proposed 2082-point calibration load schedule.
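The 33%-of-capacity step constraint described above is straightforward to check mechanically. A minimal sketch (the function name, tuple layout, and the component capacities are illustrative assumptions, not the laboratory's actual tooling):

```python
def check_step_limit(series, capacities, limit=0.33):
    """Verify that no load component jumps by more than `limit` (as a
    fraction of that component's capacity) between consecutive points of a
    load series. Returns the offending (point index, component index) pairs."""
    bad = []
    for i in range(1, len(series)):
        for j, (prev, cur) in enumerate(zip(series[i - 1], series[i])):
            if abs(cur - prev) > limit * capacities[j]:
                bad.append((i, j))
    return bad

# Hypothetical two-component (N1, N2) series loaded in 25%-of-capacity steps
caps = (1000.0, 1000.0)
ok_series = [(0, 0), (250, 0), (500, 0), (750, 0), (1000, 0)]
violations = check_step_limit(ok_series, caps)   # -> [] (25% steps are fine)
```

A schedule generator can run such a check on every load series before the schedule is sent to the calibration machine.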
NASA Technical Reports Server (NTRS)
Sohn, Andrew; Biswas, Rupak; Simon, Horst D.
1996-01-01
The computational requirements for an adaptive solution of unsteady problems change as the simulation progresses. This causes workload imbalance among processors on a parallel machine which, in turn, requires significant data movement at runtime. We present a new dynamic load-balancing framework, called JOVE, that balances the workload across all processors with a global view. Whenever the computational mesh is adapted, JOVE is activated to eliminate the load imbalance. JOVE has been implemented on an IBM SP2 distributed-memory machine in MPI for portability. Experimental results for two model meshes demonstrate that mesh adaption with load balancing gives more than a sixfold improvement over one without load balancing. We also show that JOVE gives a 24-fold speedup on 64 processors compared to sequential execution.
Oxidative Balance in Rats during Adaptation to Swimming Load.
Elikov, A V
2016-12-01
The main parameters of free radical oxidation and antioxidant defense in the blood plasma, erythrocytes, and homogenates of skeletal muscles, heart, liver, lungs, and kidneys were studied in adult outbred albino male rats with different degree of adaptation to moderate exposure to swimming. The rats were trained to swim regularly over 1 month. Changes in oxidative balance varied in organs and tissues and depended on the level of training. Malonic dialdehyde content in the erythrocytes after swimming increased by 13.8% in non-trained animals, but decreased by 19.2% in trained rats. Parameters of blood plasma reflect the general oxidative balance of organs and tissues.
Balancing the Load: How to Engage Counselors in School Improvement
ERIC Educational Resources Information Center
Mallory, Barbara J.; Jackson, Mary H.
2007-01-01
Principals cannot lead the school improvement process alone. They must enlist the help of others in the school community. School counselors, whose role is often viewed as peripheral and isolated from teaching and learning, can help principals, teachers, students, and parents balance the duties and responsibilities involved in continuous student…
Fast computing global structural balance in signed networks based on memetic algorithm
NASA Astrophysics Data System (ADS)
Sun, Yixiang; Du, Haifeng; Gong, Maoguo; Ma, Lijia; Wang, Shanfeng
2014-12-01
Structural balance is a large area of study in signed networks, and it is intrinsically a global property of the whole network. Computing global structural balance in signed networks, which has attracted some attention in recent years, measures how unbalanced a signed network is and is a nondeterministic polynomial-time hard problem. Many approaches have been developed to compute global balance; however, the results they obtain are partial and unsatisfactory. In this study, the computation of global structural balance is posed as an optimization problem and solved with a Memetic Algorithm. The proposed optimization algorithm, named Meme-SB, optimizes an evaluation function, the energy function, which measures the distance to exact balance. Our proposed algorithm combines a Genetic Algorithm with a greedy strategy as the local search procedure. Experiments on social and biological networks show the excellent effectiveness and efficiency of the proposed method.
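The energy function and greedy local search mentioned above can be sketched as a frustrated-edge count over a node clustering. This is a generic sketch of the idea, not the paper's Meme-SB implementation; the edge format, two-cluster encoding, and function names are illustrative assumptions.

```python
def energy(edges, labels):
    """Frustration count of a signed network under a clustering: a positive
    edge is frustrated if its endpoints lie in different clusters, a negative
    edge if they lie in the same cluster. Zero energy = exact balance."""
    return sum(1 for u, v, s in edges
               if (s > 0) != (labels[u] == labels[v]))

def greedy_improve(edges, labels, n_clusters=2, sweeps=5):
    """Greedy local search (the kind of refinement a memetic algorithm pairs
    with its genetic operators): move one node at a time to the cluster that
    minimizes the frustration count."""
    nodes = sorted({u for e in edges for u in e[:2]})
    for _ in range(sweeps):
        for u in nodes:
            labels[u] = min(range(n_clusters),
                            key=lambda c: energy(edges, {**labels, u: c}))
    return labels

# Balanced toy triangle (+, -, -): exactly balanced, so greedy search can
# reach zero energy with two clusters
edges = [(0, 1, +1), (1, 2, -1), (0, 2, -1)]
labels = greedy_improve(edges, {0: 0, 1: 0, 2: 0})
```

In a memetic scheme, this greedy pass would refine each candidate clustering produced by the genetic operators before selection.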
Dynamic load balancing data centric storage for wireless sensor networks.
Song, Seokil; Bok, Kyoungsoo; Kwak, Yun Sik; Goo, Bongeun; Kwak, Youngsik; Ko, Daesik
2010-01-01
In this paper, a new data centric storage that dynamically adapts to workload changes is proposed. The proposed data centric storage distributes the load of hot-spot areas to neighboring sensor nodes by using a multilevel grid technique. The proposed method is also able to use existing routing protocols such as GPSR (Greedy Perimeter Stateless Routing) with small changes. Through simulation, the proposed method is shown to extend the lifetime of sensor networks relative to one of the state-of-the-art data centric storage schemes. We implement the proposed method on an operating system for sensor networks and evaluate its performance using a simulation tool.
Using Multi-threading for the Automatic Load Balancing of 2D Adaptive Finite Element Meshes
NASA Technical Reports Server (NTRS)
Heber, Gerd; Biswas, Rupak; Thulasiraman, Parimala; Gao, Guang R.; Saini, Subhash (Technical Monitor)
1998-01-01
In this paper, we present a multi-threaded approach for the automatic load balancing of adaptive finite element (FE) meshes. The platform of our choice is the EARTH multi-threaded system, which offers sufficient capabilities to tackle this problem. We implement the adaption phase of FE applications on triangular meshes and exploit the EARTH token mechanism to automatically balance the resulting irregular and highly nonuniform workload. We discuss the results of our experiments on EARTH-SP2, an implementation of EARTH on the IBM SP2, with different load balancing strategies that are built into the runtime system.
Exercise, muscle, and the applied load-bone strength balance.
Giangregorio, L; El-Kotob, R
2017-01-01
A fracture occurs when the applied load is greater than the bone can withstand. Clinical practice guidelines for the management of osteoporosis include recommendations for exercise, one of the few therapies whose proposed anti-fracture mechanisms include effects on both bone strength and applied loads, where applied loads can come in the form of a fall, externally applied loads, body weight, or muscle forces. The aim of this review is to provide an overview of the clinical evidence pertaining to the potential efficacy of exercise for preventing fractures in older adults, including its direct effects on outcomes along the causal pathway to fractures (e.g., falls, posture, bone strength) and the indirect effects on muscle or the muscle-bone relationship. The evidence is examined as it pertains to application in clinical practice. Considerations for future research are discussed, such as the need for trials in individuals with low bone mass, or studies that evaluate whether changes in muscle mediate changes in bone. Future trials should also consider adequacy of calorie or protein intake, the confounding effect of exercise-induced weight loss, and the most appropriate therapeutic goal (e.g., strength, weight bearing, or hypertrophy) and outcome measures (e.g., fracture, disability, cost-effectiveness).
Comparison of hiking stick use on lateral stability while balancing with and without a load.
Jacobson, B H; Caldwell, B; Kulling, F A
1997-08-01
To compare the effect of hiking stick use on lateral stability while balancing with or without a load (15-kg internal-frame backpack) under conditions of no stick, 1 stick, and 2 sticks, 15 volunteers ages 19 to 23 years (M = 21.7 yr.) were tested six separate times on a stability platform. During randomly ordered, 1-min. trials, the length of time (sec.) the subject maintained balance (+/-10 degrees of horizontal) and the number of deviations beyond 10 degrees were recorded simultaneously. Backpack and hiking sticks were individually adjusted for each subject. A 2 x 3 repeated-factor analysis of variance indicated that subjects balanced significantly longer both with and without a load while using 2 hiking sticks than 1 or 0 sticks. Significantly fewer deviations beyond 10 degrees were found when subjects were without a load and using 1 or 2 sticks versus none, and no significant difference in the number of deviations was found between 1 and 2 hiking sticks. When subjects were equipped with a load, significantly improved balance was found only between 2 sticks and no sticks. Balance was significantly enhanced by using hiking sticks, and two sticks were more effective than one while carrying a load. An increase in maintenance of static balance may reduce the possibility of falling and injury while standing on loose alpine terrain.
Effects of partitioning and scheduling sparse matrix factorization on communication and load balance
NASA Technical Reports Server (NTRS)
Venugopal, Sesh; Naik, Vijay K.
1991-01-01
A block based, automatic partitioning and scheduling methodology is presented for sparse matrix factorization on distributed memory systems. Using experimental results, this technique is analyzed for communication and load imbalance overhead. To study the performance effects, these overheads were compared with those obtained from a straightforward 'wrap mapped' column assignment scheme. All experimental results were obtained using test sparse matrices from the Harwell-Boeing data set. The results show that there is a communication and load balance tradeoff. The block based method results in lower communication cost whereas the wrap mapped scheme gives better load balance.
Dynamic Load Balancing For Grid Partitioning on a SP-2 Multiprocessor: A Framework
NASA Technical Reports Server (NTRS)
Sohn, Andrew; Simon, Horst; Lasinski, T. A. (Technical Monitor)
1994-01-01
Computational requirements of full scale computational fluid dynamics change as computation progresses on a parallel machine. The change in computational intensity causes workload imbalance of processors, which in turn requires a large amount of data movement at runtime. If parallel CFD is to be successful on a parallel or massively parallel machine, balancing of the runtime load is indispensable. Here a framework is presented for dynamic load balancing for CFD applications, called Jove. One processor is designated as the decision maker, Jove, while the others are assigned to computational fluid dynamics. Processors running CFD send flags to Jove at a predetermined number of iterations to initiate load balancing. Jove starts working on load balancing while the other processors continue working with the current data and load distribution. Jove goes through several steps to decide whether the new data should be taken, including preliminary evaluation, partitioning, processor reassignment, cost evaluation, and decision. Jove running on a single IBM SP2 node has been completely implemented. Preliminary experimental results show that the Jove approach to dynamic load balancing can be effective for full scale grid partitioning on the target machine, the IBM SP2.
Dynamic Load Balancing for Finite Element Calculations on Parallel Computers. Chapter 1
NASA Technical Reports Server (NTRS)
Pramono, Eddy; Simon, Horst D.; Sohn, Andrew; Lasinski, T. A. (Technical Monitor)
1994-01-01
Computational requirements of full scale computational fluid dynamics change as computation progresses on a parallel machine. The change in computational intensity causes workload imbalance of processors, which in turn requires a large amount of data movement at runtime. If parallel CFD is to be successful on a parallel or massively parallel machine, balancing of the runtime load is indispensable. Here a framework is presented for dynamic load balancing for CFD applications, called Jove. One processor is designated as the decision maker, Jove, while the others are assigned to computational fluid dynamics. Processors running CFD send flags to Jove at a predetermined number of iterations to initiate load balancing. Jove starts working on load balancing while the other processors continue working with the current data and load distribution. Jove goes through several steps to decide whether the new data should be taken, including preliminary evaluation, partitioning, processor reassignment, cost evaluation, and decision. Jove running on a single SP2 node has been completely implemented. Preliminary experimental results show that the Jove approach to dynamic load balancing can be effective for full scale grid partitioning on the target machine SP2.
Hip joint contact loads in older adults during recovery from forward loss of balance by stepping.
Graham, David F; Modenese, Luca; Trewartha, Grant; Carty, Christopher P; Constantinou, Maria; Lloyd, David G; Barrett, Rod S
2016-09-06
Hip joint contact loads during activities of daily living are not generally considered high enough to cause acute bone or joint injury. However, there is some evidence that hip joint loads may be higher in stumble recovery from loss of balance. A common laboratory method used to evaluate balance recovery performance involves suddenly releasing participants from various static forward lean magnitudes (perturbation intensities). Prior studies have shown that when released from the same perturbation intensity, some older adults are able to recover with a single step, whereas others require multiple steps. The main purpose of this study was to use a musculoskeletal model to determine the effect of three balance perturbation intensities and the use of single versus multiple recovery steps on hip joint contact loads during recovery from forward loss of balance in community dwelling older adults (n=76). We also evaluated the association of peak hip contact loads with perturbation intensity, step length, and trunk flexion angle at foot contact at each participant's maximum recoverable lean angle (MRLA). Peak hip joint contact loads were computed using muscle force estimates obtained using Static Optimisation, increased as lean magnitude was increased, and were on average 32% higher for Single Steppers compared to Multiple Steppers. At the MRLA, peak hip contact loads ranged from 4.3 to 12.7 body weights, and multiple linear stepwise regression further revealed that initial lean angle, step length, and trunk angle at foot contact together explained 27% of the total variance in hip joint contact load. Overall findings indicated that older adults experience peak hip joint contact loads during maximal balance recovery by stepping that in some cases exceeded loads reported to cause mechanical failure of cadaver femurs. While step length and trunk flexion angle are strong predictors of step recovery performance, they are at best moderate predictors of peak hip joint loading.
Sivakumar, B.; Bhalaji, N.; Sivakumar, D.
2014-01-01
In mobile ad hoc networks connectivity is always an issue of concern. Due to dynamism in the behavior of mobile nodes, efficiency shall be achieved only with the assumption of good network infrastructure. Presence of critical links results in deterioration which should be detected in advance to retain the prevailing communication setup. This paper discusses a short survey on the specialized algorithms and protocols related to energy efficient load balancing for critical link detection in the recent literature. This paper also suggests a machine learning based hybrid power-aware approach for handling critical nodes via load balancing. PMID:24790546
NASA Astrophysics Data System (ADS)
Furuichi, M.; Nishiura, D.
2015-12-01
Fully Lagrangian methods such as Smoothed Particle Hydrodynamics (SPH) and the Discrete Element Method (DEM) have been widely used to solve continuum and particle motions in computational geodynamics. These mesh-free methods are suitable for problems with complex geometries and boundaries. In addition, their Lagrangian nature allows non-diffusive advection, useful for tracking history-dependent properties (e.g. rheology) of the material. These potential advantages over mesh-based methods offer effective numerical applications to geophysical flow and tectonic processes, for example, tsunamis with free surfaces and floating bodies, magma intrusion with fracture of rock, and shear-zone pattern generation in granular deformation. In order to investigate such geodynamical problems with particle-based methods, millions to billions of particles are required for realistic simulation. Parallel computing is therefore important for handling such a huge computational cost. An efficient parallel implementation of the SPH and DEM methods is, however, known to be difficult, especially for distributed-memory architectures. Lagrangian methods inherently exhibit a workload imbalance problem when parallelized with domains fixed in space, because particles move around and workloads change during the simulation. Dynamic load balance is therefore a key technique for performing large-scale SPH and DEM simulations. In this work, we present a parallel implementation technique for the SPH and DEM methods utilizing dynamic load balancing algorithms, toward high-resolution simulation over large domains on massively parallel supercomputer systems. Our method treats the imbalance in the execution time of each MPI process as the nonlinear term of the parallel domain decomposition and minimizes it with a Newton-like iteration method. In order to perform flexible domain decomposition in space, the slice-grid algorithm is used. Numerical tests show that our
Load balancing and closed chain multiple arm control
NASA Technical Reports Server (NTRS)
Kreutz, Kenneth; Lokshin, Anatole
1988-01-01
The authors give the general dynamical equations for several rigid link manipulators rigidly grasping a commonly held rigid object. It is shown that the number of arm-configuration degrees of freedom lost due to imposing the closed-loop kinematic constraints is the same as the number of degrees of freedom gained for controlling the internal forces of the closed-chain system. This number is equal to the dimension of the kernel of the Jacobian operator which transforms contact forces to the net forces acting on the held object, and it is shown that this kernel can be identified with the subspace of controllable internal forces of the closed-chain system. Control of these forces makes it possible to regulate the grasping forces imparted to the held object or to control the load taken by each arm. It is shown that the internal forces can be influenced without affecting the control of the configuration degrees of freedom. Control laws of the feedback linearization type are shown to be useful for controlling the location and attitude of a frame fixed with respect to the held object, while simultaneously controlling the internal forces of the closed-chain system. Force feedback can be used to linearize and control the system even when the held object has unknown mass properties. If saturation effects are ignored, an unconstrained quadratic optimization can be performed to distribute the load optimally among the joint actuators.
Accelerating Multiple Compound Comparison Using LINGO-Based Load-Balancing Strategies on Multi-GPUs
Lin, Chun-Yuan; Wang, Chung-Hung; Hung, Che-Lun; Lin, Yu-Shiang
2015-01-01
Compound comparison is an important task in computational chemistry. From the comparison results, potential inhibitors can be found and then used in pharmacological experiments. The time complexity of a pairwise compound comparison is O(n^2), where n is the maximal length of the compounds. In general, the length of compounds is tens to hundreds of characters, and the computation time is small. However, more and more compounds have been synthesized and extracted, now numbering more than tens of millions. Therefore, comparing against a large set of compounds (the multiple compound comparison problem, abbreviated MCC) remains time-consuming. The intrinsic time complexity of the MCC problem is O(k^2 n^2) for k compounds of maximal length n. In this paper, we propose a GPU-based algorithm for the MCC problem, called CUDA-MCC, on single and multiple GPUs. Four LINGO-based load-balancing strategies are considered in CUDA-MCC in order to accelerate computation among thread blocks on GPUs. CUDA-MCC was implemented in C+OpenMP+CUDA. In our experiments, CUDA-MCC achieved 45 times and 391 times speedups over its CPU version on a single NVIDIA Tesla K20m GPU card and a dual-NVIDIA Tesla K20m GPU card, respectively. PMID:26491652
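The LINGO similarity underlying these strategies can be sketched in a few lines. This is a CPU-side sketch of the all-pairs loop that CUDA-MCC distributes across GPU thread blocks, not the paper's CUDA code; q = 4 character substrings and the multiset Tanimoto follow the LINGO literature, while the SMILES strings below are hypothetical.

```python
from collections import Counter

def lingos(smiles, q=4):
    """Multiset of overlapping q-character substrings (LINGOs) of a SMILES
    string; q = 4 is the length commonly used in the LINGO literature."""
    return Counter(smiles[i:i + q] for i in range(len(smiles) - q + 1))

def lingo_tanimoto(a, b):
    """Multiset Tanimoto similarity of two LINGO profiles: |A n B| / |A u B|."""
    la, lb = lingos(a), lingos(b)
    inter = sum((la & lb).values())
    union = sum((la | lb).values())
    return inter / union if union else 1.0

def mcc(compounds):
    """All-pairs comparison: the O(k^2 n^2)-style loop that a GPU version
    would distribute across thread blocks."""
    k = len(compounds)
    return [[lingo_tanimoto(compounds[i], compounds[j]) for j in range(k)]
            for i in range(k)]

# Hypothetical SMILES strings, not compounds from the paper
sims = mcc(["CCCCO", "CCCCN", "c1ccccc1O"])
```

Because each matrix entry is independent, the load-balancing question reduces to assigning tiles of this symmetric matrix evenly across workers, which is what the four strategies in the paper address.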
Load-balancing techniques for a parallel electromagnetic particle-in-cell code
PLIMPTON,STEVEN J.; SEIDEL,DAVID B.; PASIK,MICHAEL F.; COATS,REBECCA S.
2000-01-01
QUICKSILVER is a 3-d electromagnetic particle-in-cell simulation code developed and used at Sandia to model relativistic charged particle transport. It models the time-response of electromagnetic fields and low-density-plasmas in a self-consistent manner: the fields push the plasma particles and the plasma current modifies the fields. Through an LDRD project a new parallel version of QUICKSILVER was created to enable large-scale plasma simulations to be run on massively-parallel distributed-memory supercomputers with thousands of processors, such as the Intel Tflops and DEC CPlant machines at Sandia. The new parallel code implements nearly all the features of the original serial QUICKSILVER and can be run on any platform which supports the message-passing interface (MPI) standard as well as on single-processor workstations. This report describes basic strategies useful for parallelizing and load-balancing particle-in-cell codes, outlines the parallel algorithms used in this implementation, and provides a summary of the modifications made to QUICKSILVER. It also highlights a series of benchmark simulations which have been run with the new code that illustrate its performance and parallel efficiency. These calculations have up to a billion grid cells and particles and were run on thousands of processors. This report also serves as a user manual for people wishing to run parallel QUICKSILVER.
NASA Technical Reports Server (NTRS)
Krasteva, Denitza T.
1998-01-01
Multidisciplinary design optimization (MDO) for large-scale engineering problems poses many challenges (e.g., the design of an efficient concurrent paradigm for global optimization based on disciplinary analyses, expensive computations over vast data sets, etc.) This work focuses on the application of distributed schemes for massively parallel architectures to MDO problems, as a tool for reducing computation time and solving larger problems. The specific problem considered here is configuration optimization of a high speed civil transport (HSCT), and the efficient parallelization of the embedded paradigm for reasonable design space identification. Two distributed dynamic load balancing techniques (random polling and global round robin with message combining) and two necessary termination detection schemes (global task count and token passing) were implemented and evaluated in terms of effectiveness and scalability to large problem sizes and a thousand processors. The effect of certain parameters on execution time was also inspected. Empirical results demonstrated stable performance and effectiveness for all schemes, and the parametric study showed that the selected algorithmic parameters have a negligible effect on performance.
STAR load balancing and tiered-storage infrastructure strategy for ultimate db access
NASA Astrophysics Data System (ADS)
Arkhipkin, D.; Lauret, J.; Betts, W.; Didenko, L.; Van Buren, G.
2011-12-01
In recent years, the STAR experiment's database demands have grown in accord not only with simple facility growth, but also with a growing physics program. In addition to the accumulated metadata from a decade of operations, refinements to detector calibrations force user analysis to access database information post data production. Users may access any year's data at any point in time, causing a near random access of the metadata queried, contrary to time-organized production cycles. Moreover, complex online event selection algorithms created a query scarcity ("sparsity") scenario for offline production further impacting performance. Fundamental changes in our hardware approach were hence necessary to improve query speed. Initial strategic improvements were focused on developing fault-tolerant, load-balanced access to a multi-slave infrastructure. Beyond that, we explored, tested and quantified the benefits of introducing a Tiered storage architecture composed of conventional drives, solid-state disks, and memory-resident databases as well as leveraging the use of smaller database services fitting in memory. The results of our extensive testing in real life usage are presented.
Ebtehaj, Isa; Bonakdari, Hossein
2014-01-01
The existence of sediments in wastewater greatly affects the performance of sewer and wastewater transmission systems. Increased sedimentation in wastewater collection systems causes problems such as reduced transmission capacity and early combined sewer overflow. This article reviews the performance of the genetic algorithm (GA) and the imperialist competitive algorithm (ICA) in minimizing the target function (the mean square error of observed and predicted Froude number). To study the impact of bed load transport parameters, six different models based on four non-dimensional groups are presented. Moreover, the roulette wheel selection method is used to select the parents. For the selected model, the ICA (root mean square error (RMSE) = 0.007, mean absolute percentage error (MAPE) = 3.5%) shows better results than the GA (RMSE = 0.007, MAPE = 5.6%). For all six models, the ICA returns better results than the GA. The results of these two algorithms were also compared with a multi-layer perceptron and existing equations.
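The two error metrics quoted above are standard; for reference, a minimal sketch (the observed/predicted values are hypothetical, not the study's data):

```python
import math

def rmse(obs, pred):
    """Root mean square error between observed and predicted values."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mape(obs, pred):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((o - p) / o) for o, p in zip(obs, pred)) / len(obs)

# Hypothetical observed vs. predicted Froude numbers
obs, pred = [1.0, 2.0, 4.0], [1.1, 1.9, 4.2]
```

Note that MAPE weights errors relative to each observation, which is why two models with the same RMSE (0.007 here) can still differ on MAPE, as in the GA/ICA comparison above.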
NASA Technical Reports Server (NTRS)
Watson, Brian; Kamat, M. P.
1990-01-01
Element-by-element preconditioned conjugate gradient (EBE-PCG) algorithms have been advocated for use in parallel/vector processing environments as being superior to the conventional LDL^T decomposition algorithm for single load cases. Although there may be some advantages in using such algorithms for a single load case, when it comes to situations involving multiple load cases, the LDL^T decomposition algorithm would appear to be decidedly more cost-effective. The authors have outlined an EBE-PCG algorithm suitable for multiple load cases and compared its effectiveness to the highly efficient LDL^T decomposition scheme. The proposed algorithm offers almost no advantages over the LDL^T algorithm for the linear problems investigated on the Alliant FX/8. However, there may be some merit in the algorithm for solving nonlinear problems with load incrementation, but that remains to be investigated.
Scan Directed Load Balancing for Highly-Parallel Mesh-Connected Computers
1991-07-01
Biagioni, Edoardo S.; Prins, Jan F.
Department of Computer Science, University of North Carolina, Chapel Hill, NC 27599-3175, USA. Only OCR fragments of this DTIC record (AD-A242 045) survive; the work is also described in E. S. Biagioni, "Scan Directed Load Balancing," PhD thesis, University of North Carolina, Chapel Hill.
A Framework for Load Balancing of Tensor Contraction Expressions via Dynamic Task Partitioning
Lai, Pai-Wei; Stock, Kevin; Rajbhandari, Samyam; Krishnamoorthy, Sriram; Sadayappan, Ponnuswamy
2013-11-17
In this paper, we introduce Dynamic Load-balanced Tensor Contractions (DLTC), a domain-specific library for efficient task-parallel execution of tensor contraction expressions, a class of computations encountered in quantum chemistry and physics. Our framework decomposes each contraction into smaller units of tasks, represented by an abstraction referred to as iterators. We exploit an extra level of parallelism by executing tasks from independent contractions concurrently through a dynamic load-balancing runtime. We demonstrate the improved performance, scalability, and flexibility of our approach for the computation of tensor contraction expressions on parallel computers, using examples from coupled cluster methods.
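The task-decomposition idea in this abstract can be illustrated with a minimal sketch (not the DLTC API; all names here are illustrative): a dense contraction C = A·B is split into independent tile tasks that worker threads pull from a shared queue, so idle workers dynamically pick up whatever tiles remain.

```python
from queue import Queue, Empty
from threading import Thread

def contract_tiled(A, B, tile=2, workers=4):
    """Toy dynamic load balancing: decompose C = A @ B into independent
    tile tasks (loosely analogous to DLTC's "iterators") and execute
    them via a shared work queue."""
    n, k = len(A), len(A[0])
    m = len(B[0])
    C = [[0] * m for _ in range(n)]
    q = Queue()
    # each task computes one (row-block, col-block) tile of C
    for i0 in range(0, n, tile):
        for j0 in range(0, m, tile):
            q.put((i0, j0))

    def worker():
        while True:
            try:
                i0, j0 = q.get_nowait()   # dynamically claim the next tile
            except Empty:
                return
            for i in range(i0, min(i0 + tile, n)):
                for j in range(j0, min(j0 + tile, m)):
                    C[i][j] = sum(A[i][p] * B[p][j] for p in range(k))

    threads = [Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return C
```

Because tiles are claimed one at a time rather than statically assigned, slow or unlucky workers simply take fewer tiles, which is the essence of the dynamic balancing described above.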
Effect of armor and carrying load on body balance and leg muscle function.
Park, Huiju; Branson, Donna; Kim, Seonyoung; Warren, Aric; Jacobson, Bert; Petrova, Adriana; Peksoz, Semra; Kamenidis, Panagiotis
2014-01-01
This study investigated the impact of the weight and weight distribution of body armor and load carriage on static body balance and leg muscle function. A series of human performance tests were conducted with seven male, healthy, right-handed military students in seven garment conditions with varying weight and weight distributions. Static body balance was assessed by analyzing the trajectory of the center of plantar pressure and the symmetry of weight bearing in the feet. Leg muscle function was assessed by analyzing the peak electromyography amplitude of four selected leg muscles during walking. Results of this study showed that uneven weight distribution of the garment and load beyond an additional 9 kg impaired static body balance, as evidenced by increased sway of the center of plantar pressure and asymmetry of weight bearing in the feet. Added weight on the non-dominant side of the body created a greater impediment to static balance. Increased garment weight also elevated peak EMG amplitude in the rectus femoris to maintain body balance and in the medial gastrocnemius to increase propulsive force. Negative impacts on balance and leg muscle function with increased carrying loads, particularly with an uneven weight distribution, should be stressed to soldiers, designers, and sports enthusiasts.
Prediction Interval Development for Wind-Tunnel Balance Check-Loading
NASA Technical Reports Server (NTRS)
Landman, Drew; Toro, Kenneth G.; Commo, Sean A.; Lynn, Keith C.
2014-01-01
Results from the Facility Analysis Verification and Operational Reliability project revealed a critical gap in capability in ground-based aeronautics research applications. Without a standardized process for check-loading the wind-tunnel balance or the model system, the quality of the aerodynamic force data collected varied significantly between facilities. A prediction interval is required in order to confirm a check-loading. The prediction interval provides an expected upper and lower bound on the balance load prediction at a given confidence level. A method has been developed which accounts for sources of variability due to calibration and check-load application. The method of calculating the prediction interval and a case study demonstrating its use are provided. Validation of the method is demonstrated for the case study based on the probability of capture of confirmation points.
NASA Technical Reports Server (NTRS)
Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.
1989-01-01
Computer vision systems employ a sequence of vision algorithms in which the output of an algorithm is the input of the next algorithm in the sequence. Algorithms that constitute such systems exhibit vastly different computational characteristics, and therefore, require different data decomposition techniques and efficient load balancing techniques for parallel implementation. However, since the input data for a task is produced as the output data of the previous task, this information can be exploited to perform knowledge based data decomposition and load balancing. Presented here are algorithms for a motion estimation system. The motion estimation is based on the point correspondence between the involved images which are a sequence of stereo image pairs. Researchers propose algorithms to obtain point correspondences by matching feature points among stereo image pairs at any two consecutive time instants. Furthermore, the proposed algorithms employ non-iterative procedures, which results in saving considerable amounts of computation time. The system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from consecutive time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters.
The Balanced Cross-Layer Design Routing Algorithm in Wireless Sensor Networks Using Fuzzy Logic
Li, Ning; Martínez, José-Fernán; Díaz, Vicente Hernández
2015-01-01
Recently, the cross-layer design for the wireless sensor network communication protocol has become more and more important and popular. Considering the disadvantages of the traditional cross-layer routing algorithms, in this paper we propose a new fuzzy logic-based routing algorithm, named the Balanced Cross-layer Fuzzy Logic (BCFL) routing algorithm. In BCFL, we use the cross-layer parameters' dispersion as the fuzzy logic inference system inputs. Moreover, we give each cross-layer parameter a dynamic weight according to the value of its dispersion. To obtain a balanced solution, a parameter whose dispersion is large receives a small weight, and vice versa. In order to compare it with the traditional cross-layer routing algorithms, BCFL is evaluated through extensive simulations. The simulation results show that the new routing algorithm can handle the multiple constraints without increasing the complexity of the algorithm and can achieve the most balanced performance in selecting the next-hop relay node. Moreover, the Balanced Cross-layer Fuzzy Logic routing algorithm can adapt effectively to dynamic changes in network conditions and topology. PMID:26266412
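The dispersion-to-weight idea (large dispersion, small weight) can be sketched numerically. This is only a toy stand-in for BCFL's fuzzy inference: the dispersion measure (variance across candidate relay nodes) and the inverse-normalized weighting below are illustrative assumptions, not the paper's rules.

```python
def dispersion_weights(params):
    """Assign each cross-layer parameter a dynamic weight inversely
    related to its dispersion (here: the variance of its values across
    candidate relay nodes), normalized so the weights sum to 1.
    Larger dispersion -> smaller weight, as in the BCFL idea."""
    def variance(xs):
        mu = sum(xs) / len(xs)
        return sum((x - mu) ** 2 for x in xs) / len(xs)

    eps = 1e-9  # avoid division by zero for constant parameters
    inv = {name: 1.0 / (variance(vals) + eps) for name, vals in params.items()}
    total = sum(inv.values())
    return {name: w / total for name, w in inv.items()}
```

A parameter that barely varies between candidates (low dispersion) thus dominates the weighted decision, while a widely scattered one contributes little.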
Design and implementation of web server soft load balancing in small and medium-sized enterprise
NASA Astrophysics Data System (ADS)
Yan, Liu
2011-12-01
With the expansion of business scale, small and medium-sized enterprises have begun to use information platforms to improve their management and competitiveness, and the server has become the core factor constraining enterprise informatization. This paper puts forward a suitable web server soft load-balancing design scheme for small and medium-sized enterprises and proves it effective through experiment.
Mesoscopic Detailed Balance Algorithms for Quantum and Classical Turbulence
2013-02-01
Only OCR fragments of this DTIC abstract survive. The recoverable portions describe quantum unitary algorithms for the one-dimensional magnetohydrodynamics-Burgers, Korteweg-de Vries (KdV), and nonlinear Schrodinger equations, with generalizations to three dimensions [G. Vahala, J. Yepez and L. Vahala, Phys. Lett. A310, 187-196 (2003)], in particular recovering the 1D nonlinear Schrodinger equation for bright solitons.
Madenjian, Charles P.; David, Solomon R.; Pothoven, Steven A.
2012-01-01
We evaluated the performance of the Wisconsin bioenergetics model for lake trout Salvelinus namaycush that were fed ad libitum in laboratory tanks under regimes of low activity and high activity. In addition, we compared model performance under two different model algorithms: (1) balancing the lake trout energy budget on day t based on lake trout energy density on day t and (2) balancing the lake trout energy budget on day t based on lake trout energy density on day t + 1. Results indicated that the model significantly underestimated consumption for both inactive and active lake trout when algorithm 1 was used and that the degree of underestimation was similar for the two activity levels. In contrast, model performance substantially improved when using algorithm 2, as no detectable bias was found in model predictions of consumption for inactive fish and only a slight degree of overestimation was detected for active fish. The energy budget was accurately balanced by using algorithm 2 but not by using algorithm 1. Based on the results of this study, we recommend the use of algorithm 2 to estimate food consumption by fish in the field. Our study results highlight the importance of accurately accounting for changes in fish energy density when balancing the energy budget; furthermore, these results have implications for the science of evaluating fish bioenergetics model performance and for more accurate estimation of food consumption by fish in the field when fish energy density undergoes relatively rapid changes.
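The difference between the two budget-balancing algorithms compared above can be shown with a toy sketch. This is not the Wisconsin bioenergetics model (which has many more terms); the per-day mass, energy-density, and respiration series below are hypothetical, and only the choice of which day's energy density converts growth to energy follows the abstract.

```python
def daily_consumption(weights, energy_density, respiration, use_next_day_ed=True):
    """Toy energy-budget sketch: estimate consumption on day t by
    balancing the budget, converting the day's growth in mass to energy
    using either day t's energy density (algorithm 1,
    use_next_day_ed=False) or day t+1's (algorithm 2, the default)."""
    out = []
    for t in range(len(weights) - 1):
        ed = energy_density[t + 1] if use_next_day_ed else energy_density[t]
        growth_energy = (weights[t + 1] - weights[t]) * ed
        # consumption must cover both growth and the day's metabolic cost
        out.append(growth_energy + respiration[t])
    return out
```

When energy density is rising, algorithm 1 undervalues the energetic cost of growth, which is consistent with the underestimated consumption the study reports.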
Gammon - A load balancing strategy for local computer systems with multiaccess networks
NASA Technical Reports Server (NTRS)
Baumgartner, Katherine M.; Wah, Benjamin W.
1989-01-01
Consideration is given to an efficient load-balancing strategy, Gammon (global allocation from maximum to minimum in constant time), for distributed computing systems connected by multiaccess local area networks. The broadcast capability of these networks is utilized to implement an identification procedure at the applications level for the maximally and the minimally loaded processors. The search technique has an average overhead which is independent of the number of participating stations. An implementation of Gammon on a network of Sun workstations is described. Its performance is found to be better than that of other known methods.
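The core max-to-min transfer in Gammon can be sketched as follows. This toy version omits what makes Gammon interesting, namely the broadcast-based identification of the extremal stations with overhead independent of the number of stations; the unit-work migration and threshold below are illustrative assumptions.

```python
def gammon_step(loads):
    """One Gammon-style balancing step (sketch): identify the maximally
    and minimally loaded stations (done here by direct inspection; the
    real system uses a constant-overhead broadcast search) and migrate
    one unit of work from the former to the latter."""
    hi = max(range(len(loads)), key=lambda i: loads[i])
    lo = min(range(len(loads)), key=lambda i: loads[i])
    if loads[hi] - loads[lo] > 1:   # only migrate if it reduces imbalance
        loads[hi] -= 1
        loads[lo] += 1
    return loads
```

Repeating the step drives the load vector toward uniformity while never moving work between already-balanced stations.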
Method of up-front load balancing for local memory parallel processors
NASA Technical Reports Server (NTRS)
Baffes, Paul Thomas (Inventor)
1990-01-01
In a parallel processing computer system with multiple processing units and shared memory, a method is disclosed for uniformly balancing the aggregate computational load in, and utilizing minimal memory by, a network having identical computations to be executed at each connection therein. Read-only and read-write memory are subdivided into a plurality of process sets, which function like artificial processing units. Said plurality of process sets is iteratively merged and reduced to the number of processing units without exceeding the balance load. Said merger is based upon the value of a partition threshold, which is a measure of the memory utilization. The turnaround time and memory savings of the instant method are functions of the number of processing units available and the number of partitions into which the memory is subdivided. Typical results of the preferred embodiment yielded memory savings of from sixty to seventy five percent.
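The up-front merge of process sets down to the number of physical processors can be sketched with a greedy stand-in. The patent's threshold-driven merge is not reproduced here; this is an LPT-style approximation using hypothetical per-set costs.

```python
def merge_process_sets(set_costs, num_processors):
    """Sketch of the up-front idea: start from many small "process
    sets" and iteratively merge them down to num_processors, always
    merging the next-largest set into the least-loaded processor so the
    aggregate computational load stays balanced. Returns the final
    per-processor loads."""
    bins = [[0, []] for _ in range(num_processors)]   # [load, member costs]
    for cost in sorted(set_costs, reverse=True):      # largest sets first
        bins.sort(key=lambda b: b[0])                 # least-loaded processor first
        bins[0][0] += cost
        bins[0][1].append(cost)
    return [b[0] for b in bins]
```

Because all merging happens before execution ("up front"), no runtime migration is needed; the trade-off is that the balance is only as good as the cost estimates.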
NASA Astrophysics Data System (ADS)
Cleveland, Mathew A.; Palmer, Todd S.
2013-09-01
Thermal heating from radiative heat transfer can have a significant effect on combustion systems. A variety of models have been developed to represent the strongly varying opacities found in combustion gases (Goutiere et al., 2000). This work evaluates the computational efficiency and load balance issues associated with two opacity models implemented in a 3D parallel Monte Carlo solver: the spectral-line-based weighted sum of gray gases (SLW) (Denison and Webb, 1993) and the spectral line-by-line (LBL) (Wang and Modest, 2007) opacity models. The parallel performance of the opacity models is evaluated using the Su and Olson (1999) frequency-dependent semi-analytic benchmark problem. Weak scaling, strong scaling, and history scaling studies were performed and comparisons were made for each opacity model. Comparisons of load balance sensitivities to these types of scaling were also evaluated. It was found that the SLW model has some attributes that might be valuable in a select set of parallel problems.
NASA Astrophysics Data System (ADS)
Hattori, Toshihiro; Takamatsu, Rieko
We calculated nitrogen balances at the farm gate and soil surface on large-scale stock farms and discussed methods for reducing environmental nitrogen loads. Four different types of public stock farms (organic beef, calf supply and dairy cows) were surveyed in Aomori Prefecture. (1) Farm gate and soil surface nitrogen inflows were both larger than the respective outflows on all types of farms. The farm gate nitrogen balance for beef farms was worse than that for dairy farms. (2) Soil surface nitrogen outflows and soil nitrogen retention were in proportion to soil surface nitrogen inflows. (3) Reductions in soil surface nitrogen retention were influenced by soil surface nitrogen inflows. (4) In order to reduce farm gate nitrogen retention, inflows of formula feed and chemical fertilizer need to be reduced. (5) In order to reduce soil surface nitrogen retention, inflows of fertilizer need to be reduced and the nitrogen balance needs to be controlled.
PLUM: Parallel Load Balancing for Unstructured Adaptive Meshes. Degree awarded by Colorado Univ.
NASA Technical Reports Server (NTRS)
Oliker, Leonid
1998-01-01
Dynamic mesh adaption on unstructured grids is a powerful tool for computing large-scale problems that require grid modifications to efficiently resolve solution features. By locally refining and coarsening the mesh to capture physical phenomena of interest, such procedures make standard computational methods more cost effective. Unfortunately, an efficient parallel implementation of these adaptive methods is rather difficult to achieve, primarily due to the load imbalance created by the dynamically-changing nonuniform grid. This requires significant communication at runtime, leading to idle processors and adversely affecting the total execution time. Nonetheless, it is generally thought that unstructured adaptive-grid techniques will constitute a significant fraction of future high-performance supercomputing. Various dynamic load balancing methods have been reported to date; however, most of them either lack a global view of loads across processors or do not apply their techniques to realistic large-scale applications.
Development of a dual strain gage balance system for measuring light loads
NASA Technical Reports Server (NTRS)
Roberts, Paul W.
1989-01-01
A strain-gage-balance (SGB) force-measurement system is described which was designed to meet light-load requirements for an airfoil model tested in the NASA Langley low-turbulence pressure tunnel (LTPT). This system was developed to obtain direct force data needed to verify calculated pressure/force correlations used in previous aerodynamic tests. The three-component force-measurement system was designed so that the SGBs would simultaneously support and measure the following loads: weight of the airfoil (approximately 3.0 lbf); 8.0 lbf of lift; 0.5 lbf of drag; and 16.0 in. lbf of pitching moment. In addition to these design loads, the system was required to withstand 100-percent overload on all three components. The system comprises an airfoil, two SGBs, a thermal flexure, and a mounting plate. The installation of the system in the LTPT is also discussed.
Chow, Daniel H K; Leung, Dawn S S; Holmes, Andrew D
2007-09-01
The balance function of children is known to be affected by carriage of a school backpack. Children with adolescent idiopathic scoliosis (AIS) tend to show poorer balance performance, and are typically treated by bracing, which further affects balance. The objective of this study is to examine the combined effects of school backpack carriage and bracing on girls with AIS. A force platform was used to record center of pressure (COP) motion in 20 schoolgirls undergoing thoraco-lumbar-sacral orthosis (TLSO brace) treatment for AIS. COP data were recorded with and without brace while carrying a backpack loaded at 0, 7.5, 10, 12.5 and 15% of the participant's bodyweight (BW). Ten participants stood on a solid base and ten stood on a foam base, while all participants kept their eyes closed throughout. Sway parameters were analyzed by repeated measures ANOVA. No effect of bracing was found for the participants standing on the solid base, but wearing the brace significantly increased the sway area, displacement and medio-lateral amplitude in the participants standing on the foam base. The medio-lateral sway amplitude of participants standing on the solid base significantly increased with backpack load, whereas significant increases in antero-posterior sway amplitude, sway path length, sway area per second and short term diffusion coefficient were found in participants standing on the foam base. The poorer balance performance exhibited by participants with AIS when visual and somatosensory input is challenged appears to be exacerbated by wearing a TLSO brace, but no interactive effect between bracing and backpack loading was found.
NASA Astrophysics Data System (ADS)
Wang, J.; Samms, T.; Meier, C.; Simmons, L.; Miller, D.; Bathke, D.
2005-12-01
Spatial evapotranspiration (ET) is usually estimated by the Surface Energy Balance Algorithm for Land. The average accuracy of the algorithm is 85% on a daily basis and 95% on a seasonal basis. However, the accuracy of the algorithm varies from 67% to 95% for instantaneous ET estimates and, as reported in 18 studies, 70% to 98% for 1- to 10-day ET estimates. There is a need to understand the sensitivity of the ET calculation with respect to the algorithm's variables and equations. With an increased understanding, information can be developed to improve the algorithm and to better identify the key variables and equations. A Modified Surface Energy Balance Algorithm for Land (MSEBAL) was developed and validated with data from a pecan orchard and an alfalfa field. MSEBAL uses ground reflectance and temperature data from ASTER sensors along with humidity, wind speed, and solar radiation data from a local weather station, and outputs hourly and daily ET at 90 m by 90 m resolution. A sensitivity analysis of the ET calculation was conducted for MSEBAL. To observe the sensitivity of the calculation to a particular variable, the value of that variable was changed while holding the other variables fixed. The key variables and equations to which the ET calculation is most sensitive were determined in this study. http://weather.nmsu.edu/pecans/SEBALFolder/San%20Francisco%20AGU%20meeting/ASensitivityAnalysisonMSE
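The one-at-a-time procedure described above can be sketched generically. The model and variable names below are hypothetical placeholders; MSEBAL itself involves many more variables and equations.

```python
def one_at_a_time_sensitivity(model, baseline, delta=0.1):
    """One-at-a-time sensitivity sketch: perturb each input variable by
    a fraction delta while holding the others at their baseline values,
    and report the relative change in model output per relative change
    in input (an elasticity-like measure)."""
    base_out = model(**baseline)
    sens = {}
    for name, value in baseline.items():
        bumped = dict(baseline)
        bumped[name] = value * (1 + delta)   # perturb only this variable
        sens[name] = (model(**bumped) - base_out) / (base_out * delta)
    return sens
```

Ranking the resulting sensitivities identifies which variables most deserve accurate inputs or refined equations, which is the stated goal of the study.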
Development of a two wheeled self balancing robot with speech recognition and navigation algorithm
NASA Astrophysics Data System (ADS)
Rahman, Md. Muhaimin; Ashik-E-Rasul; Haq, Nowab Md. Aminul; Hassan, Mehedi; Hasib, Irfan Mohammad Al; Hassan, K. M. Rafidh
2016-07-01
This paper discusses the modeling, construction, and development of the navigation algorithm of a two-wheeled self-balancing mobile robot in an enclosure. We discuss the design of the two main controller algorithms, namely PID algorithms, on the robot model. Simulation is performed in the SIMULINK environment. The controller is developed primarily for self-balancing of the robot and also for its positioning. For navigation in an enclosure, a template matching algorithm is proposed for precise measurement of the robot's position. The navigation system needs to be calibrated before the navigation process starts. Almost all earlier template matching algorithms in the open literature can only trace the robot, but the proposed algorithm can also locate the positions of other objects in the enclosure, such as furniture and tables. This enables the robot to know the exact location of every stationary object in the enclosure. Moreover, some additional features, such as speech recognition and object detection, are added. For object detection, the single-board computer Raspberry Pi is used. The system is programmed to analyze images captured via the camera, which are then processed through background subtraction, followed by active noise reduction.
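A discrete PID loop of the kind used for such self-balancing control can be sketched as follows. The gains, time step, and error definition (tilt-angle deviation from upright driving the wheel motors) are illustrative assumptions, not the paper's tuned values.

```python
def make_pid(kp, ki, kd, dt):
    """Discrete PID controller sketch for a self-balancing loop: the
    error is the deviation of the body's tilt angle from upright, and
    the returned output would drive the wheel motors. Returns a step
    function that keeps integral/derivative state between calls."""
    state = {"integral": 0.0, "prev_err": 0.0}

    def step(error):
        state["integral"] += error * dt                    # accumulate error
        derivative = (error - state["prev_err"]) / dt      # rate of change
        state["prev_err"] = error
        return kp * error + ki * state["integral"] + kd * derivative

    return step
```

In practice the proportional term reacts to the current tilt, the integral removes steady-state lean, and the derivative damps oscillation; the same structure can serve the positioning loop with a position error instead.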
A Beacon Transmission Power Control Algorithm Based on Wireless Channel Load Forecasting in VANETs
Mo, Yuanfu; Yu, Dexin; Song, Jun; Zheng, Kun; Guo, Yajuan
2015-01-01
In a vehicular ad hoc network (VANET), the periodic exchange of single-hop status information broadcasts (beacon frames) produces channel loading, which causes channel congestion and induces information conflict problems. To guarantee fairness in beacon transmissions from each node and maximum network connectivity, adjustment of the beacon transmission power is an effective method for reducing and preventing channel congestion. In this study, the primary factors that influence wireless channel loading are selected to construct the KF-BCLF, a channel load forecasting algorithm based on a recursive Kalman filter and a multiple regression equation. By pre-adjusting the transmission power based on the forecasted channel load, the channel load is kept within a predefined range and channel congestion is thereby prevented. Based on this method, the CLF-BTPC, a transmission power control algorithm, is proposed. To verify the KF-BCLF algorithm, a traffic survey method involving the collection of floating-car data along a major traffic road in Changchun City was employed. By comparing the forecasts with the measured channel loads, the proposed KF-BCLF algorithm was proven to be effective. In addition, the CLF-BTPC algorithm was verified by simulating a section of eight-lane highway and a signal-controlled urban intersection. The results of the two verification processes indicate that this distributed CLF-BTPC algorithm can effectively control channel load, prevent channel congestion, and enhance the stability and robustness of wireless beacon transmission in a vehicular network. PMID:26571042
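The forecast-then-adjust loop can be sketched with a scalar Kalman filter. This omits the paper's multiple regression stage, and the noise variances, target channel-busy ratio, and power step below are illustrative assumptions.

```python
def kalman_forecast(measurements, q=1e-3, r=0.1):
    """Minimal scalar Kalman filter sketch of the forecasting idea:
    treat channel load as a slowly varying scalar state (random-walk
    model), update it recursively from noisy per-interval load
    measurements, and use the final filtered estimate as the
    one-step-ahead forecast."""
    x, p = measurements[0], 1.0       # state estimate and its variance
    for z in measurements[1:]:
        p += q                        # predict step (process noise)
        k = p / (p + r)               # Kalman gain
        x += k * (z - x)              # correct with the new measurement
        p *= (1 - k)
    return x

def adjust_power(power, forecast_load, target=0.6, step=1.0):
    """Pre-adjust beacon power: if the forecast load exceeds the target
    channel-busy ratio, lower transmission power; otherwise raise it."""
    return power - step if forecast_load > target else power + step
```

Acting on the forecast rather than the measured load is what lets the controller pre-empt congestion instead of reacting to it.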
Application of self-balanced loading test to socketed pile in weak rock
NASA Astrophysics Data System (ADS)
Cheng, Ye; Gong, Weiming; Dai, Guoliang; Wu, JingKun
2008-11-01
The self-balanced loading test method differs from traditional pile-test methods. The key piece of test equipment is a load cell, specially designed to exert load and placed within the pile body. During the test, the displacements of the top and bottom plates of the cell are recorded at every load level, so Q-S curves can be obtained and the bearing capacity of the pile can be judged from the test results. The test equipment is simple and inexpensive, and under some special conditions the method offers great advantages. In Guangxi Province, tertiary mudstone, a typical weak rock, is widely distributed and is usually chosen as the bearing stratum for pile foundations. To make full use of its high bearing capacity, piles are generally designed as belled piles. The foundations of two high-rise buildings close to each other were made up of belled piles socketed in weak rock. To obtain the bearing capacity of a belled socketed pile in weak rock, in-situ loading tests should be performed, since it is not reasonable to use the experimental compression strength of the mudstone for design. The self-balanced loading test was applied to eight piles of the two buildings. For the best test results, the assembly of the cell should take different modes depending on the depth to which the pile is socketed in rock and the dimensions of the enlarged toe. Three assembly modes were used, and the tests were carried out successfully. The self-balanced loading tests yielded the large bearing capacities of the belled socketed piles, and several key design parameters were obtained. Analysis of the test data revealed the bearing performance of the pile tip, the pile side, and the whole pile. It was further found that the bearing capacity of a belled socketed pile in the mudstone decreases after the mudstone has been immersed. Among kinds of mineral ingredient in the mudstone
Latency Hiding in Dynamic Partitioning and Load Balancing of Grid Computing Applications
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak
2001-01-01
The Information Power Grid (IPG) concept developed by NASA aims to provide a metacomputing platform for large-scale distributed computations, hiding the intricacies of a highly heterogeneous environment while maintaining adequate security. In this paper, we propose a latency-tolerant partitioning scheme that dynamically balances processor workloads on the IPG and minimizes data movement and runtime communication. By simulating an unsteady adaptive mesh application on a wide area network, we study the performance of our load balancer under the Globus environment. The number of IPG nodes, the number of processors per node, and the interconnect speeds are parameterized to derive conditions under which the IPG would be suitable for parallel distributed processing of such applications. Experimental results demonstrate that effective solutions are achieved when the IPG nodes are connected by a high-speed asynchronous interconnection network.
A low overhead load balancing router for network-on-chip
NASA Astrophysics Data System (ADS)
Xiaofeng, Zhou; Lu, Liu; Zhangming, Zhu; Duan, Zhou
2016-11-01
The design of a router in a network-on-chip (NoC) system has an important impact on some performance criteria. In this paper, we propose a low overhead load balancing router (LOLBR) for 2D mesh NoC to enhance routing performance criteria with low hardware overhead. The proposed LOLBR employs a balance toggle identifier to control the initial routing direction of X or Y for flit injection. The simplified demultiplexers and multiplexers are used to handle output ports allocation and contention, which provide a guarantee of deadlock avoidance. Simulation results show that the proposed LOLBR yields an improvement of routing performance over the reported routing schemes in average packet latency by 26.5%. The layout area and power consumption of the network compared with the reported routing schemes are 15.3% and 11.6% less respectively. Project supported by the National Natural Science Foundation of China (Nos. 61474087, 61322405, 61376039).
Solar Load Voltage Tracking for Water Pumping: An Algorithm
NASA Astrophysics Data System (ADS)
Kappali, M.; Udayakumar, R. Y.
2014-07-01
Maximum power must be harnessed from a solar photovoltaic (PV) panel to minimize the effective cost of solar energy. This is accomplished by maximum power point tracking (MPPT), for which different methods exist. This paper proposes a simple algorithm to implement the load-voltage-based MPPT (MPPT_lv) method in a closed-loop environment for a centrifugal pump driven by a brushed PMDC motor. Simulation testing of the algorithm was performed, and the results are encouraging and support the proposed MPPT_lv method.
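A load-voltage-based tracking step can be sketched as a perturb-and-observe loop. This is a generic sketch of the idea behind MPPT_lv, not the paper's algorithm: for a PMDC-motor pump load, a higher load voltage corresponds to higher delivered power, so the controller keeps perturbing the converter duty cycle in whichever direction raised the load voltage. The step size and duty-cycle convention are illustrative assumptions.

```python
def mppt_lv_step(duty, direction, v_prev, v_now, delta=0.01):
    """One perturb-and-observe step on load voltage: if the last
    perturbation reduced the load voltage, reverse direction; then
    apply the next perturbation to the duty cycle. Returns the new
    (duty, direction) pair to carry into the next control interval."""
    if v_now < v_prev:          # last move hurt: back off
        direction = -direction
    return duty + direction * delta, direction
```

Driven at each control interval with the latest load-voltage measurement, the loop hill-climbs toward, and then oscillates tightly around, the maximum power point.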
Agent based modeling of "crowdinforming" as a means of load balancing at emergency departments.
Neighbour, Ryan; Oppenheimer, Luis; Mukhi, Shamir N; Friesen, Marcia R; McLeod, Robert D
2010-01-01
This work extends ongoing development of a framework for modeling the spread of contact-transmission infectious diseases. The framework is built upon Agent Based Modeling (ABM), with emphasis on urban-scale modeling integrated with institutional models of hospital emergency departments. The method presented here uses an ABM of an outbreak of influenza-like illness (ILI) with concomitant surges at hospital emergency departments, and illustrates preliminary modeling of 'crowdinforming' as an intervention. 'Crowdinforming', a component of 'crowdsourcing', is characterized as the dissemination of collected and processed information back to the 'crowd' via public access. The objective of the simulation is to allow for effective policy evaluation to better inform the public of expected wait times as part of their decision making process in attending an emergency department or clinic. In effect, this is a means of providing additional decision support garnered from a simulation, prior to real-world implementation. The conjecture is that better service delivery can be achieved under balanced patient loads, compared to situations where some emergency departments are overextended while others are underutilized. Load balancing optimization is a common notion in many operations, and the simulation illustrates that 'crowdinforming' is a potential tool when used as a process control parameter to balance the load at emergency departments, as well as an effective means to direct patients during an ILI outbreak with temporary clinics deployed. The information provided in the 'crowdinforming' model is readily available in a local context, although it requires thoughtful consideration in its interpretation. The extension to a wider dissemination of information via a web service is readily achievable and presents no technical obstacles, although political obstacles may be present. The 'crowdinforming' simulation is not limited to arrivals of patients at
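The patient-side decision rule that makes crowdinforming balance load can be sketched simply. The site names and the additive wait-plus-travel cost are illustrative assumptions, not the agent model's actual decision logic.

```python
def choose_department(wait_times, travel_times):
    """Crowdinforming sketch: given the published expected wait at each
    emergency department or clinic and the patient's travel time to it,
    pick the site minimizing total time. Publishing wait_times back to
    the 'crowd' is the feedback signal that balances load across sites."""
    return min(wait_times, key=lambda site: wait_times[site] + travel_times[site])
```

Because every arriving agent applies this rule against current published waits, overextended sites naturally shed demand to underutilized ones, which is the load-balancing feedback loop the simulation evaluates.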
Tom, Nathan M.; Yu, Yi-Hsiang; Wright, Alan D.; Lawson, Michael
2016-06-24
The aim of this paper is to describe how to control the power-to-load ratio of a novel wave energy converter (WEC) in irregular waves. The novel WEC that is being developed at the National Renewable Energy Laboratory combines an oscillating surge wave energy converter (OSWEC) with control surfaces as part of the structure; however, this work only considers one fixed geometric configuration. This work extends the optimal control problem so as to not solely maximize the time-averaged power, but to also consider the power-take-off (PTO) torque and foundation forces that arise because of WEC motion. The objective function of the controller will include competing terms that force the controller to balance power capture with structural loading. Separate penalty weights were placed on the surge-foundation force and PTO torque magnitude, which allows the controller to be tuned to emphasize either power absorption or load shedding. Results of this study found that, with proper selection of penalty weights, gains in time-averaged power would exceed the gains in structural loading while minimizing the reactive power requirement.
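The competing objective terms described above can be sketched as a single cost function; the one-degree-of-freedom signals, the quadratic penalty form, and the weight values below are illustrative assumptions rather than the paper's actual formulation.

```python
import numpy as np

def controller_objective(torque, velocity, surge_force, w_tau=1e-4, w_f=1e-6):
    """Cost to MINIMIZE: negative captured power plus structural-load penalties.

    torque, velocity, surge_force: time series sampled over one sea state.
    w_tau, w_f: penalty weights on PTO torque and surge-foundation force.
    """
    power = np.mean(torque * velocity)        # time-averaged absorbed power
    tau_pen = w_tau * np.mean(torque ** 2)    # PTO torque magnitude penalty
    f_pen = w_f * np.mean(surge_force ** 2)   # foundation force penalty
    return -(power - tau_pen - f_pen)         # minimizing this maximizes net
```

Raising `w_tau` or `w_f` shifts the optimum toward load shedding; setting both to zero recovers pure power maximization.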
NASA Astrophysics Data System (ADS)
Gautam, Amit Kr.; Gautam, Ajay Kr.; Patel, R. B.
2010-11-01
In order to provide load balancing in clustered sensor deployments, the upstream clusters (near the base station, BS) are kept smaller than the downstream ones (away from the BS). Geographic awareness is also desirable to further enhance energy efficiency, but it must be cost effective, since most current location-awareness strategies are either cost- and weight-inefficient (GPS) or complex, inaccurate, and unreliable in operation. This paper presents the design and implementation of a Geographic LOad BALanced (GLOBAL) clustering protocol for wireless sensor networks. A mathematical formulation is provided for determining the number of sensor nodes in each cluster, which enables uniform energy consumption under multi-hop data transmission towards the BS. Either the sensors can be deployed manually, or the clusters can be formed so that the sensors are efficiently distributed as per the formulation; the latter strategy is elaborated in this contribution. Methods for static clustering and custom cluster sizes with location awareness are also provided. Finally, the proposed work can also be implemented in low-mobility node applications.
A Universal Threshold for the Assessment of Load and Output Residuals of Strain-Gage Balance Data
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Volden, T.
2017-01-01
A new universal residual threshold for the detection of load and gage output residual outliers of wind tunnel strain-gage balance data was developed. The threshold works with both the Iterative and Non-Iterative Methods that are used in the aerospace testing community to analyze and process balance data. It also supports all known load and gage output formats that are traditionally used to describe balance data. The threshold's definition is based on an empirical electrical constant. First, the constant is used to construct a threshold for the assessment of gage output residuals. Then, the related threshold for the assessment of load residuals is obtained by multiplying the empirical electrical constant with the sum of the absolute values of all first partial derivatives of a given load component. The empirical constant equals 2.5 microV/V for the assessment of balance calibration or check load data residuals. A value of 0.5 microV/V is recommended for the evaluation of repeat point residuals because, by design, the calculation of these residuals removes errors that are associated with the regression analysis of the data itself. Data from a calibration of a six-component force balance are used to illustrate the application of the new threshold definitions to real-world balance calibration data.
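The two rules stated in the abstract translate directly into code; the 2.5 and 0.5 microV/V constants come from the abstract, while the example partial derivatives used below are invented for illustration.

```python
def output_threshold(repeat_point=False):
    """Gage-output residual threshold in microV/V: 2.5 for balance
    calibration or check load residuals, 0.5 for repeat-point residuals."""
    return 0.5 if repeat_point else 2.5

def load_threshold(partials, repeat_point=False):
    """Load residual threshold: the empirical electrical constant multiplied
    by the sum of the absolute first partial derivatives of the given load
    component with respect to the gage outputs."""
    return output_threshold(repeat_point) * sum(abs(p) for p in partials)
```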
The control algorithm improving performance of electric load simulator
NASA Astrophysics Data System (ADS)
Guo, Chenxia; Yang, Ruifeng; Zhang, Peng; Fu, Mengyao
2017-01-01
In order to improve the dynamic performance and signal-tracking accuracy of an electric load simulator, the influence of the moment of inertia, stiffness, friction, gaps, and other factors on system performance was analyzed on the basis of the working principle of the load simulator. A PID controller based on a wavelet neural network was used to compensate for the friction nonlinearity, while a gap inverse model was used to compensate for the gap nonlinearity. The compensation results were simulated in MATLAB. The simulations show that the system's tracking of a sinusoidal reference improved after compensation: the tracking error was significantly reduced, accuracy was greatly improved, and the dynamic performance of the system was enhanced.
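As a rough illustration of the gap compensation idea, a classical backlash (gap) inverse pre-shifts the command by the gap half-width in the direction of motion; the paper's actual inverse model and the wavelet-neural-network PID are not reproduced here.

```python
def gap_inverse(v, v_prev, a):
    """Return the compensated command u for desired plant input v.

    a: half-width of the gap (backlash); the direction of motion is taken
    from the sign of the command increment (idealized backlash-inverse)."""
    dv = v - v_prev
    if dv > 0:
        return v + a      # moving forward: lead the command by +a
    if dv < 0:
        return v - a      # moving backward: lead the command by -a
    return v              # no motion: leave the command unchanged
```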
A Novel Control algorithm based DSTATCOM for Load Compensation
NASA Astrophysics Data System (ADS)
R, Sreejith; Pindoriya, Naran M.; Srinivasan, Babji
2015-11-01
The Distribution Static Compensator (DSTATCOM) has been used as a custom power device for voltage regulation and load compensation in distribution systems. Controlling the switching angle has been the biggest challenge in DSTATCOM. To date, the Proportional Integral (PI) controller is widely used in practice for load compensation due to its simplicity. However, the PI controller fails to perform satisfactorily under parameter variations, nonlinearities, etc., making it very challenging to arrive at best/optimal tuning values for different operating conditions. Fuzzy logic and neural network based controllers require extensive training and perform well only under limited perturbations. Model predictive control (MPC) is a powerful control strategy used in the petrochemical industry, and its application has spread to other fields. MPC can handle various constraints, incorporate system nonlinearities, and utilize multivariate/univariate model information to provide an optimal control strategy. Although it finds extensive application in chemical engineering, its utility in power systems has been limited by the high computational effort, which is incompatible with the high sampling frequencies in these systems. In this paper, we propose a DSTATCOM based on Finite Control Set Model Predictive Control (FCS-MPC) with Instantaneous Symmetrical Component Theory (ISCT) based reference current extraction for load compensation and Unity Power Factor (UPF) action in current control mode. The proposed controller's performance is evaluated for a three-phase, three-wire, 415 V, 50 Hz distribution system in MATLAB Simulink, which demonstrates its applicability in real-life situations.
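A minimal sketch of the FCS-MPC idea for a current loop: enumerate the finite set of switching states, predict one step ahead with a simple model, and apply the state with the lowest cost. The single-phase RL load model and all numbers are illustrative assumptions, not the paper's converter model.

```python
def fcs_mpc_step(i_now, i_ref, v_dc, states, R=0.1, L=1e-3, Ts=1e-4):
    """Pick the switching state minimizing |i_ref - i_pred| one step ahead.

    states: the finite control set, e.g. [-1, 0, +1] for a simple leg.
    R, L: load resistance and inductance; Ts: sampling period."""
    best_state, best_cost = None, float("inf")
    for s in states:
        v = s * v_dc                                # voltage for this state
        i_pred = i_now + Ts / L * (v - R * i_now)   # forward-Euler prediction
        cost = abs(i_ref - i_pred)                  # current-tracking cost
        if cost < best_cost:
            best_state, best_cost = s, cost
    return best_state
```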
Smart sensing to drive real-time loads scheduling algorithm in a domotic architecture
NASA Astrophysics Data System (ADS)
Santamaria, Amilcare Francesco; Raimondo, Pierfrancesco; De Rango, Floriano; Vaccaro, Andrea
2014-05-01
Nowadays, power consumption is an important concern because of the associated costs and environmental sustainability problems. Automatic load control based on power consumption and usage cycles is an effective way to restrain costs. The purpose of such systems is to modulate the electricity demand, avoiding uncoordinated operation of the loads, by using intelligent techniques to manage them with real-time scheduling algorithms. The goal is to coordinate a set of electrical loads to optimize energy costs and consumption under the stipulated contract terms. The proposed algorithm uses two main notions: priority-driven loads and smart scheduling loads. Priority-driven loads can be turned off (put in stand-by) according to a priority policy established by the user if consumption exceeds a defined threshold; smart scheduling loads, by contrast, are scheduled so as not to interrupt their Life Cycle (LC), safeguarding the devices' functions and allowing the user to use the devices freely without the risk of exceeding the power threshold. Using these two notions and taking user requirements into account, the algorithm manages load activation and deactivation, allowing loads to complete their operation cycles without exceeding the consumption threshold, in an off-peak time range according to the electricity fare. This logic is inspired by industrial lean manufacturing, whose focus is to minimize waste and optimize the available resources.
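The priority-driven rule described above can be sketched as follows; the load names, power ratings, and priorities are invented for illustration.

```python
def shed_loads(loads, threshold):
    """Stand-by the lowest-priority loads until total demand fits the threshold.

    loads: dict name -> (power_kW, priority); higher priority = kept longer.
    Returns the set of load names placed in stand-by."""
    total = sum(power for power, _ in loads.values())
    standby = set()
    # walk loads from lowest to highest priority
    for name, (power, _) in sorted(loads.items(), key=lambda kv: kv[1][1]):
        if total <= threshold:
            break                 # demand now fits the contracted threshold
        standby.add(name)
        total -= power
    return standby
```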
Dynamic Harmony Search with Polynomial Mutation Algorithm for Valve-Point Economic Load Dispatch.
Karthikeyan, M; Raja, T Sree Ranga
2015-01-01
The economic load dispatch (ELD) problem is an important issue in the operation and control of modern power systems. The ELD problem is complex and nonlinear, with equality and inequality constraints, which makes it difficult to solve efficiently. This paper presents a new modification of the harmony search (HS) algorithm, named the dynamic harmony search with polynomial mutation (DHSPM) algorithm, to solve the ELD problem. In the DHSPM algorithm the key parameters of the HS algorithm, the harmony memory considering rate (HMCR) and the pitch adjusting rate (PAR), are changed dynamically, so there is no need to predefine them. Additionally, polynomial mutation is inserted in the updating step of the HS algorithm to favor exploration and exploitation of the search space. The DHSPM algorithm is tested on three power system cases consisting of 3, 13, and 40 thermal units. The computational results show that the DHSPM algorithm is more effective in finding better solutions than other computational-intelligence-based methods.
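A sketch of the two ingredients named in the abstract: dynamically updated HMCR/PAR and a polynomial mutation step. The linear schedules and the mutation distribution index are illustrative assumptions, not the paper's exact formulas.

```python
import random

def dynamic_params(it, max_it, hmcr=(0.7, 0.99), par=(0.1, 0.5)):
    """Move HMCR up and PAR down linearly as the search progresses."""
    f = it / max_it
    return (hmcr[0] + f * (hmcr[1] - hmcr[0]),   # HMCR grows toward 0.99
            par[1] - f * (par[1] - par[0]))      # PAR shrinks toward 0.1

def polynomial_mutation(x, lo, hi, eta=20.0):
    """Standard polynomial mutation of one decision variable in [lo, hi]."""
    u = random.random()
    if u < 0.5:
        delta = (2 * u) ** (1 / (eta + 1)) - 1
    else:
        delta = 1 - (2 * (1 - u)) ** (1 / (eta + 1))
    return min(max(x + delta * (hi - lo), lo), hi)
```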
Agent Based Modeling of “Crowdinforming” as a Means of Load Balancing at Emergency Departments
Neighbour, Ryan; Oppenheimer, Luis; Mukhi, Shamir N.; Friesen, Marcia R.; McLeod, Robert D.
2010-01-01
This work extends ongoing development of a framework for modeling the spread of contact-transmission infectious diseases. The framework is built upon Agent Based Modeling (ABM), with emphasis on urban scale modeling integrated with institutional models of hospital emergency departments. The method presented here includes ABM modeling an outbreak of influenza-like illness (ILI) with concomitant surges at hospital emergency departments, and illustrates the preliminary modeling of ‘crowdinforming’ as an intervention. ‘Crowdinforming’, a component of ‘crowdsourcing’, is characterized as the dissemination of collected and processed information back to the ‘crowd’ via public access. The objective of the simulation is to allow for effective policy evaluation to better inform the public of expected wait times as part of their decision making process in attending an emergency department or clinic. In effect, this is a means of providing additional decision support garnered from a simulation, prior to real world implementation. The conjecture is that more optimal service delivery can be achieved under balanced patient loads, compared to situations where some emergency departments are overextended while others are underutilized. Load balancing optimization is a common notion in many operations, and the simulation illustrates that ‘crowdinforming’ is a potential tool when used as a process control parameter to balance the load at emergency departments as well as serving as an effective means to direct patients during an ILI outbreak with temporary clinics deployed. The information provided in the ‘crowdinforming’ model is readily available in a local context, although it requires thoughtful consideration in its interpretation. The extension to a wider dissemination of information via a web service is readily achievable and presents no technical obstacles, although political obstacles may be present. The ‘crowdinforming’ simulation is not limited to arrivals of patients at
A New Load Residual Threshold Definition for the Evaluation of Wind Tunnel Strain-Gage Balance Data
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Volden, T.
2016-01-01
A new definition of a threshold for the detection of load residual outliers of wind tunnel strain-gage balance data was developed. The new threshold is defined as the product between the inverse of the absolute value of the primary gage sensitivity and an empirical limit of the electrical outputs of a strain-gage. The empirical limit of the outputs is 2.5 microV/V for balance calibration or check load residuals. A reduced limit of 0.5 microV/V is recommended for the evaluation of differences between repeat load points because, by design, the calculation of these differences removes errors in the residuals that are associated with the regression analysis of the data itself. The definition of the new threshold and different methods for the determination of the primary gage sensitivity are discussed. In addition, calibration data of a six-component force balance and a five-component semi-span balance are used to illustrate the application of the proposed new threshold definition to different types of strain-gage balances. During the discussion of the force balance example it is also explained how the estimated maximum expected output of a balance gage can be used to better understand results of the application of the new threshold definition.
NASA Astrophysics Data System (ADS)
Ghani Abro, Abdul; Mohamad-Saleh, Junita
2014-10-01
The prime motive of economic load dispatch (ELD) is to optimize the production cost of electrical power generation through appropriate division of the load demand among online generating units. Bio-inspired optimization algorithms have outperformed classical techniques for optimizing the production cost. The probability-selection artificial bee colony (PS-ABC) algorithm is a recently proposed variant of the ABC optimization algorithm. PS-ABC generates optimal solutions using three different mutation equations simultaneously, and results show improved performance of PS-ABC over the ABC algorithm. Nevertheless, all the mutation equations of PS-ABC are excessively self-reinforced and, hence, PS-ABC is prone to premature convergence. Therefore, this research work has replaced the mutation equations and improved the scout-bee stage of PS-ABC to enhance the algorithm's performance. The proposed algorithm has been compared with many ABC variants and numerous other optimization algorithms on benchmark functions and ELD test cases. The adapted ELD test cases comprise transmission losses, the multiple-fuel effect, the valve-point effect, and toxic gas emission constraints. The results reveal that, among the compared algorithms, the proposed algorithm has the best capability to yield the optimal solution for the problem.
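For context, the baseline ABC candidate update that PS-ABC's equations build on perturbs one dimension of a solution toward or away from a randomly chosen neighbour; the paper's replacement equations are not reproduced here.

```python
import random

def abc_candidate(x, neighbour, j):
    """Standard ABC candidate generation: v_j = x_j + phi * (x_j - x_k,j).

    x, neighbour: solution vectors; j: the single dimension to perturb;
    phi is drawn uniformly from [-1, 1]."""
    phi = random.uniform(-1.0, 1.0)
    v = list(x)
    v[j] = x[j] + phi * (x[j] - neighbour[j])
    return v
```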
Senay, Gabriel B.
2008-01-01
The main objective of this study is to present an improved modeling technique called Vegetation ET (VegET) that integrates commonly used water balance algorithms with a remotely sensed Land Surface Phenology (LSP) parameter to conduct operational vegetation water balance modeling of rainfed systems at the LSP's spatial scale using readily available global data sets. The VegET model was evaluated using Flux Tower data and a two-year simulation for the conterminous US. The model is capable of estimating the actual evapotranspiration (ETa) of rainfed crops and other vegetation types at the spatial resolution of the LSP on a daily basis, replacing the need to estimate crop- and region-specific crop coefficients.
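A minimal daily water-balance step in the spirit of VegET: actual ET is reference ET scaled by an LSP-derived coefficient and limited by available soil water. The variable names and the simple stress rule are assumptions, not the published model.

```python
def veget_step(sw, rain, eto, kcp, whc):
    """One daily soil-water-balance step.

    sw: soil water (mm); rain, eto: rainfall and reference ET (mm/day);
    kcp: LSP-derived crop coefficient; whc: water holding capacity (mm).
    Returns (new soil water, actual ET)."""
    ks = min(1.0, sw / (0.5 * whc))   # simple soil-water stress factor
    eta = ks * kcp * eto              # actual evapotranspiration (mm/day)
    sw = min(whc, sw + rain - eta)    # water beyond capacity is lost
    return max(0.0, sw), eta
```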
Uncertainty analysis on simple mass balance model to calculate critical loads for soil acidity.
Li, Harbin; McNulty, Steven G
2007-10-01
Simple mass balance equations (SMBE) of critical acid loads (CAL) in forest soil were developed to assess potential risks of air pollutants to ecosystems. However, to apply SMBE reliably at large scales, SMBE must be tested for adequacy and uncertainty. Our goal was to provide a detailed analysis of uncertainty in SMBE so that sound strategies for scaling up CAL estimates to the national scale could be developed. Specifically, we wanted to quantify CAL uncertainty under natural variability in 17 model parameters, and determine their relative contributions in predicting CAL. Results indicated that uncertainty in CAL came primarily from components of base cation weathering (BC(w); 49%) and acid neutralizing capacity (46%), whereas the most critical parameters were BC(w) base rate (62%), soil depth (20%), and soil temperature (11%). Thus, improvements in estimates of these factors are crucial to reducing uncertainty and successfully scaling up SMBE for national assessments of CAL.
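The kind of Monte Carlo uncertainty propagation described can be sketched with a toy two-parameter stand-in for the SMBE; the real analysis covers 17 parameters and the actual critical-acid-load equation, and the distributions below are invented.

```python
import random
import statistics

def toy_cal(bc_w, anc):
    """Stand-in for the critical acid load expression (illustrative only)."""
    return bc_w + anc

def propagate(n=10000, seed=42):
    """Sample the uncertain inputs and report the mean and spread of CAL."""
    rng = random.Random(seed)
    samples = [toy_cal(rng.gauss(2.0, 0.5),   # base cation weathering term
                       rng.gauss(1.0, 0.4))   # acid neutralizing capacity term
               for _ in range(n)]
    return statistics.mean(samples), statistics.stdev(samples)
```

Comparing the output spread while freezing one input at a time gives each parameter's relative contribution, which is how rankings like "BC(w) base rate: 62%" are obtained.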
PARALLEL IMPLEMENTATION OF THE TOPAZ OPACITY CODE: ISSUES IN LOAD-BALANCING
Sonnad, V; Iglesias, C A
2008-05-12
The TOPAZ opacity code explicitly includes configuration term structure in the calculation of bound-bound radiative transitions. This approach involves myriad spectral lines and requires the large computational capabilities of parallel processing computers. It is important, however, to make use of these resources efficiently. For example, an increase in the number of processors should yield a comparable reduction in computational time. This proportional 'speedup' indicates that very large problems can be addressed with massively parallel computers. Opacity codes can readily take advantage of parallel architecture since many intermediate calculations are independent. On the other hand, since the different tasks entail significantly disparate computational effort, load-balancing issues emerge so that parallel efficiency does not occur naturally. Several schemes to distribute the labor among processors are discussed.
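One standard remedy for disparate task costs, shown here for illustration, is longest-processing-time (LPT) greedy assignment: hand each task, largest first, to the currently lightest processor. The task costs are invented; TOPAZ's actual schemes are only discussed in the text.

```python
import heapq

def lpt_assign(costs, nprocs):
    """Assign tasks (list of costs) to processors, largest task first,
    always onto the currently least-loaded processor."""
    heap = [(0.0, p) for p in range(nprocs)]        # (load, processor id)
    heapq.heapify(heap)
    assignment = {p: [] for p in range(nprocs)}
    for i in sorted(range(len(costs)), key=lambda i: -costs[i]):
        load, p = heapq.heappop(heap)               # lightest processor
        assignment[p].append(i)
        heapq.heappush(heap, (load + costs[i], p))
    return assignment
```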
Load flow and state estimation algorithms for three-phase unbalanced power distribution systems
NASA Astrophysics Data System (ADS)
Madvesh, Chiranjeevi
Distribution load flow and state estimation are two important functions in distribution energy management systems (DEMS) and advanced distribution automation (ADA) systems. Distribution load flow analysis is a tool that helps analyze the state of a power distribution system under steady-state operating conditions. In this research, an effective and comprehensive load flow algorithm is developed that extensively incorporates the distribution system components. Distribution system state estimation is a mathematical procedure that aims to estimate the operating states of a power distribution system by utilizing the information collected from the available measurement devices in real time. An efficient and computationally effective state estimation algorithm adopting the weighted-least-squares (WLS) method has been developed in this research. Both algorithms are tested on different IEEE test feeders, and the results obtained are validated.
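For a linear measurement model z = Hx + e, the WLS estimate has the closed form sketched below; real feeders require the nonlinear measurement function h(x) and an iterated Gauss-Newton loop, and the numbers in the test are illustrative.

```python
import numpy as np

def wls_estimate(H, z, weights):
    """Solve min_x (z - Hx)^T W (z - Hx) with W = diag(weights)."""
    W = np.diag(weights)
    G = H.T @ W @ H                      # gain matrix
    return np.linalg.solve(G, H.T @ W @ z)
```

With two equally weighted measurements of the same quantity, the estimate is simply their average, which is a quick sanity check on the formula.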
NASA Astrophysics Data System (ADS)
Alfredsen, K. T.; Killingtveit, A.
2011-12-01
About 99% of the total energy production in Norway comes from hydropower, and the total production of about 120 TWh makes Norway Europe's largest hydropower producer. Most hydropower systems in Norway are high-head plants with mountain storage reservoirs and tunnels transporting water from the reservoirs to the power plants. In total, Norwegian reservoirs contribute around 50% of the total energy storage capacity in Europe. Current strategies to reduce greenhouse gas emissions from energy production involve an increased focus on renewable energy sources, e.g. the European Union's 20-20-20 goal, under which renewable energy sources should supply 20% of total energy production by 2020. To meet this goal, new renewable energy installations must be developed on a large scale in the coming years, and wind power is the main focus of new developments. Hydropower can contribute directly to increased renewable energy through new development or extensions to existing systems, but perhaps even more important is the potential to use hydropower systems with storage for load balancing in a system with an increased share of non-storable renewable energy. Even if new storage technologies are under development, hydro storage is the only technology available on a large scale and the most economically feasible alternative. In this respect the Norwegian system has high potential, both through direct use of existing reservoirs and through increased development of pumped-storage plants utilizing surplus wind energy to pump water and then producing during periods with low wind input. Through cables to Europe, Norwegian hydropower could also provide balancing power for the North European market. Increased peaking and more variable operation of the current hydropower system will present a number of technical and environmental challenges that need to be identified and mitigated. A more variable production will lead to fluctuating flow in receiving rivers and reservoirs, and it will also
Food composition and acid-base balance: alimentary alkali depletion and acid load in herbivores.
Kiwull-Schöne, Heidrun; Kiwull, Peter; Manz, Friedrich; Kalhoff, Hermann
2008-02-01
Alkali-enriched diets are recommended for humans to diminish the net acid load of their usual diet. In contrast, herbivores have to deal with a high dietary alkali impact on acid-base balance. Here we explore the role of nutritional alkali in experimentally induced chronic metabolic acidosis. Data were collected from healthy male adult rabbits kept in metabolism cages to obtain 24-h urine and arterial blood samples. Randomized groups consumed rabbit diets ad libitum, providing sufficient energy but variable alkali load. One subgroup (n = 10) received high-alkali food and approximately 15 mEq/kg ammonium chloride (NH4Cl) with its drinking water for 5 d. Another group (n = 14) was fed low-alkali food for 5 d and given approximately 4 mEq/kg NH4Cl daily for the last 2 d. The wide range of alimentary acid-base load was significantly reflected by renal base excretion, but normal acid-base conditions were maintained in the arterial blood. In rabbits fed a high-alkali diet, the excreted alkaline urine (pH(u) > 8.0) typically contained a large amount of precipitated carbonate, whereas in rabbits fed a low-alkali diet, both pH(u) and precipitate decreased considerably. During high-alkali feeding, application of NH4Cl likewise decreased pH(u), but arterial pH was still maintained with no indication of metabolic acidosis. During low-alkali feeding, a comparably small amount of added NH4Cl further lowered pH(u) and was accompanied by a significant systemic metabolic acidosis. We conclude that exhausted renal base-saving function by dietary alkali depletion is a prerequisite for growing susceptibility to NH4Cl-induced chronic metabolic acidosis in the herbivore rabbit.
Meta-heuristic algorithm to solve two-sided assembly line balancing problems
NASA Astrophysics Data System (ADS)
Wirawan, A. D.; Maruf, A.
2016-02-01
A two-sided assembly line is a set of sequential workstations where task operations can be performed at both sides of the line. This type of line is commonly used for the assembly of large-sized products: cars, buses, and trucks. This paper proposes a decoding algorithm with Teaching-Learning Based Optimization (TLBO), a recently developed nature-inspired search method, to solve the two-sided assembly line balancing problem (TALBP). The algorithm aims to minimize the number of mated workstations for a given cycle time without violating the synchronization constraints. The correlation between the input parameters and the emergence point of the objective function value is tested using scenarios generated by design of experiments. A two-sided assembly line operated by a multinational manufacturing company in Indonesia is considered as the object of this paper. The results of the proposed algorithm show a reduction in workstations and indicate a negative correlation between the emergence point of the objective function value and the population size used.
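A minimal TLBO iteration (teacher phase plus learner phase) on a continuous toy objective is sketched below; the decoding layer that maps solutions to TALBP task assignments is not reproduced, and bounds and the toy objective are assumptions.

```python
import random

def tlbo_step(pop, f, lo, hi, rng=random):
    """One TLBO generation minimizing f over box-bounded real vectors."""
    best = min(pop, key=f)                              # the "teacher"
    dims = len(pop[0])
    mean = [sum(x[d] for x in pop) / len(pop) for d in range(dims)]
    new_pop = []
    for x in pop:
        # teacher phase: move toward teacher, away from the class mean
        tf = rng.choice([1, 2])                         # teaching factor
        cand = [min(max(x[d] + rng.random() * (best[d] - tf * mean[d]), lo), hi)
                for d in range(dims)]
        x = cand if f(cand) < f(x) else x               # greedy acceptance
        # learner phase: interact with a random partner
        partner = rng.choice(pop)
        step = 1 if f(x) < f(partner) else -1
        cand = [min(max(x[d] + step * rng.random() * (x[d] - partner[d]), lo), hi)
                for d in range(dims)]
        new_pop.append(cand if f(cand) < f(x) else x)   # greedy acceptance
    return new_pop
```

Because both phases accept a candidate only if it improves, the best objective value in the population can never get worse between generations.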
Li, Bai; Gong, Li-gang; Yang, Wen-lun
2014-01-01
Unmanned combat aerial vehicles (UCAVs) have been of great interest to military organizations throughout the world due to their outstanding capabilities to operate in dangerous or hazardous environments. UCAV path planning aims to obtain an optimal flight route with the threats and constraints in the combat field well considered. In this work, a novel artificial bee colony (ABC) algorithm improved by a balance-evolution strategy (BES) is applied in this optimization scheme. In this new algorithm, convergence information during the iteration is fully utilized to manipulate the exploration/exploitation accuracy and to pursue a balance between local exploitation and global exploration capabilities. Simulation results confirm that BE-ABC algorithm is more competent for the UCAV path planning scheme than the conventional ABC algorithm and two other state-of-the-art modified ABC algorithms. PMID:24790555
Michel, Manon; Kapfer, Sebastian C; Krauth, Werner
2014-02-07
In this article, we present an event-driven algorithm that generalizes the recent hard-sphere event-chain Monte Carlo method without introducing discretizations in time or in space. A factorization of the Metropolis filter and the concept of infinitesimal Monte Carlo moves are used to design a rejection-free Markov-chain Monte Carlo algorithm for particle systems with arbitrary pairwise interactions. The algorithm breaks detailed balance, but satisfies maximal global balance and performs better than the classic, local Metropolis algorithm in large systems. The new algorithm generates a continuum of samples of the stationary probability density. This allows us to compute the pressure and stress tensor as a byproduct of the simulation without any additional computations.
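The factorized Metropolis filter at the heart of the method accepts a move only if every pair factor accepts independently, rather than filtering on the total energy change. The sketch below shows that filter in isolation, with toy pair energies; the infinitesimal-move and event-chain machinery is omitted.

```python
import math
import random

def factorized_accept(pair_deltas, beta=1.0, rng=random):
    """Factorized Metropolis filter for a proposed move.

    pair_deltas: energy changes of each interacting pair under the move.
    Each pair accepts with probability min(1, exp(-beta * dE)); a single
    pair veto rejects the whole move."""
    for dE in pair_deltas:
        if rng.random() >= math.exp(-beta * max(0.0, dE)):
            return False               # one pair vetoes the move
    return True
```

Because acceptance is decided pair by pair, the pair that triggers the first veto identifies which interaction "lifts" the move, which is what makes the rejection-free event-chain formulation possible.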
NASA Astrophysics Data System (ADS)
Harteveld, Casper
At many occasions we are asked to achieve a “balance” in our lives: when it comes, for example, to work and food. Balancing is crucial in game design as well as many have pointed out. In games with a meaningful purpose, however, balancing is remarkably different. It involves the balancing of three different worlds, the worlds of Reality, Meaning, and Play. From the experience of designing Levee Patroller, I observed that different types of tensions can come into existence that require balancing. It is possible to conceive of within-worlds dilemmas, between-worlds dilemmas, and trilemmas. The first, the within-world dilemmas, only take place within one of the worlds. We can think, for example, of a user interface problem which just relates to the world of Play. The second, the between-worlds dilemmas, have to do with a tension in which two worlds are predominantly involved. Choosing between a cartoon or a realistic style concerns, for instance, a tension between Reality and Play. Finally, the trilemmas are those in which all three worlds play an important role. For each of the types of tensions, I will give in this level a concrete example from the development of Levee Patroller. Although these examples come from just one game, I think the examples can be exemplary for other game development projects as they may represent stereotypical tensions. Therefore, to achieve harmony in any of these forthcoming games, it is worthwhile to study the struggles we had to deal with.
Physics Based Model for Cryogenic Chilldown and Loading. Part I: Algorithm
NASA Technical Reports Server (NTRS)
Luchinsky, Dmitry G.; Smelyanskiy, Vadim N.; Brown, Barbara
2014-01-01
We report progress in the development of a physics-based model for cryogenic chilldown and loading. The chilldown and loading is modeled as fully separated, non-equilibrium two-phase flow of cryogenic fluid thermally coupled to the pipe walls. The solution follows closely the nearly-implicit and semi-implicit algorithms developed for autonomous control of thermal-hydraulic systems by Idaho National Laboratory. Special attention is paid to the treatment of instabilities. The model is applied to the analysis of chilldown in the rapid loading system developed at NASA Kennedy Space Center. A nontrivial characteristic feature of the analyzed chilldown regime is its active control by dump valves. The numerical predictions are in reasonable agreement with the experimental time traces. The obtained results pave the way to the development of autonomous loading operations on the ground and in space.
NASA Astrophysics Data System (ADS)
Sarwono, A. A.; Ai, T. J.; Wigati, S. S.
2017-01-01
The Vehicle Routing Problem (VRP) is a method for determining the optimal routes of vehicles serving customers starting from a depot. The combination of the two most important problems in distribution logistics, called the two-dimensional loading vehicle routing problem, is considered in this paper. This problem combines the loading of the freight into the vehicles with the successive routing of the vehicles along the routes. Moreover, an additional feature of last-in-first-out loading sequences is also considered. In the sequential two-dimensional loading capacitated vehicle routing problem (sequential 2L-CVRP), the loading must be compatible with the trip sequence: when the vehicle arrives at a customer i, there must be no obstacle (items for other customers) between the items of i and the loading door (rear part) of the vehicle. In other words, it must not be necessary to move other customers' items while unloading the items of i. In accordance with these conditions, a program to solve the sequential 2L-CVRP is required. A nearest neighbour algorithm for solving the routing problem is presented, in which the loading component of the problem is solved through a collection of five packing heuristics.
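The nearest neighbour route construction mentioned above can be sketched as follows; the distance matrix and depot index are illustrative, and the five packing heuristics that check 2D loading feasibility are not shown.

```python
def nearest_neighbour_route(dist, depot=0):
    """Build one route by repeatedly visiting the closest unvisited customer.

    dist: square matrix of pairwise distances; returns the visiting order,
    starting and ending at the depot."""
    n = len(dist)
    unvisited = set(range(n)) - {depot}
    route, current = [depot], depot
    while unvisited:
        nxt = min(unvisited, key=lambda j: dist[current][j])
        route.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    return route + [depot]            # close the tour back at the depot
```

In the sequential 2L-CVRP each candidate extension would additionally be accepted only if a packing heuristic finds a feasible LIFO-compatible loading.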
Fawley, William M.
2002-03-25
We discuss the underlying reasoning behind and the details of the numerical algorithm used in the GINGER free-electron laser (FEL) simulation code to load the initial shot noise microbunching on the electron beam. In particular, we point out that there are some additional subtleties which must be followed for multi-dimensional codes which are not necessary for one-dimensional formulations. Moreover, requiring that the higher harmonics of the microbunching also be properly initialized with the correct statistics leads to additional complexities. We present some numerical results, including the predicted incoherent spontaneous emission, as tests of the shot noise algorithm's correctness.
NASA Astrophysics Data System (ADS)
Kizilkaya, Elif A.; Gupta, Surendra M.
2005-11-01
In this paper, we compare the impact of different disassembly line balancing (DLB) algorithms on the performance of our recently introduced Dynamic Kanban System for Disassembly Line (DKSDL) to accommodate the vagaries of uncertainties associated with disassembly and remanufacturing processing. We consider a case study to illustrate the impact of various DLB algorithms on the DKSDL. The approach to the solution, scenario settings, results and the discussions of the results are included.
NASA Astrophysics Data System (ADS)
Esin, S. B.; Trifonov, N. N.; Sukhorukov, Yu. G.; Yurchenko, A. Yu.; Grigor'eva, E. B.; Snegin, I. P.; Zhivykh, D. A.; Medvedkin, A. V.; Ryabich, V. A.
2015-09-01
More than 30 power units of thermal power stations based on the nondeaerating heat balance diagram operate successfully in the former Soviet Union. Most of them are power units with a power of 300 MW, equipped with HTGZ and LMZ turbines. They operate according to a variable electric load curve characterized by deep reductions during night minimums. Additional extension of the power unit adjustment range makes it possible to follow the dispatch load curve and obtain profit for the electric power plant. The objective of this research is to carry out computational and experimental studies of the operating regimes of the regeneration system of steam-turbine plants within the extended adjustment range and under conditions when the constraints on the regeneration system and its equipment are removed. Constraints of the heat balance diagram that reduce the power unit efficiency when the adjustment range is extended have been considered. Test results are presented for the nondeaerating heat balance diagram with the HTGZ turbine. Operation of the turbine-driven and electric feed pumps was studied at power unit loads of 120-300 MW. The reliability of feed pump operation is confirmed by a stable vibratory condition and the absence of cavitation noise and vibration at the frequency that characterizes the cavitation condition, as well as by maintenance of the oil temperature downstream of the bearings within normal limits. The cavitation performance of the pumps in the studied range of operation has been determined. Technical solutions are proposed for providing profitable and stable operation of regeneration systems when the range of adjustment of the power unit load is extended. A nondeaerating diagram with high-pressure preheater (HPP) condensate discharge to the mixer has been developed and studied on an operating power unit fitted with a deaeratorless thermal circuit for removing the HPP heating steam condensate to the mixer.
He, Fulin; Cao, Yang; Zhang, Jun Jason; Wei, Jiaolong; Zhang, Yingchen; Muljadi, Eduard; Gao, Wenzhong
2016-11-21
Because ensuring flexible and reliable data routing is indispensable for the integration of Advanced Metering Infrastructure (AMI) networks, we propose a security-oriented and load-balancing wireless data routing scheme. A novel utility function is designed based on the security routing scheme. We then model the interactive security-oriented routing strategy among meter data concentrators or smart grid meters as a mixed-strategy network formation game. Finally, the problem yields a stable probabilistic routing scheme obtained with the proposed distributed learning algorithm. One contribution is a study of how different types of applications affect the routing selection strategy and its tendency. Another contribution is that the chosen strategy of our mixed routing can adaptively converge to a new mixed-strategy Nash equilibrium (MSNE) during the learning process in the smart grid.
Chen, Yousu; Huang, Zhenyu; Chavarría-Miranda, Daniel
2010-09-30
Contingency analysis is a key function in the Energy Management System (EMS) to assess the impact of various combinations of power system component failures based on state estimation. Contingency analysis is also extensively used in power market operation for feasibility test of market solutions. High performance computing holds the promise of faster analysis of more contingency cases for the purpose of safe and reliable operation of today’s power grids with less operating margin and more intermittent renewable energy sources. This paper evaluates the performance of counter-based dynamic load balancing schemes for massive contingency analysis under different computing environments. Insights from the performance evaluation can be used as guidance for users to select suitable schemes in the application of massive contingency analysis. Case studies, as well as MATLAB simulations, of massive contingency cases using the Western Electricity Coordinating Council power grid model are presented to illustrate the application of high performance computing with counter-based dynamic load balancing schemes.
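A toy sketch of the counter-based dynamic load balancing idea the paper evaluates: workers repeatedly grab the next contingency index from a shared counter, so faster workers naturally take more cases. This is a hedged illustration, not the paper's implementation; the task count and "analysis" function are placeholders, not the WECC model.

```python
# Counter-based dynamic work distribution: a shared counter hands out
# contingency case indices; each worker loops until cases run out.
import threading

def run_contingencies(num_cases, num_workers, analyze):
    state = {"next": 0}
    lock = threading.Lock()
    results = {}

    def worker():
        while True:
            with lock:                   # atomic fetch-and-increment
                case = state["next"]
                state["next"] += 1
            if case >= num_cases:
                return
            results[case] = analyze(case)  # distinct keys, safe under the GIL

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# Toy "analysis": flag every third contingency as insecure.
res = run_contingencies(9, num_workers=3, analyze=lambda c: c % 3 == 0)
print(sorted(res))  # all 9 cases completed exactly once
```

The same pattern generalizes to MPI-based schemes, where one rank serves the counter and the others request case indices from it.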
NASA Astrophysics Data System (ADS)
Bunnoon, Pituk; Chalermyanont, Kusumal; Limsakul, Chusak
2010-02-01
This paper proposes discrete-transform and neural-network algorithms to obtain the monthly peak load demand in mid-term load forecasting. The Daubechies-2 (db2) mother wavelet is employed to decompose the original signal into high-pass- and low-pass-filtered components before a feed-forward back-propagation neural network determines the forecasting results. The historical data records for 1997-2007 of the Electricity Generating Authority of Thailand (EGAT) are used as reference. In this study, historical information on peak load demand (MW), mean temperature (Tmean), consumer price index (CPI), and industrial index (economic: IDI) is used as feature inputs to the network. The experimental results show that the Mean Absolute Percentage Error (MAPE) is approximately 4.32%. These forecasting results can be used for fuel planning and unit commitment of the power system in the future.
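The error metric reported in the abstract can be stated compactly; the monthly peak-load figures below are made-up illustrative values, not EGAT data:

```python
# Mean Absolute Percentage Error, the metric behind the reported 4.32%.
def mape(actual, forecast):
    """MAPE in percent: mean of |actual - forecast| / |actual|, times 100."""
    return 100.0 / len(actual) * sum(
        abs(a - f) / abs(a) for a, f in zip(actual, forecast))

actual   = [20100.0, 20500.0, 21000.0]   # invented monthly peaks (MW)
forecast = [19800.0, 20700.0, 21200.0]
print(round(mape(actual, forecast), 2))
```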
NASA Astrophysics Data System (ADS)
Pitakaso, Rapeepan; Sethanan, Kanchana
2016-02-01
This article proposes the differential evolution algorithm (DE) and a modified differential evolution algorithm (DE-C) to solve the simple assembly line balancing problem type 1 (SALBP-1) and SALBP-1 with a maximum number of machine types per workstation (SALBP-1M). The proposed algorithms are tested and compared with existing effective heuristics using various sets of test instances found in the literature. The computational results show that the proposed heuristics are among the best methods compared with the other approaches.
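For readers unfamiliar with DE, a minimal DE/rand/1/bin loop on a toy continuous objective (the sphere function) is sketched below. The paper adapts DE to the discrete SALBP-1; this only shows the basic mutation/crossover/selection cycle that such variants build on, with all parameter values chosen arbitrarily:

```python
# Minimal DE/rand/1/bin: mutate with a scaled difference of two random
# members, binomially cross over with the target, keep the better vector.
import random

def de_minimize(f, dim, bounds, pop_size=20, F=0.5, CR=0.9, gens=200, seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)       # guarantees one mutated gene
            trial = [
                pop[a][j] + F * (pop[b][j] - pop[c][j])
                if (rng.random() < CR or j == jrand) else pop[i][j]
                for j in range(dim)
            ]
            if f(trial) <= fit[i]:           # greedy selection
                pop[i], fit[i] = trial, f(trial)
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

x, fx = de_minimize(lambda v: sum(t * t for t in v), dim=3, bounds=(-5, 5))
print(fx)  # near zero after 200 generations
```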
NASA Astrophysics Data System (ADS)
Sohrabi, Foad; Davidson, Timothy N.
2016-06-01
We consider the problem of power allocation for the single-cell multi-user (MU) multiple-input single-output (MISO) downlink with quality-of-service (QoS) constraints. The base station acquires an estimate of the channels and, for a given beamforming structure, designs the power allocation so as to minimize the total transmission power required to ensure that target signal-to-interference-and-noise ratios at the receivers are met, subject to a specified outage probability. We consider scenarios in which the errors in the base station's channel estimates can be modelled as being zero-mean and Gaussian. Such a model is particularly suitable for time division duplex (TDD) systems with quasi-static channels, in which the base station estimates the channel during the uplink phase. Under that model, we employ a precise deterministic characterization of the outage probability to transform the chance-constrained formulation to a deterministic one. Although that deterministic formulation is not convex, we develop a coordinate descent algorithm that can be shown to converge to a globally optimal solution when the starting point is feasible. Insight into the structure of the deterministic formulation yields approximations that result in coordinate update algorithms with good performance and significantly lower computational cost. The proposed algorithms provide better performance than existing robust power loading algorithms that are based on tractable conservative approximations, and can even provide better performance than robust precoding algorithms based on such approximations.
Genetic Algorithms for an Optimal Line Balancing Problem with Workers of Different Skill Levels
NASA Astrophysics Data System (ADS)
Iima, Hitoshi; Karuno, Yoshiyuki; Kise, Hiroshi
This paper discusses a new combinatorial optimization problem which occurs in line balancing for real assembly lines demanding skilled operations. In contrast to conventional assembly lines, such as automotive lines, in which each operation is associated with a standard processing time, it is assumed that each operation time depends on the assigned worker's skill and that there exists an upper bound on the number of operations to be assigned to each worker. Three genetic algorithms (GAs) which have different genotypes and different decoding procedures are discussed for this problem. The genotype in the first GA is expressed by sequencing the operation numbers, and an effective heuristic rule is introduced into the decoding procedure. In the second GA, the genotype is expressed by sequencing the sets of operations to be assigned to each worker. In the third GA, the genotype is expressed by sequencing the worker numbers executing each operation in the order of the operation numbers. These GAs are compared by numerical experiments based on real conditions.
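One plausible (hypothetical, not the paper's heuristic rule) decoding for the first genotype: take operations in chromosome order, fill the current worker up to the per-worker operation bound, and score the assignment by the resulting cycle time under skill-dependent times. The time matrix is illustrative only:

```python
# Hypothetical decoding of an operation-sequence genotype with
# skill-dependent processing times and a per-worker operation cap.
def decode(chromosome, times, max_ops):
    """times[w][op] = processing time of op when done by worker w.
    Returns the cycle time (maximum worker load) of the decoded assignment."""
    n_workers = len(times)
    loads = [0.0] * n_workers
    counts = [0] * n_workers
    w = 0
    for op in chromosome:
        if counts[w] >= max_ops:   # worker full: move to the next one
            w += 1
        loads[w] += times[w][op]
        counts[w] += 1
    return max(loads)

times = [[2.0, 3.0, 4.0, 2.0],   # skilled worker: shorter times
         [4.0, 5.0, 6.0, 3.0]]   # less-skilled worker
print(decode([0, 2, 1, 3], times, max_ops=2))
```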
Simulation of Propellant Loading System Senior Design Implement in Computer Algorithm
NASA Technical Reports Server (NTRS)
Bandyopadhyay, Alak
2010-01-01
Propellant loading from the storage tank to the external tank is one of the very important and time-consuming pre-launch ground operations for the launch vehicle. The propellant loading system is a complex integrated system involving many physical components such as the storage tank filled with cryogenic fluid at a very low temperature, the long pipeline connecting the storage tank with the external tank, the external tank along with the flare stack, and vent systems for releasing the excess fuel. Some of the parameters most useful for design purposes are the prediction of pre-chill time, loading time, amount of fuel lost, the maximum pressure rise, etc. The physics involved in the mathematical modeling is quite complex because the process is unsteady, there is phase change as some of the fuel passes from the liquid to the gas state, and there is conjugate heat transfer in the pipe walls as well as between the solid and fluid regions. The simulation is very tedious and time consuming, too. Overall, this is a complex system, and the objective of the work is the students' involvement in the parametric study and optimization of numerical modeling toward the design of such a system. The students first have to become familiar with and understand the physical process, the related mathematics, and the numerical algorithm. The work involves exploring (i) improved algorithms to make the transient simulation computationally effective (reduced CPU time) and (ii) a parametric study to evaluate design parameters by changing the operational conditions.
2009-07-30
Investigation of Control Algorithms for Tracked Vehicle Mobility Load Emulation for a Combat Hybrid Electric Power System (Jarrett Goodell et al.). The report covers control algorithms for emulating the mobility load of an approximately 22-ton tracked vehicle on a combat hybrid electric power system; components tested and developed include motors, generators, batteries, inverters, DC-DC converters, thermal management, and pulse power.
Żak-Gołąb, Agnieszka; Rzemieniuk, Anna; Smętek, Joanna; Sordyl, Ryszard; Tyrka, Agata; Sosnowski, Maciej; Zahorska-Markiewicz, Barbara; Chudek, Jerzy; Olszanecka-Glinianowicz, Magdalena
2012-01-01
Introduction Oral water load may increase energy expenditure (EE) by stimulation of sympathetically dependent thermogenesis. Thus, drinking water may be helpful in weight reduction. The aim of the study is to assess the influence of water load on energy expenditure and sympathetic activity in obese and normal weight women. Material and methods Forty-five women were included. Energy expenditure was measured twice, in the morning and after an oral water load, by the indirect calorimetric method. The heart rate variability parameters low frequency (LF), high frequency (HF), the LF/HF index, the standard deviation of normal RR intervals (SDNN) and the root mean square difference among successive normal RR intervals (rMSSD) were used for the indirect assessment of the sympatho-vagal balance. Results Resting energy expenditure (REE) was significantly higher in obese than in normal weight women (1529 ±396 kcal/day vs. 1198 ±373 kcal/day; p = 0.02). In both study groups EE increased significantly after water load (by 20% and by 12%, corresponding to 8.6 kcal/h and 5.2 kcal/h, respectively), while the LF/HF index increased simultaneously. The increase of EE did not exceed the energetic cost of heating the water from room to body temperature (15 kcal/1000 ml). There was no correlation between changes of energy expenditure and heart rate variability (HRV) parameters. Conclusions The increase of EE induced by water load is mostly related to the heating of the consumed water to body temperature. The assessment of autonomic balance by means of standard HRV indices was found insufficient for detecting the actually operating mechanisms. PMID:23319974
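The 15 kcal/1000 ml figure follows from the specific heat of water (about 1 kcal per litre per degree Celsius); a room temperature of 22 °C is assumed here for the arithmetic:

```python
# Back-of-envelope check of the abstract's heating cost: 1000 ml of water
# warmed from room (~22 degrees C, assumed) to body temperature (37 degrees C).
volume_l = 1.0
delta_t = 37.0 - 22.0               # temperature rise in degrees C
energy_kcal = volume_l * delta_t * 1.0   # ~1 kcal per litre per degree C
print(energy_kcal)  # 15.0, matching the 15 kcal/1000 ml in the text
```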
NASA Astrophysics Data System (ADS)
Tang, Qiuhua; Li, Zixiang; Zhang, Liping; Floudas, C. A.; Cao, Xiaojun
2015-09-01
Due to the NP-hardness of the two-sided assembly line balancing (TALB) problem, the multiple constraints existing in real applications are less studied, especially when one task is involved in several constraints. In this paper, an effective hybrid algorithm is proposed to address the TALB problem with multiple constraints (TALB-MC). Considering the discrete attribute of TALB-MC and the continuous attribute of the standard teaching-learning-based optimization (TLBO) algorithm, the random-keys method is employed in the task permutation representation to bridge the gap between them. Subsequently, a special mechanism for handling multiple constraints is developed. In this mechanism, the direction constraint of each task is ensured by a direction check and adjustment. The zoning constraints and the synchronism constraints are satisfied by teasing out the hidden correlations among constraints. The positional constraint is allowed to be violated to some extent in decoding and is punished in the cost function. Finally, with the TLBO seeking the global optimum, variable neighborhood search (VNS) is hybridized to extend the local search space. The experimental results show that the proposed hybrid algorithm outperforms the late acceptance hill-climbing (LAHC) algorithm for TALB-MC in most cases, especially for large-size problems with multiple constraints, and demonstrates a good balance between exploration and exploitation. This research proposes an effective and efficient algorithm for solving the TALB-MC problem by hybridizing the TLBO and VNS.
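The random-keys bridge between continuous optimizers and discrete permutations can be shown in a few lines; this sketch shows only the encoding idea, not the paper's constraint handling:

```python
# Random keys: each task carries a real-valued key; the task permutation
# is the ranking of the keys, so any continuous optimizer (here, TLBO)
# can evolve key vectors while fitness is evaluated on permutations.
def keys_to_permutation(keys):
    """Tasks ordered by ascending key value."""
    return sorted(range(len(keys)), key=lambda i: keys[i])

keys = [0.73, 0.12, 0.95, 0.40]      # one candidate solution's key vector
print(keys_to_permutation(keys))     # [1, 3, 0, 2]
```

Because any real vector decodes to a valid permutation, TLBO's arithmetic update rules never produce infeasible task orderings.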
Estimating sediment loads in an intra-Apennine catchment: a balance between modeling and monitoring
NASA Astrophysics Data System (ADS)
Pelacani, Samanta; Cassi, Paola; Borselli, Lorenzo
2010-05-01
In this study we compare the results of a soil erosion model applied at the watershed scale to the suspended sediment measured in a stream network affected by motorway construction. A sediment delivery model is applied at the watershed scale; the evaluation of sediment delivery is related to a flux connectivity index that describes the internal linkages between runoff and sediment sources in the upper parts of catchments and the receiving sinks. An analysis of fine suspended sediment transport and storage was conducted for a stream inlet of the Bilancino reservoir, a principal water supply of the city of Florence. The suspended sediments were collected from a section of river defined as a closed system using time-integrating suspended sediment samplers. The sediment deposited within the sampling traps was recovered after storm events and provides information on the overall contribution of the potential sediment sources. Hillslope gross erosion was assessed by a USLE-type approach. A soil survey at 1:25,000 scale and a soil database were created to calculate, for each soil unit, the erodibility coefficient K using a new algorithm (Salvador Sanchis et al., 2007). The erosivity coefficient R was obtained by applying geostatistical methods taking into account elevation and valley morphology. Furthermore, we evaluate a sediment delivery ratio (SDR) for the entire watershed. This factor is used to correct the output of the USLE-type model. The innovative approach consists in an SDR factor that is variable in space and time because it is related to a flux connectivity index IC (Borselli et al., 2008) based on the distribution of land use and topographic features. The aim of this study is to understand how the model simulates the real processes that intervene in the watershed and subsequently to calibrate the model with the results obtained from the monitoring of suspended sediment in the streams. From first results, it appears that human activities by highway construction have resulted in
NASA Astrophysics Data System (ADS)
Siragusa, R.; Perret, E.; Nguyen, H. V.; Lemaître-Auger, P.; Tedjini, S.; Caloz, C.
2011-06-01
A fully automated tool for designing CRLH interdigital microstrip structures using a co-design synthesis computational approach is proposed and demonstrated experimentally. This approach uses an electromagnetic simulator in conjunction with a genetic algorithm to synthesize and optimize a balanced CRLH interdigital microstrip transmission line. The high sensitivity of a long balanced transmission line to fabrication tolerances is controlled by the use of a high precision 3D simulator. The 2.5D simulator used was found insufficient for a large number of unit cells. A 13 UC CRLH transmission line is designed with the proposed approach. The response sensitivity of the balanced transmission lines to the over/under-etching factor is highlighted by comparing the measurements of four lines with different factors. The effect of over/under-etching is significant for values larger than 10 μm.
Tom, Nathan M.; Yu, Yi-Hsiang; Wright, Alan D.; Lawson, Michael
2016-06-01
The aim of this paper is to describe how to control the power-to-load ratio of a novel wave energy converter (WEC) in irregular waves. The novel WEC that is being developed at the National Renewable Energy Laboratory combines an oscillating surge wave energy converter (OSWEC) with control surfaces as part of the structure; however, this work only considers one fixed geometric configuration. This work extends the optimal control problem so as to not solely maximize the time-averaged power, but to also consider the power-take-off (PTO) torque and foundation forces that arise because of WEC motion. The objective function of the controller will include competing terms that force the controller to balance power capture with structural loading. Separate penalty weights were placed on the surge-foundation force and PTO torque magnitude, which allows the controller to be tuned to emphasize either power absorption or load shedding. Results of this study found that, with proper selection of penalty weights, gains in time-averaged power would exceed the gains in structural loading while minimizing the reactive power requirement.
An IPv6 routing lookup algorithm using weight-balanced tree based on prefix value for virtual router
NASA Astrophysics Data System (ADS)
Chen, Lingjiang; Zhou, Shuguang; Zhang, Qiaoduo; Li, Fenghua
2016-10-01
Virtual routers enable the coexistence of different networks on the same physical facility and have lately attracted a great deal of attention from researchers. As the number of IPv6 addresses in virtual routers is rapidly increasing, designing an efficient IPv6 routing lookup algorithm is of great importance. In this paper, we present an IPv6 lookup algorithm called the weight-balanced tree (WBT). WBT merges the Forwarding Information Bases (FIBs) of virtual routers into one spanning tree and compresses the space cost. WBT's average-case and worst-case time complexities for lookup and update are both O(log N), and its space complexity is O(cN), where N is the size of the routing table and c is a constant. Experiments show that WBT reduces Static Random Access Memory (SRAM) cost by more than 80% in comparison with separation schemes. WBT also achieves the smallest average search depth compared with other homogeneous algorithms.
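For context, the operation a structure like WBT accelerates is longest-prefix match against a FIB. The linear-scan sketch below shows only the lookup semantics (not the WBT data structure); the prefixes and next hops are made-up IPv6 examples:

```python
# Longest-prefix match over a toy FIB: the most specific (longest)
# matching prefix determines the next hop.
import ipaddress

def longest_prefix_match(fib, addr):
    """fib: {network_string: next_hop}. Returns the next hop of the
    longest matching prefix, or None if nothing matches."""
    a = ipaddress.ip_address(addr)
    best_len, hop = -1, None
    for net_str, nh in fib.items():
        net = ipaddress.ip_network(net_str)
        if a in net and net.prefixlen > best_len:
            best_len, hop = net.prefixlen, nh
    return hop

fib = {"2001:db8::/32": "A", "2001:db8:1::/48": "B"}
print(longest_prefix_match(fib, "2001:db8:1::42"))   # B: the /48 wins
print(longest_prefix_match(fib, "2001:db8:ffff::1")) # A: only the /32 matches
```

WBT's contribution is doing this in O(log N) over the merged FIBs of many virtual routers, instead of the O(N) scan shown here.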
Two Dimensional Array Based Overlay Network for Balancing Load of Peer-to-Peer Live Video Streaming
NASA Astrophysics Data System (ADS)
Faruq Ibn Ibrahimy, Abdullah; Rafiqul, Islam Md; Anwar, Farhat; Ibn Ibrahimy, Muhammad
2013-12-01
Live video data is usually streamed over a tree-based or a mesh-based overlay network. In case of the departure of a peer with additional upload bandwidth, the overlay network becomes very vulnerable to churn. In this paper, a two-dimensional array-based overlay network is proposed for streaming live video data. As there is always a peer or a live video streaming server available to upload the live video stream data, the overlay network is very stable and very robust to churn. Peers are placed according to their upload and download bandwidth, which enhances load balance and performance. The overlay network utilizes the additional upload bandwidth of peers to minimize chunk delivery delay and to maximize load balance. The procedure used for distributing the additional upload bandwidth of the peers allocates it to heterogeneous-strength peers in a fair-treatment distribution approach and to homogeneous-strength peers in a uniform distribution approach. The proposed overlay network has been simulated with QualNet from Scalable Network Technologies, and the results are presented in this paper.
Robert, T; Chèze, L; Dumas, R; Verriest, J-P
2007-01-01
The joint forces and moments driving the motion of a human subject are classically computed by an inverse dynamic calculation. However, even if this process is theoretically simple, many sources of errors may lead to huge inaccuracies in the results. Moreover, a direct comparison with in vivo measured loads or with "gold standard" values from literature is only possible for very specific studies. Therefore, assessing the inaccuracy of inverse dynamic results is not a trivial problem, and a simple method is still required. This paper presents a simple method to evaluate both: (1) the consistency of the results obtained by inverse dynamics; (2) the influence of possible modifications in the inverse dynamic hypotheses. This technique concerns recursive calculation performed on full kinematic chains, and consists in evaluating the loads obtained by two different recursive strategies. It has been applied to complex 3D whole body movements of balance recovery. A recursive Newton-Euler procedure was used to compute the net joint loads. Two models were used to represent the subject bodies, considering or not the upper body as a unique rigid segment. The inertial parameters of the body segments were estimated from two different sets of scaling equations [De Leva, P., 1996. Adjustments to Zatsiorsky-Seluyanov's segment inertia parameters. Journal of Biomechanics 29, 1223-1230; Dumas, R., Chèze, L., Verriest, J.-P., 2006b. Adjustments to McConville et al. and Young et al. Body Segment Inertial Parameters. Journal of Biomechanics, in press]. Using this comparison technique, it has been shown that, for the balance recovery motions investigated: (1) the use of the scaling equations proposed by Dumas et al., instead of those proposed by De Leva, improves the consistency of the results (average relative influence up to 30% for the transversal moment); (2) the arm motions dynamically influence the recovery motion in a non-negligible way (average relative influence up to 15% and 30
Domain Decomposition and Load Balancing in the Amtran Neutron Transport Code
Compton, J; Clouse, C
2003-07-07
Effective spatial domain decomposition for discrete ordinate (Sn) neutron transport calculations has been critical for exploiting massively parallel architectures typified by the ASCI White computer at Lawrence Livermore National Laboratory. A combination of geometrical and computational constraints has posed a unique challenge as problems have been scaled up to several thousand processors. Carefully scripted decomposition and corresponding execution algorithms have been developed to handle a range of geometrical and hardware configurations.
NASA Technical Reports Server (NTRS)
Woods, Claudia M.; Brewe, David E.
1988-01-01
A numerical solution to a theoretical model of vapor cavitation in a dynamically loaded journal bearing is developed utilizing a multigrid iteration technique. The method is compared with a noniterative approach in terms of computational time and accuracy. The computational model is based on the Elrod algorithm, a control volume approach to the Reynolds equation which mimics the Jakobsson-Floberg and Olsson cavitation theory. Besides accounting for a moving cavitation boundary and conservation of mass at the boundary, it also conserves mass within the cavitated region via a smeared mass or striated flow extending to both surfaces in the film gap. The mixed nature of the equations (parabolic in the full film zone and hyperbolic in the cavitated zone) coupled with the dynamic aspects of the problem create interesting difficulties for the present solution approach. Emphasis is placed on the methods found to eliminate solution instabilities. Excellent results are obtained for both accuracy and reduction of computational time.
NASA Astrophysics Data System (ADS)
Koloch, Grzegorz; Kaminski, Bogumil
2010-10-01
In this paper we examine a modification of the classical Vehicle Routing Problem (VRP) in which the shapes of the transported cargo are accounted for. This problem, known as the three-dimensional VRP with loading constraints (3D-VRP), is appropriate when the transported commodities are not perfectly divisible but have fixed and heterogeneous dimensions. Restrictions on allowable cargo positionings are also considered. These restrictions are derived from business practice, and they extend the baseline 3D-VRP formulation considered by Koloch and Kaminski (2010). In particular, we investigate how the additional restrictions influence the relative performance of two proposed optimization algorithms: the nested and the joint one. The performance of both methods is compared on artificial problems and on a large-scale real-life case study.
NASA Astrophysics Data System (ADS)
Kumar, Vijay M.; Murthy, ANN; Chandrashekara, K.
2012-05-01
The production planning problem of a flexible manufacturing system (FMS) concerns decisions that have to be made before an FMS begins to produce parts according to a given production plan during an upcoming planning horizon. The main aspect of production planning is the machine loading problem, in which a subset of jobs to be manufactured is selected and their operations are assigned to the relevant machines. Such problems are not only combinatorial optimization problems but also NP-hard, making it difficult to obtain satisfactory solutions using traditional optimization techniques. In this paper, an attempt has been made to address the machine loading problem with the objectives of minimizing system unbalance and maximizing throughput simultaneously, while satisfying the system constraints related to available machining time and tool slots, using a meta-hybrid heuristic technique based on genetic algorithm and particle swarm optimization. The results reported in this paper demonstrate the model's efficiency and examine the performance of the system with respect to measures such as throughput and system utilization.
Optimization of Boiling Water Reactor Loading Pattern Using Two-Stage Genetic Algorithm
Kobayashi, Yoko; Aiyoshi, Eitaro
2002-10-15
A new two-stage optimization method based on genetic algorithms (GAs) using an if-then heuristic rule was developed to generate optimized boiling water reactor (BWR) loading patterns (LPs). In the first stage, the LP is optimized using an improved GA operator. In the second stage, an exposure-dependent control rod pattern (CRP) is sought using GA with an if-then heuristic rule. The procedure of the improved GA is based on deterministic operators that consist of crossover, mutation, and selection. The handling of the encoding technique and constraint conditions by that GA reflects the peculiar characteristics of the BWR. In addition, strategies such as elitism and self-reproduction are effectively used in order to improve the search speed. The LP evaluations were performed with a three-dimensional diffusion code that coupled neutronic and thermal-hydraulic models. Strong axial heterogeneities and constraints dependent on three dimensions have always necessitated the use of three-dimensional core simulators for BWRs, so that optimization of computational efficiency is required. The proposed algorithm is demonstrated by successfully generating LPs for an actual BWR plant in two phases. One phase is only LP optimization applying the Haling technique. The other phase is an LP optimization that considers the CRP during reactor operation. In test calculations, candidates that shuffled fresh and burned fuel assemblies within a reasonable computation time were obtained.
Balancing rotor speed regulation and drive train loads of floating wind turbines
NASA Astrophysics Data System (ADS)
Fischer, Boris; Loepelmann, Peter
2016-09-01
The interaction of the blade pitch controller with structural motion is particularly important for wind turbines mounted on floating platforms. A controls-based approach to overcoming the related technical challenges is to feed back the nacelle's motion to the demanded generator torque. This work aims to further improve this approach by feeding back only a narrow fraction of the available frequency range. Simulations show that, in doing so, unrealistically high torque magnitudes are avoided and a better trade-off between rotor speed regulation and drive train loads is achieved.
NASA Astrophysics Data System (ADS)
Dowdell, David C.; Matthews, G. Peter; Wells, Ian
Two globally averaged mass balance models have been developed to investigate the sensitivity and future level of atmospheric chlorine and bromine as a result of the emission of 14 chloro- and 3 bromo-carbons. The models use production, growth, lifetime and concentration data for each of the halocarbons and divide the production into one of eight uses, these being aerosol propellants, cleaning agents, blowing agents in open and closed cell foams, non-hermetic and hermetic refrigeration, fire retardants and a residual "other" category. Each use category has an associated emission profile which is built into the models to take into account the proportion of halocarbon retained in equipment for a characteristic period of time before its release. Under the Montreal Protocol 3 requirements, a peak chlorine loading of 3.8 ppb is attained in 1994, which does not reduce to 2.0 ppb (the approximate level of atmospheric chlorine when the ozone hole formed) until 2053. The peak bromine loading is 22 ppt, also in 1994, which decays to 12 ppt by the end of next century. The models have been used to (i) compare the effectiveness of Montreal Protocols 1, 2 and 3 in removing chlorine from the atmosphere, (ii) assess the influence of the delayed emission assumptions used in these models compared to immediate emission assumptions used in previous models, (iii) assess the relative effect on the chlorine loading of a tightening of the Montreal Protocol 3 restrictions, and (iv) calculate the influence of chlorine and bromine chemistry as well as the faster phase out of man-made methyl bromide on the bromine loading.
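The mass-balance idea behind such models reduces, in its simplest form, to a one-box budget dC/dt = E(t) - C/tau for each species, with emissions E and atmospheric lifetime tau. The sketch below is only this one-box caricature with invented numbers, not the paper's 17-halocarbon inventory with use-category emission profiles:

```python
# One-box atmospheric burden model: Euler-step dC/dt = E - C/tau.
def integrate_burden(emissions, tau, dt=1.0, c0=0.0):
    """emissions: list of yearly emission rates; returns yearly burdens."""
    c, out = c0, []
    for e in emissions:
        c += dt * (e - c / tau)   # source minus first-order loss
        out.append(c)
    return out

# 20 years of constant emission, then an abrupt phase-out (emission = 0).
burden = integrate_burden([10.0] * 20 + [0.0] * 20, tau=50.0)
print(round(max(burden), 1), round(burden[-1], 1))  # peaks at phase-out, then decays
```

The burden peaks exactly when emissions stop and then decays on the timescale tau, which is why the paper's chlorine loading peaks in 1994 and takes decades to fall back to 2.0 ppb.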
NASA Astrophysics Data System (ADS)
Xie, H.; Hendrickx, J.; Kurc, S.; Small, E.
2002-12-01
Evapotranspiration (ET) is one of the most important components of the water balance, but also one of the most difficult to measure. Field techniques such as soil water balances and Bowen ratio or eddy covariance techniques are local, ranging from point to field scale. SEBAL (Surface Energy Balance Algorithm for Land) is an image-processing model that calculates ET and other energy exchanges at the earth's surface. SEBAL uses satellite image data (TM/ETM+, MODIS, AVHRR, ASTER, and so on) measuring visible, near-infrared, and thermal infrared radiation. SEBAL algorithms predict a complete radiation and energy balance for the surface along with fluxes of sensible heat and aerodynamic surface roughness (Bastiaanssen et al., 1998; Allen et al., 2001). We are constructing a GIS-based database that includes spatially distributed estimates of ET from remotely sensed data at a resolution of about 30 m. The SEBAL code will be optimized for this region via comparison with surface-based observations of ET, reference ET (from windspeed, solar radiation, humidity, air temperature, and rainfall records), surface temperature, albedo, and so on. The observed data are collected at a series of towers in the middle Rio Grande Basin. The satellite image provides the instantaneous ET (ET_inst) only; therefore, estimating 24-hour ET (ET_24) requires some assumptions. Two of these assumptions will be evaluated for the study area: (1) that the instantaneous evaporative fraction (EF) is equal to the 24-hour averaged value, and (2) that the instantaneous ETrF (analogous to a crop coefficient, and equal to instantaneous ET divided by instantaneous reference ET) is equal to the 24-hour averaged value. Seasonal ET will be estimated by expanding the 24-hour ET proportionally to a reference ET derived from weather data. References: Bastiaanssen, W.G.M., M. Menenti, R.A. Feddes, and A.A.M. Holtslag, 1998, A remote sensing surface energy balance algorithm for land (SEBAL): 1
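The two extrapolation assumptions described in the abstract above can be written as one-line scalings. The sketch below is illustrative only: the function and parameter names are hypothetical, and the daily available energy is assumed to be supplied already converted to an evaporation-equivalent depth (mm/day).

```python
def daily_et_from_ef(le_inst, rn_inst, g_inst, rn24_mm):
    """Assumption (1): the instantaneous evaporative fraction
    EF = LE / (Rn - G) at overpass time holds for the whole day."""
    ef = le_inst / (rn_inst - g_inst)
    return ef * rn24_mm            # ET_24 in mm/day

def daily_et_from_etrf(et_inst, etr_inst, etr24_mm):
    """Assumption (2): the instantaneous ETrF (ET divided by reference ET,
    analogous to a crop coefficient) holds for the whole day."""
    etrf = et_inst / etr_inst
    return etrf * etr24_mm         # ET_24 in mm/day
```

Seasonal ET then follows by scaling ET_24 with the ratio of seasonal to daily reference ET from weather records.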
NASA Astrophysics Data System (ADS)
Chen, M.; Senay, G. B.; Verdin, J. P.; Rowland, J.
2014-12-01
Current regional to global and daily to annual evapotranspiration (ET) estimation mainly relies on surface energy balance (SEB) ET models or statistical empirical methods driven by remote sensing data and various meteorology databases. However, these ET models face challenging issues: large uncertainties from inevitable input errors, poorly defined parameters, and inadequate model structures. The eddy covariance measurements of water, energy, and carbon fluxes at globally available FLUXNET tower sites provide a feasible opportunity to assess ET modelling uncertainties. In this study, we focused on uncertainty analysis of an operational Simplified Surface Energy Balance (SSEBop) algorithm for ET estimation at multiple AmeriFlux tower sites with diverse land cover characteristics and climatic conditions. The input land surface temperature (LST) data for the algorithm were adopted from the 8-day composite 1-km Moderate Resolution Imaging Spectroradiometer (MODIS) land surface temperature product. The other input data were taken from the AmeriFlux database. Results of statistical analysis indicated that uncertainties or random errors from input variables and parameters of SSEBop led to daily and seasonal ET estimates with relative errors around 20% across multiple flux tower sites distributed across different biomes. This uncertainty of SSEBop lies in the 20-30% error range of similar SEB-based ET algorithms, such as the Surface Energy Balance System and the Surface Energy Balance Algorithm for Land. The R2 between daily and seasonal ET estimates by SSEBop and eddy covariance ET measurements at multiple AmeriFlux tower sites exceeds 0.7, and reaches 0.9 for croplands, grasslands, and forests, suggesting that the systematic error or bias of SSEBop is acceptable. In summary, the uncertainty assessment verifies that SSEBop is a reliable method for wide-area ET calculation and especially useful for detecting drought years and relative drought severity for agricultural production
Effects of nutrient loading on the carbon balance of coastal wetland sediments
Morris, J.T.; Bradley, P.M.
1999-01-01
Results of a 12-yr study in an oligotrophic South Carolina salt marsh demonstrate that soil respiration increased by 795 g C m-2 yr-1 and that carbon inventories decreased in sediments fertilized with nitrogen and phosphorus. Fertilized plots became net sources of carbon to the atmosphere, and sediment respiration continues in these plots at an accelerated pace. After 12 yr of treatment, soil macroorganic matter in the top 5 cm of sediment was 475 g C m-2 lower in fertilized plots than in controls, which is equivalent to a constant loss rate of 40 g C m-2 yr-1. It is not known whether soil carbon in fertilized plots has reached a new equilibrium or continues to decline. The increase in soil respiration in the fertilized plots was far greater than the loss of sediment organic matter, which indicates that the increase in soil respiration was largely due to an increase in primary production. Sediment respiration in laboratory incubations also demonstrated positive effects of nutrients. Thus, the results indicate that increased nutrient loading of oligotrophic wetlands can lead to an increased rate of sediment carbon turnover and a net loss of carbon from sediments.
NASA Technical Reports Server (NTRS)
Brooks, D. R.; Harrison, E. F.; Minnis, P.; Suttles, J. T.; Kandel, R. S.
1986-01-01
A brief description is given of how temporal and spatial variability in the earth's radiative behavior influences the goals of satellite radiation monitoring systems and how some previous systems have addressed the existing problems. Then, results of some simulations of radiation budget monitoring missions are presented. These studies led to the design of the Earth Radiation Budget Experiment (ERBE). A description is given of the temporal and spatial averaging algorithms developed for the ERBE data analysis. These algorithms are intended primarily to produce monthly averages of the net radiant exitance on regional, zonal, and global scales and to provide insight into the regional diurnal variability of radiative parameters such as albedo and long-wave radiant exitance. The algorithms are applied to scanner and nonscanner data for up to three satellites. Modeling of daily shortwave albedo and radiant exitance with satellite sampling that is insufficient to fully account for changing meteorology is discussed in detail. Studies performed during the ERBE mission and software design are reviewed. These studies provide quantitative estimates of the effects of temporally sparse and biased sampling on inferred diurnal and regional radiative parameters. Other topics covered include long-wave diurnal modeling, extraction of a regional monthly net clear-sky radiation budget, the statistical significance of observed diurnal variability, quality control of the analysis, and proposals for validating the results of ERBE time and space averaging.
Analysis of Sting Balance Calibration Data Using Optimized Regression Models
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Bader, Jon B.
2010-01-01
Calibration data of a wind tunnel sting balance were processed using a candidate math model search algorithm that recommends an optimized regression model for the data analysis. During the calibration the normal force and the moment at the balance moment center were selected as independent calibration variables. The sting balance itself had two moment gages. Therefore, after analyzing the connection between calibration loads and gage outputs, it was decided to choose the difference and the sum of the gage outputs as the two responses that best describe the behavior of the balance. The math model search algorithm was applied to these two responses. An optimized regression model was obtained for each response. Classical strain gage balance load transformations and the equations of the deflection of a cantilever beam under load are used to show that the search algorithm's two optimized regression models are supported by a theoretical analysis of the relationship between the applied calibration loads and the measured gage outputs. The analysis of the sting balance calibration data set is a rare example of a situation when terms of a regression model of a balance can directly be derived from first principles of physics. In addition, it is interesting to note that the search algorithm recommended the correct regression model term combinations using only a set of statistical quality metrics that were applied to the experimental data during the algorithm's term selection process.
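The choice of the sum and difference of the gage outputs as responses can be illustrated with a minimal synthetic example. The sensitivity coefficients below are made up for illustration and the regression is a plain least-squares fit, not the paper's candidate math model search; the sketch only shows why the sum tracks the normal force and the difference tracks the moment.

```python
import numpy as np

# Synthetic calibration: two moment gages respond linearly to the applied
# normal force N and pitching moment m (coefficients are illustrative).
rng = np.random.default_rng(0)
N = rng.uniform(-100.0, 100.0, 50)   # applied normal force
m = rng.uniform(-50.0, 50.0, 50)     # applied moment at the moment center
g1 = 0.8 * N + 1.5 * m               # forward gage output
g2 = 0.8 * N - 1.5 * m               # aft gage output

# Responses chosen as in the abstract: sum and difference of the gages.
r_sum, r_diff = g1 + g2, g1 - g2

# Fit each response against the candidate regressors [1, N, m].
X = np.column_stack([np.ones_like(N), N, m])
coef_sum, *_ = np.linalg.lstsq(X, r_sum, rcond=None)
coef_diff, *_ = np.linalg.lstsq(X, r_diff, rcond=None)
# coef_sum picks up only the N term; coef_diff only the m term,
# mirroring how the search algorithm isolates the physically meaningful terms.
```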
Shabani, Hamed; Vahidi, Behrooz; Ebrahimpour, Majid
2013-01-01
A new PID controller for resistant differential control against load disturbance is introduced that can be used for load frequency control (LFC) applications. Parameters of the controller have been specified by using the imperialist competitive algorithm (ICA). Load disturbance, which is due to continuous and rapid changes of small loads, is always a problem for load frequency control of power systems. This paper introduces a new method to overcome this problem that is based on a filtering technique which eliminates the effect of this kind of disturbance. The objective is frequency regulation in each area of the power system and a decrease of power transfer between control areas, so the parameters of the proposed controller have been specified over a wide range of load changes by means of ICA to achieve the best dynamic frequency response. To evaluate the effectiveness of the proposed controller, a three-area power system is simulated in MATLAB/SIMULINK. Each area has different generation units, so it utilizes controllers with different parameters. Finally, a comparison between the proposed controller and two other prevalent PI controllers, optimized by GA and neural networks, has been done, which demonstrates the advantages of this controller over the others.
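The filtering idea described above can be sketched as a PID controller whose measured frequency deviation is first passed through a low-pass filter, so rapid small-load disturbances are attenuated before they reach the control terms. This is only a conceptual sketch: the class name, gains, and filter weight are illustrative, not the ICA-tuned values from the paper.

```python
class FilteredPID:
    """Discrete PID acting on a low-pass-filtered frequency deviation.

    alpha in (0, 1] is the filter weight: small alpha rejects rapid
    small-load disturbance; alpha = 1 disables the filter (plain PID).
    """

    def __init__(self, kp, ki, kd, alpha, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.alpha, self.dt = alpha, dt
        self.filt = 0.0    # filtered frequency deviation
        self.integ = 0.0   # integral state
        self.prev = 0.0    # previous filtered value (for the derivative)

    def step(self, delta_f):
        # First-order low-pass filter on the measured deviation.
        self.filt += self.alpha * (delta_f - self.filt)
        self.integ += self.filt * self.dt
        deriv = (self.filt - self.prev) / self.dt
        self.prev = self.filt
        # Negative feedback: raise generation when frequency drops.
        return -(self.kp * self.filt + self.ki * self.integ + self.kd * deriv)
```

With alpha = 1 the controller reduces to a conventional PID, which makes the effect of the disturbance filter easy to compare in simulation.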
Zamani, Abbasali; Barakati, S Masoud; Yousofi-Darmian, Saeed
2016-09-01
Load-frequency control is one of the most important issues in power system operation. In this paper, a Fractional Order PID (FOPID) controller based on Gases Brownian Motion Optimization (GBMO) is used in order to mitigate frequency and exchanged-power deviation in a two-area power system while considering the governor saturation limit. In a FOPID controller, the derivative and integrator parts have non-integer orders, which must be determined by the designer. A FOPID controller has more flexibility than a PID controller. The GBMO algorithm is a recently introduced search method with suitable accuracy and convergence rate. Thus, this paper uses the advantages of the FOPID controller as well as the GBMO algorithm to solve the load-frequency control problem. However, the computational load is higher than that of conventional controllers due to the more complex design procedure. Also, a GBMO-based fuzzy controller is designed and analyzed in detail. The performance of the proposed controller in the time domain and its robustness are verified by comparison with other controllers, such as the GBMO-based fuzzy controller and the PI controller used for load-frequency control, in the face of model parameter variations.
NASA Astrophysics Data System (ADS)
Xiao, Dongyi
Scope and method of study. A systematic validation of the ASHRAE heat balance based residential cooling load calculation procedure (RHB) has been performed with inter-model comparison, analytical verification and experimental validation. The inter-model validation was performed using ESP-r as the reference model. The testing process was automated through parametric generation and simulation of large sets of test cases for both RHB and ESP-r. The house prototypes covered include a simple Shoebox prototype and a real 4-bedroom house prototype. An analytical verification test suite for building fabric models of whole building energy simulation programs has been developed. The test suite consists of a series of sixteen tests covering convection, conduction, solar irradiation, long-wave radiation, infiltration and ground-coupled floors. Using the test suite, a total of twelve analytical tests have been done with the RHB procedure. The experimental validation has been conducted using experimental data collected from a Cardinal Project house located in Fort Wayne, Indiana. During the diagnostic process of the experimental validation, comparisons have also been made between ESP-r simulation results and experimental data. Findings and conclusions. It is concluded that RHB is acceptable as a design tool for a typical North American house. Analytical tests confirmed the underlying mechanisms for modeling basic heat transfer phenomena in building fabric. The inter-model comparison showed that the differences found between RHB and ESP-r can be traced to the differences in sub-models used by RHB and ESP-r. It also showed that the RHB-designed systems can meet the design criteria and that the RHB temperature swing option is helpful in reducing system over-sizing. The experimental validation demonstrated that the systems designed with the method will have adequate size to meet the room temperatures specified in the design, whether or not swing is utilized. However, actual system
NASA Astrophysics Data System (ADS)
Tsuzuki, Satori; Aoki, Takayuki
2016-04-01
Numerical simulation of debris flows including countless objects is an important topic in fluid dynamics and many engineering applications. Particle-based methods are a promising approach to carrying out simulations of flows interacting with objects. In this paper, we propose an efficient method to realize a large-scale simulation of fluid-structure interaction by combining the SPH (Smoothed Particle Hydrodynamics) method for the fluid with the DEM (Discrete Element Method) for objects on a multi-GPU system. By applying space-filling curves to the decomposition of the computational domain, we are able to contain the same number of particles in each decomposed domain. In our implementation, several techniques for particle counting and data movement have been introduced. Fragmentation of the memory used for particles happens during the time integration, and the frequency of de-fragmentation is examined by taking account of the computational load balance and the communication cost between CPU and GPU. A linked-list technique for the particle interaction is introduced to save memory drastically. It is found that sorting the particle data for the neighboring-particle list using the linked-list method greatly improves memory access at a certain interval. The weak and strong scalabilities of an SPH simulation using 111 million particles were measured from 4 GPUs to 512 GPUs for three types of space-filling curves. A large-scale debris flow simulation of a tsunami with 10,368 floating rubbles using 117 million particles was successfully carried out with 256 GPUs on the TSUBAME 2.5 supercomputer at the Tokyo Institute of Technology.
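The load-balancing idea in the abstract above, ordering cells along a space-filling curve and cutting the curve into pieces with equal particle counts, can be sketched in a few lines. The sketch below uses a 2-D Z-order (Morton) curve as one example of such a curve; the function names are hypothetical and the real implementation works in 3-D on GPUs.

```python
def morton2d(x, y, bits=16):
    """Interleave the bits of integer cell coordinates (x, y) into a
    Z-order (Morton) key, which linearizes the 2-D grid along the curve."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)       # x bit -> even position
        key |= ((y >> i) & 1) << (2 * i + 1)   # y bit -> odd position
    return key

def decompose(cells, n_domains):
    """Sort cells along the curve and cut it into contiguous chunks with
    (nearly) equal counts, so each GPU holds the same number of cells."""
    ordered = sorted(cells, key=lambda c: morton2d(*c))
    size, rem = divmod(len(ordered), n_domains)
    out, start = [], 0
    for d in range(n_domains):
        stop = start + size + (1 if d < rem else 0)
        out.append(ordered[start:stop])
        start = stop
    return out
```

Because consecutive keys on the curve are spatially close, each chunk stays compact, which keeps the halo-exchange communication between GPUs small.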
ERIC Educational Resources Information Center
Ngu, Bing Hiong; Phan, Huy Phuong
2016-01-01
We examined the use of balance and inverse methods in equation solving. The main difference between the balance and inverse methods lies in the operational line (e.g. +2 on both sides vs -2 becomes +2). Differential element interactivity favours the inverse method because the interaction between elements occurs on both sides of the equation for…
Akbari, H.; Rainer, L.; Heinemeier, K.; Huang, J.; Franconi, E.
1993-01-01
The Southern California Edison Company (SCE) has conducted an extensive metering project in which electricity end use in 53 commercial buildings in Southern California has been measured. The building types monitored include offices, retail stores, groceries, restaurants, and warehouses. One year (June 1989 through May 1990) of the SCE measured hourly end-use data are reviewed in this report. Annual whole-building and end-use energy use intensities (EUIs) and monthly load shapes (LSs) have been calculated for the different building types based on the monitored data. This report compares the monitored buildings' EUIs and LSs to EUIs and LSs determined using whole-building load data and the End-Use Disaggregation Algorithm (EDA). Two sets of EDA-determined EUIs and LSs are compared to the monitored data values. The data sets represent: (1) average buildings in the SCE service territory and (2) specific buildings that were monitored.
NASA Technical Reports Server (NTRS)
Lansing, F. L.; Strain, D. M.; Chai, V. W.; Higgins, S.
1979-01-01
The Energy Consumption Computer Program was developed to simulate building heating and cooling loads and to compute thermal and electric energy consumption and cost. This article reports on the new additional algorithms and modifications made in an effort to widen the areas of application. The program structure was rewritten accordingly to refine and advance the building model and to further reduce the processing time and cost. The program is noted for its very low cost and ease of use compared to other available codes. The accuracy of computations is not sacrificed, however, since the results are expected to lie within ±10% of actual energy meter readings.
NASA Astrophysics Data System (ADS)
Rainieri, Carlo; Fabbrocino, Giovanni
2015-08-01
In the last few decades large research efforts have been devoted to the development of methods for automated detection of damage and degradation phenomena at an early stage. Modal-based damage detection techniques are well-established methods, whose effectiveness for Level 1 (existence) and Level 2 (location) damage detection is demonstrated by several studies. The indirect estimation of tensile loads in cables and tie-rods is another attractive application of vibration measurements. It provides interesting opportunities for cheap and fast quality checks in the construction phase, as well as for safety evaluations and structural maintenance over the structure lifespan. However, the lack of automated modal identification and tracking procedures has long been a relevant drawback to the extensive application of the above-mentioned techniques in the engineering practice. An increasing number of field applications of modal-based structural health and performance assessment are appearing after the development of several automated output-only modal identification procedures in the last few years. Nevertheless, additional efforts are still needed to enhance the robustness of automated modal identification algorithms, control the computational efforts and improve the reliability of modal parameter estimates (in particular, damping). This paper deals with an original algorithm for automated output-only modal parameter estimation. Particular emphasis is given to the extensive validation of the algorithm based on simulated and real datasets in view of continuous monitoring applications. The results point out that the algorithm is fairly robust and demonstrate its ability to provide accurate and precise estimates of the modal parameters, including damping ratios. As a result, it has been used to develop systems for vibration-based estimation of tensile loads in cables and tie-rods. Promising results have been achieved for non-destructive testing as well as continuous
Cho, Jae Heon; Lee, Jong Ho
2015-11-01
Manual calibration is common in rainfall-runoff model applications. However, rainfall-runoff models include several complicated parameters; thus, significant time and effort are required to manually calibrate the parameters individually and repeatedly. Automatic calibration has relative merit regarding time efficiency and objectivity but shortcomings regarding understanding indigenous processes in the basin. In this study, a watershed model calibration framework was developed using an influence coefficient algorithm and genetic algorithm (WMCIG) to automatically calibrate the distributed models. The optimization problem used to minimize the sum of squares of the normalized residuals of the observed and predicted values was solved using a genetic algorithm (GA). The final model parameters were determined from the iteration with the smallest sum of squares of the normalized residuals of all iterations. The WMCIG was applied to a Gomakwoncheon watershed located in an area that presents a total maximum daily load (TMDL) in Korea. The proportion of urbanized area in this watershed is low, and the diffuse pollution loads of nutrients such as phosphorus are greater than the point-source pollution loads because of the concentration of rainfall that occurs during the summer. The pollution discharges from the watershed were estimated for each land-use type, and the seasonal variations of the pollution loads were analyzed. Consecutive flow measurement gauges have not been installed in this area, and it is difficult to survey the flow and water quality in this area during the frequent heavy rainfall that occurs during the wet season. The Hydrological Simulation Program-Fortran (HSPF) model was used to calculate the runoff flow and water quality in this basin. Using the water quality results, a load duration curve was constructed for the basin, the exceedance frequency of the water quality standard was calculated for each hydrologic condition class, and the percent reduction
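The optimization step described above, minimizing the sum of squares of the normalized residuals of observed and predicted values with a genetic algorithm, can be sketched generically. This is not the WMCIG or HSPF itself: the GA below is a minimal real-coded variant (tournament selection, blend crossover, single-gene Gaussian mutation) with made-up settings, and `simulate` stands in for the watershed model.

```python
import random

def objective(params, observed, simulate):
    """Sum of squares of normalized residuals, the criterion minimized above."""
    predicted = simulate(params)
    return sum(((o - p) / o) ** 2 for o, p in zip(observed, predicted))

def calibrate(observed, simulate, bounds, pop_size=30, gens=60, seed=1):
    """Minimal real-coded GA over the parameter box `bounds`."""
    rng = random.Random(seed)
    fit = lambda ind: objective(ind, observed, simulate)
    popn = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = min(popn, key=fit)
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            p1 = min(rng.sample(popn, 2), key=fit)   # tournament selection
            p2 = min(rng.sample(popn, 2), key=fit)
            child = [(a + b) / 2 for a, b in zip(p1, p2)]  # blend crossover
            j = rng.randrange(len(child))                  # mutate one gene
            lo, hi = bounds[j]
            child[j] = min(hi, max(lo, child[j] + rng.gauss(0, 0.1 * (hi - lo))))
            nxt.append(child)
        popn = nxt
        best = min(popn + [best], key=fit)    # keep the best-so-far solution
    return best
```

In the real framework each objective evaluation is a full HSPF run, which is why keeping the population and generation counts modest matters.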
Bahat, Oded; Sullivan, Richard M
2010-05-01
Immediate loading of dental implants has become a widely reported practice with success rates ranging from 70.8% to 100%. Although most studies have considered implant survival to be the only measure of success, a better definition includes the long-term stability of the hard and soft tissues around the implant(s) and other adjacent structures, as well as the long-term stability of all the restorative components. The parameters identified in 1981 by Albrektsson and colleagues as influencing the establishment and maintenance of osseointegration have been reconsidered in relation to immediate loading to improve the chances of achieving such success. Two of the six parameters (status of the bone/implant site and implant loading conditions) have preoperative diagnostic implications, whereas three (implant design, surgical technique, and implant finish) may compensate for less-than-ideal site and loading conditions. Factors affecting the outcome of immediate loading are reviewed to assist clinicians attempting to assess its risks and benefits.
Distributed Load Shedding over Directed Communication Networks with Time Delays
Yang, Tao; Wu, Di
2016-07-25
When generation is insufficient to support all loads under emergencies, effective and efficient load shedding needs to be deployed in order to maintain the supply-demand balance. This paper presents a distributed load shedding algorithm, which makes efficient decisions based on the discovered global information. In the global information discovery process, each load communicates only with its neighboring loads via directed communication links, possibly with arbitrarily large but bounded time-varying communication delays. We propose a novel distributed information discovery algorithm based on ratio consensus. Simulation results are used to validate the proposed method.
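Ratio consensus, the building block named above, can be sketched in its basic delay-free synchronous form: each node keeps a numerator and a denominator, splits both equally among itself and its out-neighbors each round, and the per-node ratio converges to the network-wide average. The function name and graph encoding are illustrative; the paper's algorithm additionally tolerates the bounded time-varying delays.

```python
def ratio_consensus(values, out_neighbors, iters=200):
    """Ratio consensus over a directed graph.

    values        -- initial numerator at each node (e.g. local mismatch)
    out_neighbors -- out_neighbors[i] lists the nodes i sends to
    Returns the per-node ratio y_i / z_i, which converges to the global
    average of `values` when the graph is strongly connected.
    """
    n = len(values)
    y = [float(v) for v in values]   # numerators
    z = [1.0] * n                    # denominators (weights)
    for _ in range(iters):
        ny, nz = [0.0] * n, [0.0] * n
        for i in range(n):
            # Split both states equally among self and out-neighbors,
            # which makes the update matrix column-stochastic.
            share = 1.0 / (len(out_neighbors[i]) + 1)
            for j in list(out_neighbors[i]) + [i]:
                ny[j] += y[i] * share
                nz[j] += z[i] * share
        y, z = ny, nz
    return [yi / zi for yi, zi in zip(y, z)]
```

Once every load knows the average mismatch, it also knows the total (average times the number of loads) and can compute its own shedding decision locally.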
Budde, R A; Crenshaw, T D
2003-01-01
The effects of chronic dietary acid loads on shifts in bone mineral reserves and physiological concentrations of cations and anions in extracellular fluids were assessed in growing swine. Four trials were conducted with a total of 38 (8.16 +/- 0.30 kg, mean +/- SEM) Large White x Landrace x Duroc pigs randomly assigned to one of three dietary treatments. Semipurified diets, fed for 13 to 17 d, provided an analyzed dietary electrolyte balance (dEB, meq/kg diet = Na+ + K+ - Cl-) of -35, 112, and 212 for the acidogenic, control, and alkalinogenic diets, respectively. Growth performance, arterial blood gas, serum chemistry, urine pH, mineral balance, bone mineral content gain, bone-breaking strength, bone ash, and percentage of bone ash were determined. Dietary treatments created a range of metabolic acid loads without affecting (P > 0.10) growth or feed intake. Urine pH was 5.71, 6.02, and 7.65 +/- 0.48 (mean +/- SEM) and arterial blood pH was 7.478, 7.485, and 7.526 +/- 0.006 for pigs fed acidogenic, control, and alkalinogenic treatments, respectively. A lower dEB resulted in an increased (P < 0.001) apparent Cl- retention (106.6, 55.4, and 41.2 +/- 6.3 meq/d), of which only 1.6% was accounted for by expansion of the extracellular fluid Cl- pool as calculated from serum Cl- (105.5, 103.4, and 101.6 +/- 0.94 meq/L, mean +/- SEM, for pigs fed acidogenic, control, and alkalinogenic treatments, respectively). A lower dEB did not decrease (P > 0.10) bone mineral content gain, bone-breaking strength, bone ash, percentage of bone ash, or calcium and phosphate balance. In conclusion, bone mineral (phosphate) was not depleted to buffer the dietary acid load in growing pigs over a 3-wk period.
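The dEB formula quoted in the abstract above (dEB = Na+ + K+ - Cl-, in meq/kg diet) is a straightforward unit conversion from analyzed mineral content. The helper below is illustrative only, not from the paper; it assumes dietary content given as a percentage of the diet and standard atomic weights as the equivalent weights of these monovalent ions.

```python
# Equivalent weights (mg/meq) of the monovalent ions, taken as the
# standard atomic weights since each carries a single charge.
EQ_WT = {"Na": 23.0, "K": 39.1, "Cl": 35.45}

def deb_meq_per_kg(na_pct, k_pct, cl_pct):
    """Dietary electrolyte balance dEB = Na+ + K+ - Cl- in meq/kg diet,
    from dietary mineral content expressed as % of the diet."""
    def to_meq(pct, ion):
        mg_per_kg = pct / 100.0 * 1e6        # % of diet -> mg per kg diet
        return mg_per_kg / EQ_WT[ion]        # mg/kg -> meq/kg
    return to_meq(na_pct, "Na") + to_meq(k_pct, "K") - to_meq(cl_pct, "Cl")
```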
NASA Technical Reports Server (NTRS)
Schuster, David M.; Panda, Jayanta; Ross, James C.; Roozeboom, Nettie H.; Burnside, Nathan J.; Ngo, Christina L.; Kumagai, Hiro; Sellers, Marvin; Powell, Jessica M.; Sekula, Martin K.; Piatak, David J.
2016-01-01
This NESC assessment examined the accuracy of estimating buffet loads on in-line launch vehicles without booster attachments using sparse unsteady pressure measurements. The buffet loads computed using sparse sensor data were compared with estimates derived using measurements with much higher spatial resolution. The current method for estimating launch vehicle buffet loads is through wind tunnel testing of models with approximately 400 unsteady pressure transducers. Even with this relatively large number of sensors, the coverage can be insufficient to provide reliable integrated unsteady loads on vehicles. In general, sparse sensor spacing requires the use of coherence-length-based corrections in the azimuthal and axial directions to integrate the unsteady pressures and obtain reasonable estimates of the buffet loads. Coherence corrections have been used to estimate buffet loads for a variety of launch vehicles with the assumption that the methodology results in reasonably conservative loads. For the Space Launch System (SLS), the first estimates of buffet loads exceeded the limits of the vehicle structure, so additional tests with higher sensor density were conducted to better define the buffet loads and possibly avoid expensive modifications to the vehicle design. Without the additional tests and improvements to the coherence-length analysis methods, there would have been significant impacts to the vehicle weight, cost, and schedule. If the load estimates turn out to be too low, there is significant risk of structural failure of the vehicle. This assessment used a combination of unsteady pressure-sensitive paint (uPSP), unsteady pressure transducers, and a dynamic force and moment balance to investigate the integration schemes used with limited unsteady pressure data by comparing them with direct integration of extremely dense fluctuating pressure measurements. An outcome of the assessment was to evaluate the potential of using the emerging uPSP technique in a production
A distributed scheduling algorithm for heterogeneous real-time systems
NASA Technical Reports Server (NTRS)
Zeineldine, Osman; El-Toweissy, Mohamed; Mukkamala, Ravi
1991-01-01
Much of the previous work on load balancing and scheduling in distributed environments was concerned with homogeneous systems and homogeneous loads. Several of the results indicated that random policies are as effective as other more complex load allocation policies. The effects of heterogeneity on scheduling algorithms for hard real-time systems are examined. A distributed scheduler designed specifically to handle heterogeneities in both nodes and node traffic is proposed. The performance of the algorithm is measured in terms of the percentage of jobs discarded. While a random task allocation is very sensitive to heterogeneities, the algorithm is shown to be robust to such non-uniformities in system components and load.
Optimal Load Control via Frequency Measurement and Neighborhood Area Communication
Zhao, CH; Topcu, U; Low, SH
2013-11-01
We propose a decentralized optimal load control scheme that provides contingency reserve in the presence of a sudden generation drop. The scheme takes advantage of the flexibility of frequency-responsive loads and neighborhood area communication to solve an optimal load control problem that balances load and generation while minimizing the end-use disutility of participating in load control. Local frequency measurements enable individual loads to estimate the total mismatch between load and generation. Neighborhood area communication helps mitigate the effects of inconsistencies in the local estimates due to frequency measurement noise. Case studies show that the proposed scheme can balance load with generation and restore the frequency within seconds after a generation drop, even when the loads use a highly simplified power system model in their algorithms. We also investigate tradeoffs between the amount of communication and the performance of the proposed scheme through simulation-based experiments.
2006-03-01
normal forces (Z) and pitching moments (m). The strain gauges have been glued onto the balance using a bonding material designated M-BOND 6003...compound designated M-coat C3, a solvent-thinned (naphtha) RTV (room-temperature-vulcanizing) silicone rubber. Care was taken to not use excessive...70º delta wing at high angles of attack and sideslip. Master's Thesis, Aeronautical Engineering Department, The Wichita State University, USA
NASA Astrophysics Data System (ADS)
Gharehbaghi, Sadjad; Khatibinia, Mohsen
2015-03-01
A reliable seismic-resistant design of structures is achieved in accordance with the seismic design codes by designing structures under seven or more pairs of earthquake records. Based on the recommendations of seismic design codes, the average time-history responses (ATHR) of the structure are required. This paper focuses on the optimal seismic design of reinforced concrete (RC) structures against ten earthquake records using a hybrid of the particle swarm optimization algorithm and an intelligent regression model (IRM). In order to reduce the computational time of the optimization procedure due to the computational effort of time-history analyses, the IRM is proposed to accurately predict the ATHR of structures. The proposed IRM consists of a combination of the subtractive algorithm (SA), the K-means clustering approach, and the wavelet weighted least squares support vector machine (WWLS-SVM). To predict the ATHR of structures, first, the input-output samples of structures are classified by SA and the K-means clustering approach. Then, WWLS-SVM is trained with few samples and high accuracy for each cluster. 9- and 18-storey RC frames are designed optimally to illustrate the effectiveness and practicality of the proposed IRM. The numerical results demonstrate the efficiency and computational advantages of the IRM for the optimal design of structures subjected to time-history earthquake loads.
NASA Astrophysics Data System (ADS)
Norton, C. G.; Petermann, M.; Fieback, T. M.
2017-04-01
Determination of mass increase or decrease of very small amplitude is a task which goes hand in hand with gravimetric adsorption and absorption measurement and thermogravimetry. Samples are subjected to various process conditions and as such can experience a change in mass, i.e. when adsorbing gas from the process atmosphere, or can decrease in mass, such as when being dried or when thermal decomposition takes place. Current instruments used for such analysis, especially at high pressures, are often based on magnetic suspension balances, and have a maximum mass resolution of a few 10⁻⁶ g. More often than not, this necessitates quite significant sample quantities, which can sometimes not easily be manufactured, e.g. in the case of metal organic framework adsorbents, or which in other cases do not have a sufficient specific surface area, resulting in a low measuring effect. A new apparatus based on a high resolution thermogravimetric analyser has been developed. This new apparatus combines very high resolution of up to a few 10⁻⁸ g with a relatively high sample mass of up to 1.5 g, whilst eliminating many of the disadvantages of the microbalances previously used in magnetic suspension balances. An interface was developed which permits free configuration of the new balance as top or bottom loading. Validation measurements of known adsorbents were subsequently performed, with sample quantities up to a factor of 174 smaller than in the literature.
Tom, Nathan M.; Madhi, Farshad; Yeung, Ronald W.
2016-06-24
The aim of this paper is to maximize the power-to-load ratio of the Berkeley Wedge: a one-degree-of-freedom, asymmetrical, energy-capturing, floating breakwater of high performance that is relatively free of viscosity effects. Linear hydrodynamic theory was used to calculate bounds on the expected time-averaged power (TAP) and corresponding surge restraining force, pitch restraining torque, and power take-off (PTO) control force when assuming that the heave motion of the wave energy converter remains sinusoidal. This particular device was documented to be an almost-perfect absorber if one-degree-of-freedom motion is maintained. The success of such or similar future wave energy converter technologies would require the development of control strategies that can adapt device performance to maximize energy generation in operational conditions while mitigating hydrodynamic loads in extreme waves to reduce the structural mass and overall cost. This paper formulates the optimal control problem to incorporate metrics that provide a measure of the surge restraining force, pitch restraining torque, and PTO control force. The optimizer must now handle an objective function with competing terms in an attempt to maximize power capture while minimizing structural and actuator loads. A penalty weight is placed on the surge restraining force, pitch restraining torque, and PTO actuation force, thereby allowing the control focus to be placed either on power absorption or load mitigation. Thus, in achieving these goals, a per-unit gain in TAP would not lead to a greater per-unit demand in structural strength, hence yielding a favorable benefit-to-cost ratio. Demonstrative results in the form of TAP, reactive TAP, and the amplitudes of the surge restraining force, pitch restraining torque, and PTO control force are shown for the Berkeley Wedge example.
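The competing terms described above can be summarized as a penalized objective. The weights and symbols below are illustrative notation only, not necessarily the paper's:

```latex
\max_{F_{\mathrm{pto}}(t)} \; J
  = \underbrace{\frac{1}{T}\int_{0}^{T} F_{\mathrm{pto}}(t)\,\dot{z}(t)\,\mathrm{d}t}_{\text{time-averaged power (TAP)}}
  \;-\; w_{1}\,\lvert F_{\mathrm{surge}}\rvert
  \;-\; w_{2}\,\lvert \tau_{\mathrm{pitch}}\rvert
  \;-\; w_{3}\,\lvert F_{\mathrm{pto}}\rvert
```

Setting the penalty weights to zero recovers pure power maximization; increasing them shifts the optimum toward load mitigation, which is the trade-off the optimizer must handle.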
NASA Astrophysics Data System (ADS)
Tuyen, Nguyen Duc; Fujita, Goro; Yokoyama, Ryuichi; Koyanagi, Kaoru; Funabashi, Toshihisa; Nomura, Masakatsu
Ever-increasing electricity consumption, progress in power deregulation, and rising public awareness of the environment have created more interest in fuel cell distributed generation. Among the different types of fuel cells, solid oxide fuel cells (SOFCs) show great application potential due to advantages such as low emissions, high efficiency, and high power rating. SOFC systems are also beneficial because, using internal reforming, they can efficiently convert fuels such as natural gas (mostly CH4), which is supplied by widespread distribution systems in many countries, into electricity. In practice, the load demand changes continually, and rapid thermal changes shorten fuel cell lifetime; lifetime may be extended by maintaining an appropriate operating temperature. It is therefore important to achieve good load-following performance as well as control of the operating temperature. This paper describes the components of a simple SOFC power unit model with a heat exchanger (HX) included. Typical dynamic submodels, implemented in Matlab-Simulink, are used to follow the variation of load demand at a local location while accounting for temperature characteristics.
Moll, Karin; Roces, Flavio; Federle, Walter
2010-07-01
Grass-cutting ants (Atta vollenweideri) carry leaf fragments several times heavier and longer than the workers themselves over considerable distances back to their nest. Workers transport fragments in an upright, slightly backwards-tilted position. To investigate how they maintain stability and control the carried fragment's position, we measured head and fragment positions from video recordings. Load-transporting ants often fell over, demonstrating the biomechanical difficulty of this behavior. Long fragments were carried at a significantly steeper angle than short fragments of the same mass. Workers did not hold fragments differently between the mandibles, but performed controlled up and down head movements at the neck joint. By attaching additional mass at the fragment's tip to load-carrying ants, we demonstrated that they are able to adjust the fragment angle. When we forced ants to transport loads across inclines, workers walking uphill carried fragments at a significantly steeper angle, and downhill at a shallower angle than ants walking horizontally. However, we observed similar head movements in unladen workers, indicating a generalized reaction to slopes that may have other functions in addition to maintaining stability. Our results underline the importance of proximate, biomechanical factors for the understanding of the foraging process in leaf-cutting ants.
Li, Bai; Chiong, Raymond; Lin, Mu
2015-02-01
Protein structure prediction is a fundamental issue in the field of computational molecular biology. In this paper, the AB off-lattice model is adopted to transform the original protein structure prediction scheme into a numerical optimization problem. We present a balance-evolution artificial bee colony (BE-ABC) algorithm to address the problem, with the aim of finding the structure for a given protein sequence with the minimal free-energy value. This is achieved through the use of convergence information during the optimization process to adaptively manipulate the search intensity. Besides that, an overall degradation procedure is introduced as part of the BE-ABC algorithm to prevent premature convergence. Comprehensive simulation experiments based on the well-known artificial Fibonacci sequence set and several real sequences from the database of Protein Data Bank have been carried out to compare the performance of BE-ABC against other algorithms. Our numerical results show that the BE-ABC algorithm is able to outperform many state-of-the-art approaches and can be effectively employed for protein structure optimization.
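For readers unfamiliar with the AB off-lattice model, the free-energy function being minimized can be sketched as follows (2-D variant, unit bond lengths). This is a generic rendering of the model's standard energy terms, not the paper's code:

```python
import numpy as np

def pair_coeff(a, b):
    """Species-dependent Lennard-Jones coefficient: A residues = 1, B residues = 0."""
    if a == 1 and b == 1:
        return 1.0      # A-A: strong attraction
    if a == 0 and b == 0:
        return 0.5      # B-B: weaker attraction
    return -0.5         # A-B: weak repulsion

def ab_energy(angles, seq):
    """Free energy of a chain with unit bonds; `angles` are the n-2 bend angles."""
    n = len(seq)
    # Build 2-D coordinates from bend angles
    pos = np.zeros((n, 2))
    heading = 0.0
    pos[1] = (1.0, 0.0)
    for i in range(2, n):
        heading += angles[i - 2]
        pos[i] = pos[i - 1] + (np.cos(heading), np.sin(heading))
    # Backbone bending term
    bend = sum(0.25 * (1 - np.cos(a)) for a in angles)
    # Non-bonded Lennard-Jones-like term over residue pairs
    lj = 0.0
    for i in range(n - 2):
        for j in range(i + 2, n):
            r = np.linalg.norm(pos[i] - pos[j])
            lj += 4 * (r ** -12 - pair_coeff(seq[i], seq[j]) * r ** -6)
    return bend + lj
```

An optimizer such as BE-ABC would search over the vector of bend angles, calling a function like `ab_energy` as its fitness evaluation.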
Vandamme, Elke; Wissuwa, Matthias; Rose, Terry; Dieng, Ibnou; Drame, Khady N; Fofana, Mamadou; Senthilkumar, Kalimuthu; Venuprasad, Ramaiah; Jallow, Demba; Segda, Zacharie; Suriyagoda, Lalith; Sirisena, Dinarathna; Kato, Yoichiro; Saito, Kazuki
2016-01-01
More than 60% of phosphorus (P) taken up by rice (Oryza spp.) is accumulated in the grains at harvest and hence exported from fields, leading to a continuous removal of P. If P removed from fields is not replaced by P inputs then soil P stocks decline, with consequences for subsequent crops. Breeding rice genotypes with a low concentration of P in the grains could be a strategy to reduce maintenance fertilizer needs and slow soil P depletion in low input systems. This study aimed to assess variation in grain P concentrations among rice genotypes across diverse environments and evaluate the implications for field P balances at various grain yield levels. Multi-location screening experiments were conducted at different sites across Africa and Asia and yield components and grain P concentrations were determined at harvest. Genotypic variation in grain P concentration was evaluated while considering differences in P supply and grain yield using cluster analysis to group environments and boundary line analysis to determine minimum grain P concentrations at various yield levels. Average grain P concentrations across genotypes varied almost 3-fold among environments, from 1.4 to 3.9 mg g(-1). Minimum grain P concentrations associated with grain yields of 150, 300, and 500 g m(-2) varied between 1.2 and 1.7, 1.3 and 1.8, and 1.7 and 2.2 mg g(-1) among genotypes respectively. Two genotypes, Santhi Sufaid and DJ123, were identified as potential donors for breeding for low grain P concentration. Improvements in P balances that could be achieved by exploiting this genotypic variation are in the range of less than 0.10 g P m(-2) (1 kg P ha(-1)) in low yielding systems, and 0.15-0.50 g P m(-2) (1.5-5.0 kg P ha(-1)) in higher yielding systems. Improved crop management and alternative breeding approaches may be required to achieve larger reductions in grain P concentrations in rice.
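The field P balance arithmetic behind these figures is simple: grain P export equals grain yield times grain P concentration, so at a fixed yield the export difference between genotypes gives the attainable improvement. A quick check against the abstract's own numbers:

```python
def grain_p_export(yield_g_m2, grain_p_mg_g):
    """P exported in grain, in g P per m^2 (g/m^2 × mg/g = mg/m^2, then ÷1000)."""
    return yield_g_m2 * grain_p_mg_g / 1000.0

# At a 500 g m^-2 yield: genotypes at 2.2 vs. 1.7 mg g^-1 minimum grain P
high = grain_p_export(500, 2.2)   # g P m^-2 exported by the high-P genotype
low = grain_p_export(500, 1.7)    # g P m^-2 exported by the low-P genotype
saving = high - low               # improvement in the field P balance
```

The computed saving of 0.25 g P m⁻² (2.5 kg P ha⁻¹, since 0.1 g m⁻² = 1 kg ha⁻¹) falls inside the 0.15-0.50 g P m⁻² range reported for higher yielding systems.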
Roghani, Tayebeh; Torkaman, Giti; Movasseghe, Shafieh; Hedayati, Mehdi; Goosheh, Babak; Bayat, Noushin
2013-02-01
The aim of this study is to evaluate the effect of submaximal aerobic exercise with and without external loading on bone metabolism and balance in postmenopausal women with osteoporosis (OP). Thirty-six volunteer, sedentary postmenopausal women with OP were randomly divided into three groups: aerobic, weighted vest, and control. Exercise for the aerobic group consisted of 18 sessions of submaximal treadmill walking, 30 min daily, 3 times a week. The exercise program for the weighted-vest group was identical to that of the aerobic group except that the subjects wore a weighted vest (4-8 % of body weight). Body composition, bone biomarkers, bone-specific alkaline phosphatase (BALP) and N-terminal telopeptide of type 1 collagen (NTX), and balance (near tandem stand, NTS, and star-excursion, SE) were measured before and after the 6-week exercise program. Fat decreased (p = 0.01) and fat-free mass increased (p = 0.005) significantly in the weighted-vest group. BALP increased and NTX decreased significantly in both exercise groups (p ≤ 0.05). After 6 weeks of exercise, NTS score increased in the exercise groups and decreased in the control group (aerobic: +49.68 %, weighted vest: +104.66 %, and control: -28.96 %). SE values for all directions increased significantly in the weighted-vest group. Results showed that the two exercise programs stimulate bone synthesis and decrease bone resorption in postmenopausal women with OP, but that exercise while wearing a weighted vest is better for improving balance.
NASA Astrophysics Data System (ADS)
Rabbani, Masoud; Montazeri, Mona; Farrokhi-Asl, Hamed; Rafiei, Hamed
2016-07-01
Mixed-model assembly lines are increasingly adopted in many industrial environments to meet the growing trend of greater product variability, diversification of customer demands, and shorter life cycles. In this research, a new mathematical model is presented that simultaneously considers balancing a mixed-model U-line and human-related issues. The objective function consists of two separate components. The first part is related to the balancing problem: minimizing the cycle time, minimizing the number of workstations, and maximizing line efficiency. The second part is related to human issues and consists of hiring cost, firing cost, training cost, and salary. To solve the presented model, two well-known multi-objective evolutionary algorithms, namely the non-dominated sorting genetic algorithm and multi-objective particle swarm optimization, have been used. A simple solution representation is provided in this paper to encode the solutions. Finally, the computational results are compared and analyzed.
Parallel algorithms for dynamically partitioning unstructured grids
Diniz, P.; Plimpton, S.; Hendrickson, B.; Leland, R.
1994-10-01
Grid partitioning is the method of choice for decomposing a wide variety of computational problems into naturally parallel pieces. In problems where computational load on the grid or the grid itself changes as the simulation progresses, the ability to repartition dynamically and in parallel is attractive for achieving higher performance. We describe three algorithms suitable for parallel dynamic load-balancing which attempt to partition unstructured grids so that computational load is balanced and communication is minimized. The execution time of algorithms and the quality of the partitions they generate are compared to results from serial partitioners for two large grids. The integration of the algorithms into a parallel particle simulation is also briefly discussed.
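As a point of reference for what "balancing computational load" means here, the classic longest-processing-time greedy heuristic assigns the heaviest grid cells first, each to the currently lightest processor. The sketch below uses hypothetical cell weights and deliberately ignores the communication-minimization half of the problem that the paper's partitioners also address:

```python
import heapq

def greedy_partition(weights, nparts):
    """LPT heuristic: heaviest cells first, each onto the lightest partition."""
    heap = [(0.0, p) for p in range(nparts)]  # (current load, partition id)
    heapq.heapify(heap)
    assignment = {}
    for cell, w in sorted(weights.items(), key=lambda kv: -kv[1]):
        load, p = heapq.heappop(heap)
        assignment[cell] = p
        heapq.heappush(heap, (load + w, p))
    return assignment

# Hypothetical per-cell computational work
weights = {c: w for c, w in enumerate([5, 4, 3, 3, 2, 2, 1])}
parts = greedy_partition(weights, 2)
loads = [sum(w for c, w in weights.items() if parts[c] == p) for p in range(2)]
```

Dynamic repartitioning then amounts to rerunning such a step (incrementally, and in parallel) whenever the per-cell weights drift as the simulation progresses.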
Analysis of Sting Balance Calibration Data Using Optimized Regression Models
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert; Bader, Jon B.
2009-01-01
Calibration data of a wind tunnel sting balance were processed using a search algorithm that identifies an optimized regression model for the data analysis. The selected sting balance had two moment gages that were mounted forward and aft of the balance moment center. The difference and the sum of the two gage outputs were fitted in the least squares sense using the normal force and the pitching moment at the balance moment center as independent variables. The regression model search algorithm predicted that the difference of the gage outputs should be modeled using the intercept and the normal force. The sum of the two gage outputs, on the other hand, should be modeled using the intercept, the pitching moment, and the square of the pitching moment. Equations for the deflection of a cantilever beam are used to show that the search algorithm's two recommended math models can also be obtained after performing a rigorous theoretical analysis of the deflection of the sting balance under load. The analysis of this sting balance calibration data set is a rare example of a situation in which regression models of balance calibration data can be derived directly from first principles of physics and engineering. In addition, it is interesting to see that the search algorithm recommended the same regression models for the data analysis using only a set of statistical quality metrics.
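The two recommended regression models are easy to reproduce in miniature. The sketch below generates synthetic gage data consistent with those model forms (all coefficients and noise levels are invented) and recovers them with ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
N = rng.uniform(-100, 100, 50)   # normal force samples
M = rng.uniform(-50, 50, 50)     # pitching moment at the balance moment center

# Synthetic gage responses matching the recommended model forms
diff = 0.3 + 0.02 * N + 0.001 * rng.standard_normal(50)             # intercept + N
ssum = -0.1 + 0.05 * M + 2e-4 * M**2 + 0.001 * rng.standard_normal(50)  # intercept + M + M^2

# Fit each model in the least-squares sense
Xd = np.column_stack([np.ones_like(N), N])         # regressors: [1, N]
Xs = np.column_stack([np.ones_like(M), M, M**2])   # regressors: [1, M, M^2]
cd, *_ = np.linalg.lstsq(Xd, diff, rcond=None)
cs, *_ = np.linalg.lstsq(Xs, ssum, rcond=None)
```

A model-search algorithm of the kind described would compare candidate regressor sets like `Xd` and `Xs` against larger ones using statistical quality metrics, and retain the smallest adequate model.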
Research on Routing Selection Algorithm Based on Genetic Algorithm
NASA Astrophysics Data System (ADS)
Gao, Guohong; Zhang, Baojian; Li, Xueyong; Lv, Jinna
The genetic algorithm is a random-search optimization method based on natural selection and the genetic mechanisms of living organisms. In recent years, because of its potential for solving complicated problems and its successful applications in industrial engineering, the genetic algorithm has attracted wide attention from domestic and international scholars. Routing selection has been defined as part of the standard communication model of IP version 6. This paper proposes a service model for routing selection, and designs and implements a new routing selection algorithm based on a genetic algorithm. Experimental simulation results show that this algorithm obtains better solutions in less time and balances network load more evenly, which enhances the search ratio and the availability of network resources and improves quality of service.
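A minimal genetic algorithm for route selection might look like the sketch below: chromosomes are loop-free paths, crossover splices two parents at a node they share, and mutation regenerates a fresh path. The network topology, link costs, and GA parameters are all invented for illustration and are not taken from the paper:

```python
import random

random.seed(42)  # reproducibility for this sketch

# Toy network: adjacency map with link costs
G = {
    "A": {"B": 2, "C": 5},
    "B": {"A": 2, "C": 1, "D": 4},
    "C": {"A": 5, "B": 1, "D": 1},
    "D": {"B": 4, "C": 1},
}
SRC, DST = "A", "D"

def random_path():
    """Loop-free random walk from SRC to DST (retry on dead ends)."""
    while True:
        path, node = [SRC], SRC
        while node != DST:
            choices = [n for n in G[node] if n not in path]
            if not choices:
                break
            node = random.choice(choices)
            path.append(node)
        if node == DST:
            return path

def cost(path):
    return sum(G[a][b] for a, b in zip(path, path[1:]))

def crossover(p1, p2):
    """Splice the parents at a shared intermediate node, if one exists."""
    common = [n for n in p1[1:-1] if n in p2[1:-1]]
    if not common:
        return p1
    n = random.choice(common)
    child = p1[: p1.index(n)] + p2[p2.index(n):]
    return child if len(set(child)) == len(child) else p1  # reject loops

def ga(pop_size=20, gens=30):
    pop = [random_path() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        nxt = pop[:2]                                # elitism
        while len(nxt) < pop_size:
            p1, p2 = random.sample(pop[:10], 2)      # truncation selection
            child = crossover(p1, p2)
            if random.random() < 0.2:                # mutation: fresh path
                child = random_path()
            nxt.append(child)
        pop = nxt
    return min(pop, key=cost)

best = ga()
```

A load-balancing variant would simply fold link utilization into `cost`, so congested links are penalized and traffic spreads across the network.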
Rocha-Gutiérrez, Beatriz A; Lee, Wen-Yee; Shane Walker, W
2016-01-01
A mass loading and mass balance analysis was performed on selected polybromodiphenyl ethers (PBDEs) in the first full-scale indirect potable reuse treatment plant in the United States. Chemical analysis of PBDEs was performed using an environmentally friendly sample preparation technique, called stir-bar sorptive extraction (SBSE), coupled with thermal desorption and gas chromatography/mass spectrometry (GC/MS). The three most dominant PBDEs found in all the samples were: BDE-47, BDE-99 and BDE-100. In the wastewater influent, the concentrations of studied PBDEs ranged from 94 to 775 ng/L, and in the effluent, the levels were below the detection limit. Concentrations in sludge ranged from 50 to 182 ng/g. In general, a removal efficiency of 92-96% of the PBDEs in the plant was accomplished through primary and secondary processes. The tertiary treatment process was able to effectively reduce the aforementioned PBDEs to less than 10 ng/L (>96% removal efficiency) in the effluent. If PBDEs remain in the treated wastewater effluent, they may pose environmental and health impacts through aquifer recharge, irrigation, and sludge final disposal.
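The removal efficiencies quoted follow directly from influent and effluent concentrations. The sketch below recomputes them; the 31 ng/L secondary-effluent value is invented purely to reproduce the reported 96% figure:

```python
def removal_efficiency(influent, effluent):
    """Percent of analyte removed between two sampling points."""
    return 100.0 * (influent - effluent) / influent

# Abstract figures: influent PBDEs up to 775 ng/L; tertiary effluent < 10 ng/L
secondary = removal_efficiency(775, 31)   # illustrative secondary-effluent value
tertiary = removal_efficiency(775, 10)    # bound implied by the <10 ng/L effluent
```

A full mass-balance analysis would additionally multiply each concentration by the corresponding flow (and sludge mass) to account for PBDEs partitioned to solids rather than degraded.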
Optimal Hops-Based Adaptive Clustering Algorithm
NASA Astrophysics Data System (ADS)
Xuan, Xin; Chen, Jian; Zhen, Shanshan; Kuo, Yonghong
This paper proposes an optimal hops-based adaptive clustering algorithm (OHACA). The algorithm sets an energy selection threshold before clusters form, so that nodes with less energy are more likely to go to sleep immediately. In the setup phase, OHACA introduces an adaptive mechanism to adjust cluster-head selection and balance load, and optimal-distance theory is applied to find the practical optimal routing path that minimizes the total transmission energy. Simulation results show that OHACA prolongs network lifetime, improves the energy utilization rate, and transmits more data because of its energy balancing.
Better Bonded Ethernet Load Balancing
Gabler, Jason
2006-09-29
When a High Performance Storage System's mover shuttles large amounts of data to storage over a single Ethernet device, that single channel can rapidly become saturated. Using Linux Ethernet channel bonding to address this and similar situations was not, until now, a viable solution. The various modes in which channel bonding could be configured always offered some benefit, but only under strict conditions or at a system resource cost that was greater than the benefit gained by using channel bonding. Newer bonding modes designed by various networking hardware companies, helpful in such networking scenarios, were already present in their own switches. However, Linux-based systems were unable to take advantage of those new modes, as they had not yet been implemented in the Linux kernel bonding driver. So, except for basic fault tolerance, Linux channel bonding could not positively combine separate Ethernet devices to provide the necessary bandwidth.
Technology Transfer Automated Retrieval System (TEKTRAN)
Accurate estimates of annual nutrient loads are required to evaluate trends in water quality following changes in land use or management and to calibrate and validate water quality models. While much emphasis has been placed on understanding the uncertainty of watershed-scale nutrient load estimates...
Automatic force balance calibration system
NASA Technical Reports Server (NTRS)
Ferris, Alice T. (Inventor)
1996-01-01
A system for automatically calibrating force balances is provided. The invention uses a reference balance aligned with the balance being calibrated to provide superior accuracy while minimizing the time required to complete the calibration. The reference balance and the test balance are rigidly attached together with closely aligned moment centers. Loads placed on the system affect each balance equally, and the differences in the readings of the two balances can be used to generate the calibration matrix for the test balance. Since the accuracy of the test calibration is determined by the accuracy of the reference balance, and current technology allows reference balances to be calibrated to within ±0.05%, the entire system has an accuracy of ±0.2%. The entire apparatus is relatively small and can be mounted on a movable base for easy transport between test locations. The system can also accept a wide variety of reference balances, thus allowing calibration under diverse load and size requirements.
Akbari, H.; Rainer, L.; Heinemeier, K.; Huang, J.; Franconi, E.
1993-01-01
The Southern California Edison Company (SCE) has conducted an extensive metering project in which electricity end use in 53 commercial buildings in Southern California has been measured. The building types monitored include offices, retail stores, groceries, restaurants, and warehouses. One year (June 1989 through May 1990) of the SCE measured hourly end-use data are reviewed in this report. Annual whole-building and end-use energy use intensities (EUIs) and monthly load shapes (LSs) have been calculated for the different building types based on the monitored data. This report compares the monitored buildings' EUIs and LSs to EUIs and LSs determined using whole-building load data and the End-Use Disaggregation Algorithm (EDA). Two sets of EDA determined EUIs and LSs are compared to the monitored data values. The data sets represent: (1) average buildings in the SCE service territory and (2) specific buildings that were monitored.
Fast algorithm for relaxation processes in big-data systems
NASA Astrophysics Data System (ADS)
Hwang, S.; Lee, D.-S.; Kahng, B.
2014-10-01
Relaxation processes driven by a Laplacian matrix can be found in many real-world big-data systems, for example, in search engines on the World Wide Web and the dynamic load-balancing protocols in mesh networks. To numerically implement such processes, a fast-running algorithm for the calculation of the pseudoinverse of the Laplacian matrix is essential. Here we propose an algorithm which computes quickly and efficiently the pseudoinverse of Markov chain generator matrices satisfying the detailed-balance condition, a general class of matrices including the Laplacian. The algorithm utilizes the renormalization of the Gaussian integral. In addition to its applicability to a wide range of problems, the algorithm outperforms other algorithms in its ability to compute within a manageable computing time arbitrary elements of the pseudoinverse of a matrix of size millions by millions. Therefore our algorithm can be used very widely in analyzing the relaxation processes occurring on large-scale networked systems.
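As a dense reference point (not the paper's fast renormalization-based algorithm), the pseudoinverse of a small graph Laplacian can be computed directly with NumPy and checked against the Moore-Penrose conditions:

```python
import numpy as np

# Laplacian of a small path graph 0-1-2-3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

# Dense Moore-Penrose pseudoinverse: O(n^3), fine for small graphs,
# which is exactly what a fast big-data algorithm must avoid
Lp = np.linalg.pinv(L)

# The Laplacian's null space is the all-ones vector, which Lp shares
ones = np.ones(4)
```

Quantities such as relaxation times and resistance distances on the network are then read off from elements of `Lp`, which is why computing arbitrary pseudoinverse elements efficiently matters at scale.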
NASA Astrophysics Data System (ADS)
Mahto, Tarkeshwar; Mukherjee, V.
2016-09-01
In the present work, a two-area thermal-hybrid interconnected power system, consisting of a thermal unit in one area and a hybrid wind-diesel unit in the other, is considered. Capacitive energy storage (CES) and CES with a static synchronous series compensator (SSSC) are connected to the studied two-area model to compensate for varying load demand, intermittent output power, and area frequency oscillation. A novel quasi-opposition harmony search (QOHS) algorithm is proposed and applied to tune the various tunable parameters of the studied power system model. The simulation study reveals that inclusion of a CES unit in both areas yields superb damping performance for frequency and tie-line power deviation. The simulation results further reveal that inclusion of the SSSC is not viable from either a technical or an economic point of view, as no considerable improvement in transient performance is noted with its inclusion in the tie-line of the studied power system model. The results presented in this paper demonstrate the potential of the proposed QOHS algorithm and show its effectiveness and robustness for solving frequency and power drift problems of the studied power systems. A binary-coded genetic algorithm is used for the sake of comparison.
CAST: Contraction Algorithm for Symmetric Tensors
Rajbhandari, Samyam; Nikam, Akshay; Lai, Pai-Wei; Stock, Kevin; Krishnamoorthy, Sriram; Sadayappan, Ponnuswamy
2014-09-22
Tensor contractions represent the most compute-intensive core kernels in ab initio computational quantum chemistry and nuclear physics. Symmetries in these tensor contractions make them difficult to load balance and scale to large distributed systems. In this paper, we develop an efficient and scalable algorithm to contract symmetric tensors. We introduce a novel approach that avoids data redistribution in contracting symmetric tensors while also avoiding redundant storage and maintaining load balance. We present experimental results on two parallel supercomputers for several symmetric contractions that appear in the CCSD quantum chemistry method. We also present a novel approach to tensor redistribution that can take advantage of parallel hyperplanes when the initial distribution has replicated dimensions, and use collective broadcast when the final distribution has replicated dimensions, making the algorithm very efficient.
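The kind of symmetry at issue can be illustrated in a few lines of NumPy: a tensor invariant under exchange of its two index pairs, contracted with a symmetric matrix, yields a symmetric result, and a distributed algorithm can exploit such symmetry to avoid redundant storage and work. This is illustrative only; the paper's contribution is the communication-avoiding distributed algorithm, not the contraction itself:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Tensor symmetric under exchange of its two index pairs: T[a,b,c,d] == T[c,d,a,b]
X = rng.standard_normal((n, n, n, n))
T = 0.5 * (X + X.transpose(2, 3, 0, 1))

# Symmetric matrix to contract with
W = rng.standard_normal((n, n))
V = 0.5 * (W + W.T)

# A CCSD-flavoured contraction shape: R[a,b] = sum_{c,d} T[a,c,b,d] * V[c,d]
R = np.einsum("acbd,cd->ab", T, V)
```

Because `R` inherits symmetry from `T` and `V`, only its upper triangle need be computed and stored; at scale, respecting that structure without redistributing data is the hard part.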
NASA Astrophysics Data System (ADS)
Tarroja, Brian; Eichman, Joshua D.; Zhang, Li; Brown, Tim M.; Samuelsen, Scott
2015-03-01
A study has been performed that analyzes the effectiveness of utilizing plug-in vehicles to meet holistic environmental goals across the combined electricity and transportation sectors. In this study, plug-in hybrid electric vehicle (PHEV) penetration levels are varied from 0 to 60% and base renewable penetration levels are varied from 10 to 63%. The first part focused on the effect of installing plug-in hybrid electric vehicles on the environmental performance of the combined electricity and transportation sectors. The second part addresses impacts on the design and operation of load-balancing resources on the electric grid associated with fleet capacity factor, peaking and load-following generator capacity, efficiency, ramp rates, start-up events and the levelized cost of electricity. PHEVs using smart charging are found to counteract many of the disruptive impacts of intermittent renewable power on balancing generators for a wide range of renewable penetration levels, only becoming limited at high renewable penetration levels due to lack of flexibility and finite load size. This study highlights synergy between sustainability measures in the electric and transportation sectors and the importance of communicative dispatch of these vehicles.
Irwin, John A.
1979-01-01
A gas turbine engine has an internal drive shaft including one end connected to a driven load and an opposite end connected to a turbine wheel and wherein the shaft has an in situ adjustable balance system near the critical center of a bearing span for the shaft including two 360° rings piloted on the outer diameter of the shaft at a point accessible through an internal engine panel; each of the rings has a small amount of material removed from its periphery whereby both of the rings are precisely unbalanced an equivalent amount; the rings are locked circumferentially together by radial serrations thereon; numbered tangs on the outside diameter of each ring identify the circumferential location of unbalance once the rings are locked together; an aft ring of the pair of rings has a spline on its inside diameter that mates with a like spline on the shaft to lock the entire assembly together.
Reddy, P.B.; Jahns, T.M.
2007-04-30
Surface permanent magnet (SPM) synchronous machines using fractional-slot concentrated windings are being investigated as candidates for high-performance traction machines for automotive electric propulsion systems. It has been shown analytically and experimentally that such designs can achieve very wide constant-power speed ratios (CPSR) [1,2]. This work has shown that machines of this type are capable of achieving very low cogging torque amplitudes as well as significantly increasing the machine power density [3-5] compared to SPM machines using conventional distributed windings. High efficiency can be achieved in this class of SPM machine by making special efforts to suppress the eddy-current losses in the magnets [6-8], accompanied by efforts to minimize the iron losses in the rotor and stator cores. Considerable attention has traditionally been devoted to maximizing the full-load efficiency of traction machines at their rated operating points and along their maximum-power vs. speed envelopes for higher speeds [9,10]. For example, on-line control approaches have been presented for maximizing the full-load efficiency of PM synchronous machines, including the use of negative d-axis stator current to reduce the core losses [11,12]. However, another important performance specification for electric traction applications is the machine's efficiency at partial loads. Partial-load efficiency is particularly important if the target traction application requires long periods of cruising operation at light loads that are significantly lower than the maximum drive capabilities. While the design of the machine itself is clearly important, investigation has shown that this is a case where the choice of the control algorithm plays a critical role in determining the maximum partial-load efficiency that the machine actually achieves in the traction drive system. There is no evidence that this important topic has been addressed for this type of SPM machine by any other authors.
McKeever, John W; Reddy, Patel; Jahns, Thomas M
2007-05-01
Ghaedi, M; Azad, F Nasiri; Dashtian, K; Hajati, S; Goudarzi, A; Soylak, M
2016-10-05
Maximum malachite green (MG) adsorption onto ZnO nanorod-loaded activated carbon (ZnO-NR-AC) was achieved following the optimization of conditions, while the mass transfer was accelerated by ultrasound. The central composite design (CCD) and genetic algorithm (GA) were used to estimate the effect of individual variables and their mutual interactions on the MG adsorption as the response and to optimize the adsorption process. The ZnO-NR-AC surface morphology and its properties were identified via FESEM, XRD and FTIR. Investigation of the adsorption equilibrium isotherms and kinetic models revealed that the experimental data are well fitted by the Langmuir isotherm and the pseudo-second-order kinetic model, respectively. It was shown that a small amount of ZnO-NR-AC (with an adsorption capacity of 20 mg g(-1)) is sufficient for the rapid removal of a high amount of MG dye in a short time (3.99 min).
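The pseudo-second-order model mentioned above is commonly fitted via its linearized form t/qt = 1/(k·qe²) + t/qe, so a plain least-squares line fit recovers qe and k. The sketch below uses our own function names and invented sample values, not the paper's data:

```python
# Hypothetical sketch: recovering pseudo-second-order parameters (qe, k)
# from uptake-vs-time data via the linearized form t/qt = 1/(k*qe^2) + t/qe.

def fit_pseudo_second_order(times, uptakes):
    """Least-squares fit of t/qt against t; returns (qe, k)."""
    ys = [t / q for t, q in zip(times, uptakes)]
    n = len(times)
    mean_t = sum(times) / n
    mean_y = sum(ys) / n
    sxx = sum((t - mean_t) ** 2 for t in times)
    sxy = sum((t - mean_t) * (y - mean_y) for t, y in zip(times, ys))
    slope = sxy / sxx                    # equals 1/qe
    intercept = mean_y - slope * mean_t  # equals 1/(k*qe^2)
    qe = 1.0 / slope
    k = 1.0 / (intercept * qe ** 2)
    return qe, k

if __name__ == "__main__":
    # Invented example: qe = 20 mg/g, k = 0.05 g/(mg*min), exact model data.
    ts = [1.0, 2.0, 5.0, 10.0, 20.0]
    qs = [20.0 * t / (1.0 + t) for t in ts]
    print(fit_pseudo_second_order(ts, qs))
```

With data generated exactly from the model, the fit returns the generating qe and k.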
NASA Astrophysics Data System (ADS)
Ghaedi, M.; Azad, F. Nasiri; Dashtian, K.; Hajati, S.; Goudarzi, A.; Soylak, M.
2016-10-01
Improved Regression Analysis of Temperature-Dependent Strain-Gage Balance Calibration Data
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2015-01-01
An improved approach is discussed that may be used to directly include first and second order temperature effects in the load prediction algorithm of a wind tunnel strain-gage balance. The improved approach was designed for the Iterative Method that fits strain-gage outputs as a function of calibration loads and uses a load iteration scheme during the wind tunnel test to predict loads from measured gage outputs. The improved approach assumes that the strain-gage balance is at a constant uniform temperature when it is calibrated and used. First, the method introduces a new independent variable for the regression analysis of the balance calibration data. The new variable is designed as the difference between the uniform temperature of the balance and a global reference temperature. This reference temperature should be the primary calibration temperature of the balance so that, if needed, a tare load iteration can be performed. Then, two temperature-dependent terms are included in the regression models of the gage outputs. They are the temperature difference itself and the square of the temperature difference. Simulated temperature-dependent data obtained from Triumph Aerospace's 2013 calibration of NASA's ARC-30K five-component semi-span balance is used to illustrate the application of the improved approach.
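To make the regression step concrete, here is a minimal sketch for a single hypothetical gage (our own function names and simplified model, not NASA's actual balance software): the gage output is fitted by ordinary least squares to the applied load plus the temperature difference and its square, via the normal equations:

```python
# Illustrative sketch: fit rG = a0 + a1*F + a2*dT + a3*dT^2 for one gage,
# where dT = T - T_ref is the temperature difference from the reference.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_gage_output(loads, dTs, outputs):
    """Least-squares coefficients [a0, a1, a2, a3] via the normal equations."""
    rows = [[1.0, F, dT, dT * dT] for F, dT in zip(loads, dTs)]
    p = 4
    AtA = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    Atb = [sum(r[i] * y for r, y in zip(rows, outputs)) for i in range(p)]
    return solve(AtA, Atb)
```

Feeding the fit data generated exactly from such a model returns the generating coefficients, which is a quick sanity check of the regression setup.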
Xu, Zuwei; Zhao, Haibo; Zheng, Chuguang
2015-01-15
This paper proposes a comprehensive framework for accelerating population balance-Monte Carlo (PBMC) simulation of particle coagulation dynamics. By combining Markov jump model, weighted majorant kernel and GPU (graphics processing unit) parallel computing, a significant gain in computational efficiency is achieved. The Markov jump model constructs a coagulation-rule matrix of differentially-weighted simulation particles, so as to capture the time evolution of particle size distribution with low statistical noise over the full size range and as far as possible to reduce the number of time loopings. Here three coagulation rules are highlighted and it is found that constructing appropriate coagulation rule provides a route to attain the compromise between accuracy and cost of PBMC methods. Further, in order to avoid double looping over all simulation particles when considering the two-particle events (typically, particle coagulation), the weighted majorant kernel is introduced to estimate the maximum coagulation rates being used for acceptance–rejection processes by single-looping over all particles, and meanwhile the mean time-step of coagulation event is estimated by summing the coagulation kernels of rejected and accepted particle pairs. The computational load of these fast differentially-weighted PBMC simulations (based on the Markov jump model) is reduced greatly to be proportional to the number of simulation particles in a zero-dimensional system (single cell). Finally, for a spatially inhomogeneous multi-dimensional (multi-cell) simulation, the proposed fast PBMC is performed in each cell, and multiple cells are parallel processed by multi-cores on a GPU that can implement the massively threaded data-parallel tasks to obtain remarkable speedup ratio (comparing with CPU computation, the speedup ratio of GPU parallel computing is as high as 200 in a case of 100 cells with 10 000 simulation particles per cell). These accelerating approaches of PBMC are
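A toy illustration of the acceptance-rejection step (our own simplified construction with a plain product kernel, not the paper's differentially-weighted scheme): candidate pairs are drawn uniformly and accepted with probability K/K_maj, so only a cheap majorant bound on the kernel has to be evaluated up front rather than a double loop over all pairs:

```python
# Sketch of acceptance-rejection pair sampling with a majorant kernel.
import random

def product_kernel(vi, vj):
    return vi * vj  # simple illustrative coagulation kernel

def sample_coagulation_pair(volumes, kernel, k_maj, rng):
    """Draw a pair (i, j), i != j, accepted with probability kernel/k_maj.

    k_maj must satisfy k_maj >= kernel(vi, vj) for every pair, so the
    acceptance probability is always a valid number in (0, 1].
    """
    n = len(volumes)
    while True:
        i, j = rng.randrange(n), rng.randrange(n)
        if i == j:
            continue
        if rng.random() < kernel(volumes[i], volumes[j]) / k_maj:
            return i, j

if __name__ == "__main__":
    rng = random.Random(42)
    vols = [0.5, 1.0, 2.0, 4.0]
    k_maj = max(vols) ** 2  # majorant: vmax*vmax bounds vi*vj for all pairs
    print(sample_coagulation_pair(vols, product_kernel, k_maj, rng))
```

The majorant here is deliberately crude; the paper's point is that a tighter, cheaply computed bound keeps the rejection rate low while still requiring only a single loop over particles.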
NASA Astrophysics Data System (ADS)
Xu, Zuwei; Zhao, Haibo; Zheng, Chuguang
2015-01-01
... often, it could be a sign of a balance problem. Balance problems can make you feel unsteady or as ... fall-related injuries, such as hip fracture. Some balance problems are due to problems in the inner ...
NASA Technical Reports Server (NTRS)
Simanonok, K. E.; Fortney, S. M.; Ford, S. R.; Charles, J. B.; Ward, D. F.
1994-01-01
Shuttle astronauts currently drink approximately a quart of water with eight salt tablets before reentry to restore lost body fluid and thereby reduce the likelihood of cardiovascular instability and syncope during reentry and after landing. However, the saline loading countermeasure is not entirely effective in restoring orthostatic tolerance to preflight levels. We tested the hypothesis that the effectiveness of this countermeasure could be improved with the use of a vasopressin analog, 1-deamino-8-D-arginine vasopressin (dDAVP). The rationale for this approach is that reducing urine formation with exogenous vasopressin should increase the magnitude and duration of the vascular volume expansion produced by the saline load, and in so doing improve orthostatic tolerance during reentry and postflight.
NASA Astrophysics Data System (ADS)
Magirl, C. S.; Czuba, J. A.; Czuba, C. R.; Curran, C. A.
2012-12-01
Despite heavy sediment loads, large winter floods, and floodplain development, the rivers draining Mount Rainier, a 4,392-m glaciated stratovolcano within 85 km of sea level at Puget Sound, Washington, support important populations of anadromous salmonids, including Chinook salmon and steelhead trout, both listed as threatened under the Endangered Species Act. Aggressive river-management approaches of the early 20th century, such as bank armoring and gravel dredging, are being replaced by more ecologically sensitive approaches including setback levees. However, ongoing aggradation rates of up to 8 cm/yr in lowland reaches present acute challenges for resource managers tasked with ensuring flood protection without deleterious impacts to aquatic ecology. Using historical sediment-load data and a recent reservoir survey of sediment accumulation, rivers draining Mount Rainier were found to carry total sediment yields of 350 to 2,000 tonnes/km2/yr, notably larger than sediment yields of 50 to 200 tonnes/km2/yr typical for other Cascade Range rivers. An estimated 70 to 94% of the total sediment load in lowland reaches originates from the volcano. Looking toward the future, transport-capacity analyses and sediment-transport modeling suggest that large increases in bedload and associated aggradation will result from modest increases in rainfall and runoff that are predicted under future climate conditions. If large sediment loads and associated aggradation continue, creative solutions and long-term management strategies are required to protect people and structures in the floodplain downstream of Mount Rainier while preserving aquatic ecosystems.
NASA Astrophysics Data System (ADS)
Eckert, C. H. J.; Zenker, E.; Bussmann, M.; Albach, D.
2016-10-01
We present an adaptive Monte Carlo algorithm for computing the amplified spontaneous emission (ASE) flux in laser gain media pumped by pulsed lasers. With the design of high power lasers in mind, which require large size gain media, we have developed the open source code HASEonGPU that is capable of utilizing multiple graphics processing units (GPUs). With HASEonGPU, time to solution is reduced to minutes on a medium size GPU cluster of 64 NVIDIA Tesla K20m GPUs and excellent speedup is achieved when scaling to multiple GPUs. Comparison of simulation results to measurements of ASE in Yb3+:YAG ceramics shows perfect agreement.
Performance Evaluation of an Option-Based Learning Algorithm in Multi-Car Elevator Systems
NASA Astrophysics Data System (ADS)
Valdivielso Chian, Alex; Miyamoto, Toshiyuki
In this letter, we present the evaluation of an option-based learning algorithm, developed to perform conflict-free allocation of calls among cars in a multi-car elevator system. We evaluate its performance in terms of service time, flexibility in task allocation, and load balancing.
Technology Transfer Automated Retrieval System (TEKTRAN)
Reliable estimation of the surface energy balance from local to regional scales is crucial for many applications including weather forecasting, hydrologic modeling, irrigation scheduling, water resource management, and climate change research, just to name a few. Numerous models have been developed ...
NASA Technical Reports Server (NTRS)
Srinivasan, R. S.; Simanonok, K. E.; Charles, J. B.
1994-01-01
Fluid loading (FL) before Shuttle reentry is a countermeasure currently in use by NASA to improve the orthostatic tolerance of astronauts during reentry and postflight. The fluid load consists of water and salt tablets equivalent to 32 oz (946 ml) of isotonic saline. However, the effectiveness of this countermeasure has been observed to decrease with the duration of spaceflight. The countermeasure's effectiveness may be improved by enhancing fluid retention using analogs of vasopressin such as lypressin (LVP) and desmopressin (dDAVP). In a computer simulation study reported previously, we attempted to assess the improvement in fluid retention obtained by the use of LVP administered before FL. The present study is concerned with the use of dDAVP. In a recent 24-hour, 6 degree head-down tilt (HDT) study involving seven men, dDAVP was found to improve orthostatic tolerance as assessed by both lower body negative pressure (LBNP) and stand tests. The treatment restored Luft's cumulative stress index (cumulative product of magnitude and duration of LBNP) to nearly pre-bedrest level. The heart rate was lower and stroke volume was marginally higher at the same LBNP levels with administration of dDAVP compared to placebo. Lower heart rates were also observed with dDAVP during stand test, despite the lower level of cardiovascular stress. These improvements were seen with only a small but significant increase in plasma volume of approximately 3 percent. This paper presents a computer simulation analysis of some of the results of this HDT study.
Resource Balancing Control Allocation
NASA Technical Reports Server (NTRS)
Frost, Susan A.; Bodson, Marc
2010-01-01
Next generation aircraft with a large number of actuators will require advanced control allocation methods to compute the actuator commands needed to follow desired trajectories while respecting system constraints. Previously, algorithms were proposed to minimize the l1 or l2 norms of the tracking error and of the control effort. The paper discusses the alternative choice of using the l1 norm for minimization of the tracking error and a normalized l(infinity) norm, or sup norm, for minimization of the control effort. The algorithm computes the norm of the actuator deflections scaled by the actuator limits. Minimization of the control effort then translates into the minimization of the maximum actuator deflection as a percentage of its range of motion. The paper shows how the problem can be solved effectively by converting it into a linear program and solving it using a simplex algorithm. Properties of the algorithm are investigated through examples. In particular, the min-max criterion results in a type of resource balancing, where the resources are the control surfaces and the algorithm balances these resources to achieve the desired command. A study of the sensitivity of the algorithms to the data is presented, which shows that the normalized l(infinity) algorithm has the lowest sensitivity, although high sensitivities are observed whenever the limits of performance are reached.
A Robust Load Shedding Strategy for Microgrid Islanding Transition
Liu, Guodong; Xiao, Bailu; Starke, Michael R; Ceylan, Oguzhan; Tomsovic, Kevin
2016-01-01
A microgrid is a group of interconnected loads and distributed energy resources. It can operate in either grid-connected mode to exchange energy with the main grid or run autonomously as an island in emergency mode. However, the transition of a microgrid from grid-connected mode to islanded mode is usually associated with excessive load (or generation), which should be shed (or spilled). Under this condition, this paper proposes a robust load shedding strategy for microgrid islanding transition, which takes into account the uncertainties of renewable generation in the microgrid and guarantees the balance between load and generation after islanding. A robust optimization model is formulated to minimize the total operation cost, including fuel cost and the penalty for load shedding. The proposed robust load shedding strategy works as a backup plan and updates at a prescribed interval. It assures a feasible operating point after islanding given the uncertainty of renewable generation. The proposed algorithm is demonstrated on a simulated microgrid consisting of a wind turbine, a PV panel, a battery, two distributed generators (DGs), a critical load and an interruptible load. Numerical simulation results validate the proposed algorithm.
Method and apparatus for calibrating multi-axis load cells in a dexterous robot
NASA Technical Reports Server (NTRS)
Wampler, II, Charles W. (Inventor); Platt, Jr., Robert J. (Inventor)
2012-01-01
A robotic system includes a dexterous robot having robotic joints, angle sensors adapted for measuring joint angles at a corresponding one of the joints, load cells for measuring a set of strain values imparted to a corresponding one of the load cells during a predetermined pose of the robot, and a host machine. The host machine is electrically connected to the load cells and angle sensors, and receives the joint angle values and strain values during the predetermined pose. The robot presses together mating pairs of load cells to form the poses. The host machine executes an algorithm to process the joint angles and strain values, and from the set of all calibration matrices that minimize error in the force balance equations, selects the set of calibration matrices that is closest in value to a pre-specified value. A method for calibrating the load cells via the algorithm is also provided.
Bankruptcy Problem Approach to Load-Shedding in Agent-Based Microgrid Operation
NASA Astrophysics Data System (ADS)
Kim, Hak-Man; Kinoshita, Tetsuo; Lim, Yujin; Kim, Tai-Hoon
Research, development, and demonstration projects on microgrids have progressed in many countries, and microgrids are expected to be introduced into power grids as eco-friendly small-scale power grids in the near future. Load-shedding cannot be avoided when power supply and power demand must be balanced to maintain a specific frequency such as 50 Hz or 60 Hz. Load-shedding inconveniences consumers and therefore should be performed minimally. Recently, agent-based microgrid operation has been studied, and new algorithms for autonomous operation, including load-shedding, are required. The bankruptcy problem deals with distributing an insufficient resource among claimants. In this paper, we approach the load-shedding problem as a bankruptcy problem and adopt the Talmud rule as the allocation algorithm. Load-shedding using the Talmud rule is tested in islanded microgrid operation based on a multiagent system.
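The Talmud (Aumann-Maschler) rule itself is easy to state: if the estate is at most half of the total claims, apply constrained equal awards (CEA) to the half-claims; otherwise give each claimant their claim minus CEA of the half-claims applied to the deficit. A sketch with our own function names (in the microgrid setting the claims would be the loads' demands and the estate the available generation):

```python
# Sketch of the Talmud rule for a bankruptcy problem: divide estate E
# among claims c_i.

def _cea(half_claims, amount):
    """Find lam with sum(min(h, lam)) == amount by bisection."""
    lo, hi = 0.0, max(half_claims)
    for _ in range(200):
        lam = (lo + hi) / 2.0
        if sum(min(h, lam) for h in half_claims) < amount:
            lo = lam
        else:
            hi = lam
    return (lo + hi) / 2.0

def talmud(claims, estate):
    half = [c / 2.0 for c in claims]
    total = sum(claims)
    if estate <= total / 2.0:
        # small estate: constrained equal awards on the half-claims
        lam = _cea(half, estate)
        return [min(h, lam) for h in half]
    # large estate: each claimant loses a CEA share of the deficit
    lam = _cea(half, total - estate)
    return [c - min(h, lam) for c, h in zip(claims, half)]
```

This reproduces the classic Talmud divisions: claims (100, 200, 300) with estate 200 yield awards (50, 75, 75), and with estate 400 yield (50, 125, 225).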
Design and Stability of Load-Side Primary Frequency Control in Power Systems
Zhao, CH; Topcu, U; Li, N; Low, S
2014-05-01
We present a systematic method to design ubiquitous continuous fast-acting distributed load control for primary frequency regulation in power networks, by formulating an optimal load control (OLC) problem where the objective is to minimize the aggregate cost of tracking an operating point subject to power balance over the network. We prove that the swing dynamics and the branch power flows, coupled with frequency-based load control, serve as a distributed primal-dual algorithm to solve OLC. We establish the global asymptotic stability of a multimachine network under this type of load-side primary frequency control. These results imply that the local frequency deviations on each bus convey exactly the right information about the global power imbalance for the loads to make individual decisions that turn out to be globally optimal. Simulations confirm that the proposed algorithm can rebalance power and resynchronize bus frequencies after a disturbance with significantly improved transient performance.
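As a toy, single-bus caricature of this primal-dual mechanism (our own construction, far simpler than the paper's network model): with quadratic disutilities d_i²/(2·α_i), each load's local best response to a common multiplier λ, which plays the role of the frequency deviation, drives the aggregate load to the imbalance P:

```python
# Toy dual-gradient iteration: loads respond locally to a shared multiplier
# ("frequency deviation") and the aggregate converges to the imbalance P.
# Stable for step sizes gamma < 2 / sum(alphas).

def optimal_load_control(alphas, P, gamma=0.1, iters=200):
    lam = 0.0
    for _ in range(iters):
        d = [a * lam for a in alphas]  # each load's local best response
        lam += gamma * (P - sum(d))    # multiplier update from the imbalance
    return lam, [a * lam for a in alphas]
```

With alphas = [1, 2, 3] and P = 10 the iteration converges to λ = 10/6 and loads (10/6, 20/6, 30/6), whose sum is exactly P: the shared signal alone suffices for a globally optimal split, which is the point the abstract makes about frequency deviations.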
Ghaedi, M; Shojaeipour, E; Ghaedi, A M; Sahraei, Reza
2015-05-05
In this study, copper nanowires loaded on activated carbon (Cu-NWs-AC) was used as a novel efficient adsorbent for the removal of malachite green (MG) from aqueous solution. This new material was synthesized through a simple protocol and its surface properties such as surface area, pore volume and functional groups were characterized with different techniques such as XRD, BET and FESEM analysis. The relation between removal percentage and variables such as solution pH, adsorbent dosage (0.005, 0.01, 0.015, 0.02 and 0.1 g), contact time (1-40 min) and initial MG concentration (5, 10, 20, 70 and 100 mg/L) was investigated and optimized. A three-layer artificial neural network (ANN) model was utilized to predict the malachite green dye removal (%) by Cu-NWs-AC following the conduction of 248 experiments. When the training of the ANN was performed, the parameters of the ANN model were as follows: a linear transfer function (purelin) at the output layer, the Levenberg-Marquardt algorithm (LMA), and a tangent sigmoid transfer function (tansig) at the hidden layer with 11 neurons. A minimum mean squared error (MSE) of 0.0017 and a coefficient of determination (R2) of 0.9658 were found for prediction and modeling of dye removal using the testing data set. A good agreement between experimental data and data predicted by the ANN model was obtained. Fitting the experimental data under the previously optimized conditions confirms the suitability of the Langmuir isotherm model, with a maximum adsorption capacity of 434.8 mg/g at 25 °C. Kinetic studies at various adsorbent masses and initial MG concentrations show that the MG maximum removal percentage was achieved within 20 min. The adsorption of MG follows the pseudo-second-order model with a contribution of the intraparticle diffusion model.
NASA Astrophysics Data System (ADS)
Ghaedi, M.; Shojaeipour, E.; Ghaedi, A. M.; Sahraei, Reza
2015-05-01
Woodruff, S.B.
1992-01-01
The Transient Reactor Analysis Code (TRAC), which features a two-fluid treatment of thermal-hydraulics, is designed to model transients in water reactors and related facilities. One of the major computational costs associated with TRAC and similar codes is calculating constitutive coefficients. Although the formulations for these coefficients are local, the costs are flow-regime- or data-dependent; i.e., the computations needed for a given spatial node often vary widely as a function of time. Consequently, poor load balancing will degrade efficiency on either vector or data-parallel architectures when the data are organized according to spatial location. Unfortunately, a general automatic solution to the load-balancing problem associated with data-dependent computations is not yet available for massively parallel architectures. This document discusses why developers should consider algorithms, such as a neural net representation, that do not exhibit load-balancing problems.
NASA Technical Reports Server (NTRS)
Eidson, T. M.; Erlebacher, G.
1994-01-01
While parallel computers offer significant computational performance, it is generally necessary to evaluate several programming strategies. Two programming strategies for a fairly common problem - a periodic tridiagonal solver - are developed and evaluated. Simple model calculations as well as timing results are presented to evaluate the various strategies. The particular tridiagonal solver evaluated is used in many computational fluid dynamic simulation codes. The feature that makes this algorithm unique is that these simulation codes usually require simultaneous solutions for multiple right-hand-sides (RHS) of the system of equations. Each RHS solution is independent and thus can be computed in parallel. Thus a Gaussian elimination type algorithm can be used in a parallel computation and the more complicated approaches such as cyclic reduction are not required. The two strategies are a transpose strategy and a distributed solver strategy. For the transpose strategy, the data is moved so that a subset of all the RHS problems is solved on each of the several processors. This usually requires significant data movement between processor memories across a network. The second strategy attempts to have the algorithm pass the data across processor boundaries in a chained manner. This usually requires significantly less data movement. An approach to accomplish this second strategy in a near-perfect load-balanced manner is developed. In addition, an algorithm will be shown to directly transform a sequential Gaussian elimination type algorithm into the parallel chained, load-balanced algorithm.
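The multiple-RHS reuse idea can be sketched on the plain (non-periodic) tridiagonal case; the paper's solver additionally handles periodicity, and the function names here are our own. The elimination factors are computed once and then applied to each right-hand side independently, which is exactly what makes per-RHS parallelism attractive:

```python
# Sketch of the Thomas algorithm with the factorization reused across RHS.

def thomas_factor(a, b, c):
    """a: sub-diagonal (n-1), b: diagonal (n), c: super-diagonal (n-1)."""
    n = len(b)
    denom = [0.0] * n        # modified pivots from forward elimination
    cp = [0.0] * (n - 1)     # modified super-diagonal factors
    denom[0] = b[0]
    for i in range(1, n):
        cp[i - 1] = c[i - 1] / denom[i - 1]
        denom[i] = b[i] - a[i - 1] * cp[i - 1]
    return denom, cp

def thomas_solve(a, denom, cp, d):
    """Solve one RHS d, reusing the precomputed factors."""
    n = len(denom)
    y = [0.0] * n
    y[0] = d[0] / denom[0]
    for i in range(1, n):
        y[i] = (d[i] - a[i - 1] * y[i - 1]) / denom[i]
    x = [0.0] * n
    x[n - 1] = y[n - 1]
    for i in range(n - 2, -1, -1):
        x[i] = y[i] - cp[i] * x[i + 1]
    return x
```

In a parallel setting, each processor can run `thomas_solve` on its own subset of right-hand sides after a single shared factorization, which is the transpose strategy in miniature.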
Evaluating the Impact of Solar Generation on Balancing Requirements in Southern Nevada System
Ma, Jian; Lu, Shuai; Etingov, Pavel V.; Makarov, Yuri V.
2012-07-26
In this paper, the impacts of solar photovoltaic (PV) generation on balancing requirements, including regulation and load following, in the Southern Nevada balancing area are analyzed. The methodology is based on the "swinging door" algorithm and a probability box method developed by PNNL. The regulation and load following signals mimic the system's scheduling and real-time dispatch processes. Load, solar PV generation and distributed PV generation (DG) data are used in the simulation. Different levels of solar PV generation and DG penetration profiles are used in the study. Sensitivity of the regulation requirements with respect to real-time solar PV generation forecast errors is analyzed.
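A minimal sketch of swinging-door compression (our own simplified reading; the PNNL methodology builds the regulation and load-following signals on top of it): a point is archived only when the "doors" swung from the last archived point can no longer cover a new sample within the tolerance, so the retained points define the piecewise-linear trend and the discarded deviations represent the fast component:

```python
# Sketch of swinging-door trend extraction: returns indices of retained points.

def swinging_door(points, dev):
    """points: list of (t, v) with strictly increasing t; dev: tolerance."""
    kept = [0]
    anchor = points[0]
    up, low = float("inf"), float("-inf")
    for i in range(1, len(points)):
        t, v = points[i]
        ta, va = anchor
        up = min(up, (v + dev - va) / (t - ta))   # tightest upper-door slope
        low = max(low, (v - dev - va) / (t - ta)) # tightest lower-door slope
        if low > up:                    # doors crossed: archive previous point
            kept.append(i - 1)
            anchor = points[i - 1]
            ta, va = anchor
            up = (v + dev - va) / (t - ta)
            low = (v - dev - va) / (t - ta)
    kept.append(len(points) - 1)
    return kept
```

On a straight ramp only the endpoints survive, while a spike forces every point around it to be archived; the same separation of slow trend from fast excursions underlies splitting load following from regulation.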
NASA Astrophysics Data System (ADS)
Yin, Zhendong; Zong, Zhiyuan; Sun, Hongjian; Wu, Zhilu; Yang, Zhutian
2012-12-01
In this article, an efficient multiuser detector based on the artificial fish swarm algorithm (AFSA-MUD) is proposed and investigated for direct-sequence ultrawideband systems under different channels: the additive white Gaussian noise channel and the IEEE 802.15.3a multipath channel. The literature review identifies two unresolved issues: the computational complexity of classical optimum multiuser detection (OMD) rises exponentially with the number of users, and the bit error rate (BER) performance of other sub-optimal multiuser detectors is not satisfactory. The proposed method makes a good tradeoff between complexity and performance through the various behaviors of artificial fishes in a simplified Euclidean solution space, which is constructed from the solutions of several sub-optimal multiuser detectors: the minimum mean square error detector, the decorrelating detector, and the successive interference cancellation detector. As a result of this novel scheme, the convergence speed of AFSA-MUD is greatly accelerated and the number of iterations is also significantly reduced. The experimental results demonstrate that the BER performance and the near-far effect resistance of this proposed algorithm are quite close to those of OMD, while its computational complexity is much lower than that of the traditional OMD. Moreover, as the number of active users increases, the BER performance of AFSA-MUD remains almost the same as that of OMD.
NASA Astrophysics Data System (ADS)
1981-01-01
Mechanical Technology, Incorporated developed a fully automatic laser machining process that allows more precise balancing, removes metal faster, eliminates excess metal removal and other operator-induced inaccuracies, and provides a significant reduction in balancing time. Manufacturing costs are reduced as a result.
Baby Carriage: Infants Walking with Loads
ERIC Educational Resources Information Center
Garciaguirre, Jessie S.; Adolph, Karen E.; Shrout, Patrick E.
2007-01-01
Maintaining balance is a central problem for new walkers. To examine how infants cope with the additional balance control problems induced by load carriage, 14-month-olds were loaded with 15% of their body weight in shoulder-packs. Both symmetrical and asymmetrical loads disrupted alternating gait patterns and caused less mature footfall patterns.…
Frequency effects on the stability of a journal bearing for periodic loading
NASA Technical Reports Server (NTRS)
Vijayaraghavan, D.; Brewe, D. E.
1991-01-01
The stability of a journal bearing is numerically predicted when a unidirectional periodic external load is applied. The analysis is performed using a cavitation algorithm, which mimics the Jakobsson-Floberg and Olsson (JFO) theory by accounting for the mass balance through the complete bearing. Hence, the history of the film is taken into consideration. The loading pattern is taken to be sinusoidal and the frequency of the load cycle is varied. The results are compared with the predictions using Reynolds boundary conditions for both film rupture and reformation. With such comparisons, the need for accurately predicting the cavitation regions for complex loading patterns is clearly demonstrated. For a particular frequency of loading, the effects of mass, amplitude of load variation and frequency of journal speed are also investigated. The journal trajectories, transient variations in fluid film forces, net surface velocity and minimum film thickness, and pressure profiles are also presented.
Energy Aware Clustering Algorithms for Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Rakhshan, Noushin; Rafsanjani, Marjan Kuchaki; Liu, Chenglian
2011-09-01
The sensor nodes deployed in wireless sensor networks (WSNs) are extremely power constrained, so maximizing the lifetime of the entire network is a primary design goal. In wireless sensor networks, hierarchical network structures have the advantage of providing scalable and energy-efficient solutions. In this paper, we investigate different clustering algorithms for WSNs and compare them on metrics such as clustering distribution, cluster load balancing, cluster head (CH) selection strategy, CH role rotation, node mobility, cluster overlap, intra-cluster communication, reliability, security, and location awareness.
NASA Astrophysics Data System (ADS)
Kong, Xiangxi; Zhang, Xueliang; Chen, Xiaozhe; Wen, Bangchun; Wang, Bo
2016-05-01
In this paper, phase and speed synchronization control of four eccentric rotors (ERs) driven by induction motors in a linear vibratory feeder with unknown time-varying load torques is studied. Firstly, the electromechanical coupling model of the linear vibratory feeder is established by combining the induction motor model with the dynamic model of the system, yielding a typical underactuated model. Given the characteristics of the linear vibratory feeder, the complex control problem for the underactuated electromechanical coupling model is converted into phase and speed synchronization control of the four ERs. In order to keep the four ERs operating synchronously with zero phase differences, phase and speed synchronization controllers are designed using an adaptive sliding mode control (ASMC) algorithm via a modified master-slave structure. The stability of the controllers is proved by the Lyapunov stability theorem. The proposed controllers are verified by simulation in Matlab/Simulink and compared with conventional sliding mode control (SMC). The results show that the proposed controllers reject the time-varying load torques effectively and that the four ERs operate synchronously with zero phase differences. Moreover, the control performance is better than that of conventional SMC and the chattering phenomenon is attenuated. Furthermore, the effects of reference speed and parametric perturbations are discussed to show the strong robustness of the proposed controllers. Finally, experiments on a simple vibratory test bench, run with and without the proposed controllers, further validate their effectiveness.
... a new type of balance therapy using computerized, virtual reality. UPMC associate professor Susan Whitney, Ph.D., ... involves simulated trips down the aisles of a virtual grocery store in the university's Medical Virtual Reality ...
Parallel algorithms for semi-Lagrangian transport in global atmospheric circulation models
Drake, J.B.; Worley, P.H.; Michalakes, J.; Foster, I.T.
1995-02-01
Global atmospheric circulation models (GCM) typically have three primary algorithmic components: columnar physics, spectral transform, and semi-Lagrangian transport. In developing parallel implementations, these three components are equally important and can be examined somewhat independently. A two-dimensional horizontal data decomposition of the three-dimensional computational grid leaves all physics computations on processor, and the only efficiency issues arise in load balancing. A recently completed study by the authors of different approaches to parallelizing the spectral transform showed several viable algorithms. Preliminary results of an analogous study of algorithmic alternatives for parallel semi-Lagrangian transport are described here.
14 CFR 23.423 - Maneuvering loads.
Code of Federal Regulations, 2011 CFR
2011-01-01
... STANDARDS: NORMAL, UTILITY, ACROBATIC, AND COMMUTER CATEGORY AIRPLANES Structure Horizontal Stabilizing and Balancing Surfaces § 23.423 Maneuvering loads. Each horizontal surface and its supporting structure, and...
14 CFR 23.423 - Maneuvering loads.
Code of Federal Regulations, 2012 CFR
2012-01-01
... STANDARDS: NORMAL, UTILITY, ACROBATIC, AND COMMUTER CATEGORY AIRPLANES Structure Horizontal Stabilizing and Balancing Surfaces § 23.423 Maneuvering loads. Each horizontal surface and its supporting structure, and...
14 CFR 23.423 - Maneuvering loads.
Code of Federal Regulations, 2010 CFR
2010-01-01
... STANDARDS: NORMAL, UTILITY, ACROBATIC, AND COMMUTER CATEGORY AIRPLANES Structure Horizontal Stabilizing and Balancing Surfaces § 23.423 Maneuvering loads. Each horizontal surface and its supporting structure, and...
14 CFR 23.423 - Maneuvering loads.
Code of Federal Regulations, 2013 CFR
2013-01-01
... STANDARDS: NORMAL, UTILITY, ACROBATIC, AND COMMUTER CATEGORY AIRPLANES Structure Horizontal Stabilizing and Balancing Surfaces § 23.423 Maneuvering loads. Each horizontal surface and its supporting structure, and...
14 CFR 23.423 - Maneuvering loads.
Code of Federal Regulations, 2014 CFR
2014-01-01
... STANDARDS: NORMAL, UTILITY, ACROBATIC, AND COMMUTER CATEGORY AIRPLANES Structure Horizontal Stabilizing and Balancing Surfaces § 23.423 Maneuvering loads. Each horizontal surface and its supporting structure, and...
Algorithms and Algorithmic Languages.
ERIC Educational Resources Information Center
Veselov, V. M.; Koprov, V. M.
This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…
Yu, Zhenhua; Fu, Xiao; Cai, Yuanli; Vuran, Mehmet C
2011-01-01
A reliable energy-efficient multi-level routing algorithm for wireless sensor networks is proposed. The algorithm considers the residual energy, number of neighbors, and centrality of each node for cluster formation, which is critical for well-balanced energy dissipation across the network. A knowledge-based inference approach using fuzzy Petri nets is employed to select cluster heads, and a fuzzy reasoning mechanism then computes the degree of reliability in the route sprouting tree from the cluster heads to the base station. Finally, the most reliable route among the cluster heads is constructed. The algorithm not only balances the energy load of each node but also provides global reliability for the whole network. Simulation results demonstrate that the proposed algorithm effectively prolongs the network lifetime and reduces energy consumption.
David Chassin, Pavel Etingov
2013-04-30
The LMDT software automates preparation of composite load model data in the formats supported by the major power system software vendors (GE and Siemens). Proper representation of the composite load model in power system dynamic analysis is very important. Power system simulation tools such as GE PSLF and Siemens PSSE already include algorithms for composite load modeling, but they require that the input information on the composite load be provided in custom formats. Preparing this data is time consuming and requires multiple manual operations; the LMDT software automates the process. The software generates composite load model data from default load composition data, motor information, and bus information, and the generated model can be stored in the .dyd format supported by the GE PSLF package or the .dyr format supported by the Siemens PSSE package.
NASA Technical Reports Server (NTRS)
1988-01-01
TherEx Inc.'s AT-1 Computerized Ataxiameter precisely evaluates the posture and balance disturbances that commonly accompany neurological and musculoskeletal disorders. The complete system includes two strain-gauged footplates, signal conditioning circuitry, a computer monitor, a printer, and a stand-alone tiltable balance platform. The AT-1 serves as an assessment tool, treatment monitor, and rehabilitation training device. It allows the clinician to document treatment outcomes quantitatively and to analyze data over time to develop outcome standards for several classifications of patients. It can specifically evaluate the effects of surgery, drug treatment, physical therapy, or prosthetic devices.
Structural Integrity of a Wind Tunnel Balance
NASA Technical Reports Server (NTRS)
Karkehabadi, R.; Rhew, R. D.
2004-01-01
The National Aeronautics and Space Administration (NASA) Langley Research Center (LaRC) has been designing strain-gage balances for use in wind tunnels since its inception, spanning a wide variety of aerodynamic tests. A force balance is an inherently critically stressed component because of the measurement sensitivity it must provide. Research and analyses are performed to investigate the structural integrity of the balances and to develop an understanding of their performance in order to enhance their capability. Maximum loading occurs when all six load components are applied simultaneously at their maximum allowed values (limit load). This circumstance rarely occurs in the wind tunnel; if it does, can the balance handle the loads with an acceptable factor of safety? LaRC Balance 1621 was modeled and meshed in PATRAN for analysis in NASTRAN. A complete analysis must consider all load cases and use a dense mesh near all edges, but computer limitations preclude a single model with a dense mesh near every edge. In the present study, the dense mesh is limited to the surface corners where the cage and axial sections meet. Four different load combinations are used, and linear analysis is performed for each load case; where the stress exceeds the linear elastic region, nonlinear analysis is required. The variables limiting the structural integrity of the balances are also investigated, and modifications to the existing balance are made and analyzed to assess whether its structural integrity can be enhanced.
ERIC Educational Resources Information Center
Mills, Allan
2014-01-01
Theory predicts that an egg-shaped body should rest in stable equilibrium when on its side, balance vertically in metastable equilibrium on its broad end and be completely unstable on its narrow end. A homogeneous solid egg made from wood, clay or plastic behaves in this way, but a real egg will not stand on either end. It is shown that this…
Multi-jagged: A scalable parallel spatial partitioning algorithm
Deveci, Mehmet; Rajamanickam, Sivasankaran; Devine, Karen D.; Catalyurek, Umit V.
2015-03-18
Geometric partitioning is fast and effective for load-balancing dynamic applications, particularly those requiring geometric locality of data (particle methods, crash simulations). We present, to our knowledge, the first parallel implementation of a multidimensional-jagged geometric partitioner. In contrast to the traditional recursive coordinate bisection algorithm (RCB), which recursively bisects subdomains perpendicular to their longest dimension until the desired number of parts is obtained, our algorithm does recursive multi-section with a given number of parts in each dimension. By computing multiple cut lines concurrently and intelligently deciding when to migrate data while computing the partition, we minimize data movement compared to efficient implementations of recursive bisection. We demonstrate the algorithm's scalability and quality relative to the RCB implementation in Zoltan on both real and synthetic datasets. Our experiments show that the proposed algorithm performs and scales better than RCB in terms of run-time without degrading the load balance. Lastly, our implementation partitions 24 billion points into 65,536 parts within a few seconds and exhibits near perfect weak scaling up to 6K cores.
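The multi-section idea can be illustrated with a minimal two-dimensional "jagged" partitioner: cut the domain into slabs along x with quantile cuts, then cut each slab independently along y. This is only a serial sketch of the geometric idea, not the parallel Zoltan implementation.

```python
def jagged_partition(points, parts_x, parts_y):
    """Two-dimensional jagged partition: slice by x into parts_x slabs using
    quantile cuts, then slice each slab independently by y. Returns a list of
    parts (lists of points). Illustrative sketch only."""
    pts = sorted(points)  # sort by x (lexicographic on (x, y) tuples)
    n = len(pts)
    slabs = [pts[i * n // parts_x:(i + 1) * n // parts_x] for i in range(parts_x)]
    parts = []
    for slab in slabs:
        slab = sorted(slab, key=lambda p: p[1])  # each slab is cut independently by y
        m = len(slab)
        parts.extend(slab[j * m // parts_y:(j + 1) * m // parts_y]
                     for j in range(parts_y))
    return parts

# Partition a 4x4 grid of points into 2x2 = 4 parts of 4 points each.
grid = [(x, y) for x in range(4) for y in range(4)]
partition = jagged_partition(grid, 2, 2)
```

Because the y-cuts differ from slab to slab, the cut lines are "jagged" rather than forming a global grid, which is what lets each dimension be balanced independently.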
Balance (or Vestibular) Rehabilitation
... Audiologic (hearing), balance, and medical diagnostic tests help ... whether you are a candidate for vestibular (balance) rehabilitation. Vestibular rehabilitation is an individualized balance retraining exercise ...
Elastomeric load sharing device
NASA Technical Reports Server (NTRS)
Isabelle, Charles J. (Inventor); Kish, Jules G. (Inventor); Stone, Robert A. (Inventor)
1992-01-01
An elastomeric load sharing device, interposed in combination between a driven gear and a central drive shaft to facilitate balanced torque distribution in split power transmission systems, includes a cylindrical elastomeric bearing and a plurality of elastomeric bearing pads. The elastomeric bearing and bearing pads comprise one or more layers, each layer including an elastomer having a metal backing strip secured thereto. The elastomeric bearing is configured to have a high radial stiffness and a low torsional stiffness and is operative to radially center the driven gear and to minimize torque transfer through the elastomeric bearing. The bearing pads are configured to have a low radial and torsional stiffness and a high axial stiffness and are operative to compressively transmit torque from the driven gear to the drive shaft. The elastomeric load sharing device has spring rates that compensate for mechanical deviations in the gear train assembly to provide balanced torque distribution between complementary load paths of split power transmission systems.
Microprocessor-Controlled Laser Balancing System
NASA Technical Reports Server (NTRS)
Demuth, R. S.
1985-01-01
Material is removed by laser action as the part is tested for balance. Directed by a microprocessor, the laser fires the appropriate number of pulses at the correct locations to remove the necessary amount of material. Operator and microprocessor software interact through a video screen and keypad; no programming skills or unprompted system-control decisions are required. The system provides complete and accurate balancing in a single load-and-spinup cycle.
NASA Technical Reports Server (NTRS)
Duval, R. W.; Bahrami, M.
1985-01-01
The Rotor Systems Research Aircraft uses load cells to isolate the rotor/transmission system from the fuselage. A mathematical model relating applied rotor loads and inertial loads of the rotor/transmission system to the load cell response is required so the load cells can be used to estimate rotor loads from flight data. Such a model is derived analytically by applying a force and moment balance to the isolated rotor/transmission system. The model is tested by comparing its estimated values of applied rotor loads with measured values obtained from a ground-based shake test. Discrepancies in the comparison are used to isolate sources of unmodeled external loads. Once the structure of the mathematical model has been validated against experimental data, its parameters must be identified. Since the parameters may vary with flight condition, it is desirable to identify them directly from the flight data. A Maximum Likelihood identification algorithm is derived for this purpose and tested using a computer simulation of load cell data. The identification is found to converge within 10 samples; this rapid convergence facilitates tracking of time-varying parameters of the load cell model in flight.
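For a linear measurement model with Gaussian noise, the Maximum Likelihood estimate coincides with least squares, so the sample-by-sample convergence described above can be illustrated with a recursive least-squares sketch. The data and model here are synthetic, not the paper's load cell model or its exact algorithm.

```python
import numpy as np

def recursive_least_squares(H_rows, z, P0=1e6):
    """Recursive least-squares estimate of x in z_k = h_k . x + noise.
    Processes one measurement at a time, mimicking sample-by-sample
    convergence of a parameter identifier (unit measurement noise assumed)."""
    n = len(H_rows[0])
    x = np.zeros(n)
    P = np.eye(n) * P0                     # large initial covariance: weak prior
    for h, zk in zip(H_rows, z):
        h = np.asarray(h, float)
        K = P @ h / (h @ P @ h + 1.0)      # gain
        x = x + K * (zk - h @ x)           # innovation update
        P = P - np.outer(K, h) @ P         # covariance update
    return x

# Synthetic noiseless data: z = H x_true with x_true = [2.0, -1.0].
rng = np.random.default_rng(0)
H = rng.normal(size=(10, 2))
x_true = np.array([2.0, -1.0])
z = H @ x_true
x_hat = recursive_least_squares(H, z)
```

With noiseless synthetic data, the estimate is essentially exact after a handful of samples, which gives a flavor of the rapid convergence reported in the abstract.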
NASA Astrophysics Data System (ADS)
Dilek, Murat
Distribution system analysis and design has developed gradually over the past three decades. Once loosely assembled and largely ad hoc, its procedures have been progressing toward being well organized. The increasing power of computers now allows the large volumes of data and other obstacles inherent to distribution system studies to be managed, and a variety of sophisticated optimization methods that were previously impractical have been developed and successfully applied to distribution systems. Among the many procedures for making decisions about the state and better operation of a distribution system, two decision support procedures are addressed in this study: phase balancing and phase prediction. The former recommends re-phasing of single- and double-phase laterals in a radial distribution system to improve circuit loss while also maintaining or improving balance at various balance-point locations. Phase balancing calculations are based on circuit loss information and current magnitudes calculated from a power flow solution, and the phase balancing algorithm is designed to handle time-varying loads when evaluating phase moves that improve circuit losses over all load points. Applied to radial distribution systems, the phase prediction algorithm attempts to predict the phases of single- and/or double-phase laterals for which the electric utility has no recorded phasing information. In doing so, it uses available customer data and kW/kVar measurements taken at various locations in the system. It is shown that phase balancing is a special case of phase prediction. Building on the phase balancing and phase prediction design studies, this work introduces the concept of integrated design, an approach for coordinating the effects of various design calculations. Integrated design considers using results of multiple design applications rather than employing a single application for a
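The flavor of phase balancing can be conveyed with a toy greedy assignment: place each single-phase lateral, largest load first, on the currently least-loaded phase. This is a stand-in for intuition only; the dissertation's algorithm optimizes circuit loss from a power flow solution over time-varying loads, which this sketch does not do.

```python
def assign_phases(lateral_loads):
    """Greedy phase assignment: each single-phase lateral (largest first)
    goes on the currently least-loaded phase. Toy illustration only."""
    phases = {"A": 0.0, "B": 0.0, "C": 0.0}
    assignment = {}
    for name, kw in sorted(lateral_loads.items(), key=lambda kv: -kv[1]):
        phase = min(phases, key=phases.get)   # least-loaded phase so far
        assignment[name] = phase
        phases[phase] += kw
    return assignment, phases

# Hypothetical lateral loads in kW.
loads = {"L1": 120.0, "L2": 80.0, "L3": 75.0, "L4": 60.0, "L5": 40.0}
assignment, totals = assign_phases(loads)
```

Even this crude heuristic keeps the per-phase totals close, which is the basic goal the loss-based formulation refines.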
NASA Technical Reports Server (NTRS)
Horne, Warren L. (Inventor); Kunz, Nans (Inventor); Luna, Phillip M. (Inventor); Roberts, Andrew C. (Inventor); Smith, Kenneth M. (Inventor); Smith, Ronald C. (Inventor)
1989-01-01
A flow-through balance is provided which includes a non-metric portion and a metric portion which form a fluid-conducting passage in fluid communication with an internal bore in the sting. The non-metric and metric portions of the balance are integrally connected together by a plurality of flexure beams such that the non-metric portion, the metric portion and the flexure beams form a one-piece construction which eliminates mechanical hysteresis between the non-metric and the metric portion. The system includes structures for preventing the effects of temperature, pressure and pressurized fluid from producing asymmetric loads on the flexure beams. A temperature sensor and a pressure sensor are located within the fluid-conducting passage of the balance. The system includes a longitudinal bellows member connected at its two ends to one of the non-metric portion and the metric portion, and at an intermediate portion thereof to the other. A plurality of strain gages are mounted on the flexure beams to measure strain forces on the flexure beams. The flexure beams are disposed so that symmetric forces on the beams cancel out and only asymmetric forces are measured as deviations by the strain gages.
Spletzer, Barry L.
1998-01-01
A load cell combines the outputs of a plurality of strain gauges to measure components of an applied load. Combination of strain gauge outputs allows measurement of any of six load components without requiring complex machining or mechanical linkages to isolate load components. An example six axis load cell produces six independent analog outputs, each directly proportional to one of the six general load components.
Spletzer, B.L.
1998-12-15
A load cell combines the outputs of a plurality of strain gauges to measure components of an applied load. Combination of strain gauge outputs allows measurement of any of six load components without requiring complex machining or mechanical linkages to isolate load components. An example six axis load cell produces six independent analog outputs, each directly proportional to one of the six general load components. 16 figs.
Spletzer, Barry L.
2001-01-01
A load cell combines the outputs of a plurality of strain gauges to measure components of an applied load. Combination of strain gauge outputs allows measurement of any of six load components without requiring complex machining or mechanical linkages to isolate load components. An example six axis load cell produces six independent analog outputs which can be combined to determine any one of the six general load components.
Grande, J A; Andújar, J M; Aroba, J; de la Torre, M L; Beltrán, R
2005-04-01
In the present work, Acid Mine Drainage (AMD) processes in the Chorrito Stream, which flows into the Cobica River (Iberian Pyrite Belt, Southwest Spain), are characterized by means of clustering techniques based on fuzzy logic. The behavior of pH in relation to precipitation is also explained, showing that the influence of rainfall on the acidity, and consequently on the metal load, of a riverbed undergoing AMD processes depends strongly on when the rainfall occurs. In general, the riverbed's dynamic behavior is the combined response to instant stimuli produced by isolated rainfall, to a seasonal memory that depends on the point in the hydrological year, and to the basin's own inertia, the result of an accumulation process caused by age-long mining activity.
Static calibration of the RSRA active-isolator rotor balance system
NASA Technical Reports Server (NTRS)
Acree, C. W., Jr.
1987-01-01
The Rotor Systems Research Aircraft (RSRA) active-isolator system is designed to reduce rotor vibrations transmitted to the airframe and to simultaneously measure all six forces and moments generated by the rotor. These loads are measured by using a combination of load cells, strain gages, and hydropneumatic active isolators with built-in pressure gages. The first static calibration of the complete active-isolator rotor balance system was performed in 1983 to verify its load-measurement capabilities. Analysis of the data included the use of multiple linear regressions to determine calibration matrices for different data sets and a hysteresis-removal algorithm to estimate in-flight measurement errors. Results showed that the active-isolator system can fulfill most performance predictions. The results also suggested several possible improvements to the system.
Structural dynamics payload loads estimates
NASA Technical Reports Server (NTRS)
Engels, R. C.
1982-01-01
Methods for the prediction of loads on large space structures are discussed. Existing approaches to the problem of loads calculation are surveyed. A full-scale version of an alternate numerical integration technique to solve the response part of a load cycle is presented, and a set of shortcut versions of the algorithm is developed. The implementation of these techniques using the software package developed is discussed.
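As a generic illustration of the response step in a loads cycle, the following sketches explicit central-difference time integration for an undamped single-degree-of-freedom oscillator. It is a textbook scheme, not the alternate algorithm the paper develops.

```python
import math

def central_difference(m, k, x0, v0, dt, steps):
    """Explicit central-difference integration of m*x'' + k*x = 0.
    Returns the displacement history [x(0), x(dt), ..., x(steps*dt)]."""
    a0 = -k * x0 / m
    x_prev = x0 - dt * v0 + 0.5 * dt * dt * a0   # fictitious step x_{-1}
    xs = [x0]
    x = x0
    for _ in range(steps):
        a = -k * x / m                            # acceleration at current step
        x_next = 2 * x - x_prev + dt * dt * a     # central-difference update
        x_prev, x = x, x_next
        xs.append(x)
    return xs

# Undamped oscillator with omega = 1 rad/s: exact solution x(t) = cos(t).
xs = central_difference(m=1.0, k=1.0, x0=1.0, v0=0.0, dt=0.01, steps=100)
```

At t = 1 s the computed displacement matches cos(1) to well within the scheme's O(dt^2) accuracy, which is the kind of check a loads-cycle response solver must pass.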
NASA Technical Reports Server (NTRS)
Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.
1989-01-01
Several techniques to perform static and dynamic load balancing techniques for vision systems are presented. These techniques are novel in the sense that they capture the computational requirements of a task by examining the data when it is produced. Furthermore, they can be applied to many vision systems because many algorithms in different systems are either the same, or have similar computational characteristics. These techniques are evaluated by applying them on a parallel implementation of the algorithms in a motion estimation system on a hypercube multiprocessor system. The motion estimation system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from different time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters. It is shown that the performance gains when these data decomposition and load balancing techniques are used are significant and the overhead of using these techniques is minimal.
Markov chain Monte Carlo method without detailed balance.
Suwa, Hidemaro; Todo, Synge
2010-09-17
We present a specific algorithm that generally satisfies the balance condition without imposing detailed balance in Markov chain Monte Carlo. In our algorithm, the average rejection rate is minimized, and even reduced to zero in many relevant cases. The absence of detailed balance also introduces a net stochastic flow in configuration space, which further boosts convergence. We demonstrate that the autocorrelation time of the Potts model becomes more than 6 times shorter than that of the conventional Metropolis algorithm. Based on the same concept, a bounce-free worm algorithm for generic quantum spin models is formulated as well.
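The distinction between global and detailed balance can be checked numerically. The 3-state chain below is doubly stochastic, so the uniform distribution is stationary (global balance holds), yet it carries a net rotational probability flow 0 -> 1 -> 2 -> 0 that violates detailed balance; it still converges to the stationary distribution. This is a generic illustration, not the Suwa-Todo construction itself.

```python
def step_distribution(dist, P):
    """One application of a row-stochastic transition matrix P to a distribution."""
    n = len(dist)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

# Cyclic chain with a strong preference to move 0 -> 1 -> 2 -> 0.
# Rows and columns each sum to 1, so the uniform pi is stationary (global
# balance), but pi_i P_ij != pi_j P_ji (e.g. P[0][1] = 0.8 vs P[1][0] = 0.1),
# so detailed balance fails.
P = [[0.1, 0.8, 0.1],
     [0.1, 0.1, 0.8],
     [0.8, 0.1, 0.1]]

dist = [1.0, 0.0, 0.0]        # start concentrated on state 0
for _ in range(200):
    dist = step_distribution(dist, P)
```

After 200 steps the distribution is uniform to high precision: global balance alone is enough for the correct stationary distribution, which is what lets the paper drop detailed balance to reduce rejections.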
BAS: balanced acceptance sampling of natural resources.
Robertson, B L; Brown, J A; McDonald, T; Jaksons, P
2013-09-01
To design an efficient survey or monitoring program for a natural resource, it is important to consider the spatial distribution of the resource. Generally, sample designs that are spatially balanced are more efficient than designs that are not. A spatially balanced design selects a sample that is evenly distributed over the extent of the resource. In this article we present a new spatially balanced design that can be used to select a sample from discrete and continuous populations in multi-dimensional space. The design, which we call balanced acceptance sampling, utilizes the Halton sequence to assure spatial diversity of selected locations. Targeted inclusion probabilities are achieved by acceptance sampling. The BAS design is conceptually simpler than competing spatially balanced designs, executes faster, and achieves better spatial balance as measured by a number of quantities. The algorithm has been programmed in an R package freely available for download.
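The Halton sequence at the heart of BAS is easy to generate. This sketch produces the 2-D sequence in bases 2 and 3, which is the spatial-spreading mechanism only; the acceptance-sampling step that achieves targeted inclusion probabilities is omitted.

```python
def halton(index, base):
    """Radical-inverse (van der Corput) value of `index` in the given base."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)   # next base-`base` digit, scaled
        index //= base
    return result

def halton_points(n, bases=(2, 3)):
    """First n points of the 2-D Halton sequence: a low-discrepancy sequence
    that spreads sample locations evenly over the unit square."""
    return [tuple(halton(i, b) for b in bases) for i in range((1), n + 1)]

pts = halton_points(4)
# pts = [(0.5, 1/3), (0.25, 2/3), (0.75, 1/9), (0.125, 4/9)]
```

Consecutive points land far from one another, which is exactly the spatial diversity property the design exploits.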
A swarm intelligence based memetic algorithm for task allocation in distributed systems
NASA Astrophysics Data System (ADS)
Sarvizadeh, Raheleh; Haghi Kashani, Mostafa
2012-01-01
This paper proposes a swarm-intelligence-based memetic algorithm for task allocation and scheduling in distributed systems. Task scheduling in distributed systems is an NP-complete problem, and many genetic algorithms have been proposed to search the entire solution space for optimal solutions. However, these existing approaches scan the entire solution space without techniques that can reduce the complexity of the optimization, so excessive scheduling time is their main shortcoming. This paper therefore uses a memetic algorithm to address that shortcoming. To balance load efficiently, Bee Colony Optimization (BCO) is applied as the local search in the proposed memetic algorithm. Extended experimental results demonstrate that the proposed method outperforms the existing GA-based method in terms of CPU utilization.
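As a minimal baseline for the load-balanced task allocation problem the paper tackles, the classic longest-processing-time-first (LPT) greedy heuristic assigns each task, largest first, to the least-loaded processor. This is a simple stand-in for comparison, not the paper's memetic/BCO algorithm.

```python
import heapq

def lpt_allocate(task_times, n_procs):
    """Longest-processing-time-first allocation: each task (largest first)
    goes to the currently least-loaded processor. Returns the per-processor
    schedule and the resulting makespan."""
    heap = [(0.0, p) for p in range(n_procs)]      # (load, processor id)
    heapq.heapify(heap)
    schedule = {p: [] for p in range(n_procs)}
    for t in sorted(task_times, reverse=True):
        load, p = heapq.heappop(heap)              # least-loaded processor
        schedule[p].append(t)
        heapq.heappush(heap, (load + t, p))
    makespan = max(load for load, _ in heap)
    return schedule, makespan

tasks = [7, 5, 4, 3, 2, 2, 1]
schedule, makespan = lpt_allocate(tasks, 3)
```

LPT is fast and provably within 4/3 of the optimal makespan; metaheuristics such as the paper's memetic algorithm aim to close the remaining gap (here LPT yields makespan 9 while the optimum is 8).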
A swarm intelligence based memetic algorithm for task allocation in distributed systems
NASA Astrophysics Data System (ADS)
Sarvizadeh, Raheleh; Haghi Kashani, Mostafa
2011-12-01
This paper proposes a swarm-intelligence-based memetic algorithm for task allocation and scheduling in distributed systems. Task scheduling in distributed systems is an NP-complete problem, and many genetic algorithms have been proposed to search the entire solution space for optimal solutions. However, these existing approaches scan the entire solution space without techniques that can reduce the complexity of the optimization, so excessive scheduling time is their main shortcoming. This paper therefore uses a memetic algorithm to address that shortcoming. To balance load efficiently, Bee Colony Optimization (BCO) is applied as the local search in the proposed memetic algorithm. Extended experimental results demonstrate that the proposed method outperforms the existing GA-based method in terms of CPU utilization.
Ocean Tide Loading Computation
NASA Technical Reports Server (NTRS)
Agnew, Duncan Carr
2005-01-01
September 15, 2003 through May 15, 2005. This grant funds the maintenance, updating, and distribution of programs for computing ocean tide loading, to enable corrections for such loading to be more widely applied in space-geodetic and gravity measurements. These programs, developed under funding from the CDP and DOSE programs, incorporate the most recent global tidal models developed from TOPEX/Poseidon data, as well as local tide models for regions around North America; the design of the algorithm and software makes it straightforward to combine local and global models.
Spectral element methods: Algorithms and architectures
NASA Technical Reports Server (NTRS)
Fischer, Paul; Ronquist, Einar M.; Dewey, Daniel; Patera, Anthony T.
1988-01-01
Spectral element methods are high-order weighted residual techniques for partial differential equations that combine the geometric flexibility of finite element methods with the rapid convergence of spectral techniques. Spectral element methods are described for the simulation of incompressible fluid flows, with special emphasis on implementation of spectral element techniques on medium-grained parallel processors. Two parallel architectures are considered: the first, a commercially available message-passing hypercube system; the second, a developmental reconfigurable architecture based on Geometry-Defining Processors. High parallel efficiency is obtained in hypercube spectral element computations, indicating that load balancing and communication issues can be successfully addressed by a high-order technique/medium-grained processor algorithm-architecture coupling.
... Balance Food and Activity What is Energy Balance? Energy is another word for "calories." Your ... adults, fewer calories are needed at older ages. Energy Balance in Real Life Think of it as ...
AUDIOLOGY Dizziness and Balance Information Series. Our balance system helps us walk, run, and move without falling. ... if I have a problem with balance or dizziness? It is important to see your doctor if ...
Automated load management for spacecraft power systems
NASA Technical Reports Server (NTRS)
Lollar, Louis F.
1987-01-01
An account is given of the results of a study undertaken by NASA's Marshall Space Flight Center to design and implement the load management techniques for autonomous spacecraft power systems, such as the Autonomously Managed Power System Test Facility. Attention is given to four load-management criteria, which encompass power bus balancing on multichannel power systems, energy balancing in such systems, power quality matching of loads to buses, and contingency load shedding/adding. Full implementation of these criteria calls for the addition of a second power channel.
Split torque transmission load sharing
NASA Technical Reports Server (NTRS)
Krantz, T. L.; Rashidi, M.; Kish, J. G.
1992-01-01
Split torque transmissions are attractive alternatives to conventional planetary designs for helicopter transmissions. The split torque designs can offer lighter weight and fewer parts but have not been used extensively for lack of experience, especially with obtaining proper load sharing. Two split torque designs that use different load sharing methods have been studied. Precise indexing and alignment of the geartrain to produce acceptable load sharing has been demonstrated. An elastomeric torque splitter that has large torsional compliance and damping produces even better load sharing while reducing dynamic transmission error and noise. However, the elastomeric torque splitter as now configured is not capable of operating over the full range of conditions of a fielded system. A thrust balancing load sharing device was evaluated. Friction forces that oppose the motion of the balance mechanism are significant. A static analysis suggests increasing the helix angle of the input pinion of the thrust balancing design. Also, dynamic analysis of this design predicts good load sharing and significant torsional response to accumulative pitch errors of the gears.
Parallel Computing Strategies for Irregular Algorithms
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Oliker, Leonid; Shan, Hongzhang; Biegel, Bryan (Technical Monitor)
2002-01-01
Parallel computing promises several orders of magnitude increase in our ability to solve realistic computationally-intensive problems, but relies on their efficient mapping and execution on large-scale multiprocessor architectures. Unfortunately, many important applications are irregular and dynamic in nature, making their effective parallel implementation a daunting task. Moreover, with the proliferation of parallel architectures and programming paradigms, the typical scientist is faced with a plethora of questions that must be answered in order to obtain an acceptable parallel implementation of the solution algorithm. In this paper, we consider three representative irregular applications: unstructured remeshing, sparse matrix computations, and N-body problems, and parallelize them using various popular programming paradigms on a wide spectrum of computer platforms ranging from state-of-the-art supercomputers to PC clusters. We present the underlying problems, the solution algorithms, and the parallel implementation strategies. Smart load-balancing, partitioning, and ordering techniques are used to enhance parallel performance. Overall results demonstrate the complexity of efficiently parallelizing irregular algorithms.
Partial storage optimization and load control strategy of cloud data centers.
Al Nuaimi, Klaithem; Mohamed, Nader; Al Nuaimi, Mariam; Al-Jaroodi, Jameela
2015-01-01
We present a novel approach to solve the cloud storage issues and provide a fast load balancing algorithm. Our approach is based on partitioning and concurrent dual-direction download of files from multiple cloud nodes. Partitions of the files, rather than the full files, are saved on the cloud, which provides a good optimization of cloud storage usage. Only partial replication is used in this algorithm to ensure the reliability and availability of the data. Our focus is to improve performance and optimize storage usage by providing Data as a Service (DaaS) on the cloud. This algorithm solves the problem of having to fully replicate large data sets, which uses up a lot of precious space on the cloud nodes. Reducing the space needed will help in reducing the cost of providing such space. Moreover, performance is also increased, since multiple cloud servers will collaborate to provide the data to the cloud clients faster.
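The dual-direction idea, two workers fetching partitions of the same file from opposite ends until they meet, can be sketched as a small planning function. The function and its partition numbering are illustrative assumptions, not the paper's actual protocol.

```python
def dual_direction_plan(n_parts):
    """Order in which two workers fetch partitions of one file:
    one worker front-to-back, the other back-to-front, meeting in the middle."""
    front, back = 0, n_parts - 1
    plan = {"forward": [], "backward": []}
    while front <= back:
        plan["forward"].append(front)
        front += 1
        if front <= back:
            plan["backward"].append(back)
            back -= 1
    return plan
```

For a 5-partition file the forward worker fetches partitions 0, 1, 2 while the backward worker fetches 4, 3; together they cover the file with no overlap, so the slower worker never blocks the faster one.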
Automatic load sharing in inverter modules
NASA Technical Reports Server (NTRS)
Nagano, S.
1979-01-01
Active feedback loads transistors equally with little power loss. The circuit is suitable for balancing modular inverters in spacecraft, computer power supplies, solar-electric power generators, and electric vehicles. The current-balancing circuit senses differences between the collector current of each power transistor and the average of the load currents for all power transistors. The principle is effective not only in fixed duty-cycle inverters but also in converters operating at variable duty cycles.
Parallel global optimization with the particle swarm algorithm.
Schutte, J F; Reinbolt, J A; Fregly, B J; Haftka, R T; George, A D
2004-12-07
Present-day engineering optimization problems often impose large computational demands, resulting in long solution times even on a modern high-end processor. To obtain enhanced computational throughput and global search capability, we detail the coarse-grained parallelization of an increasingly popular global search method, the particle swarm optimization (PSO) algorithm. Parallel PSO performance was evaluated using two categories of optimization problems possessing multiple local minima: large-scale analytical test problems with computationally cheap function evaluations, and medium-scale biomechanical system identification problems with computationally expensive function evaluations. For load-balanced analytical test problems formulated using 128 design variables, speedup was close to ideal and parallel efficiency above 95% for up to 32 nodes on a Beowulf cluster. In contrast, for load-imbalanced biomechanical system identification problems with 12 design variables, speedup plateaued and parallel efficiency decreased almost linearly with increasing number of nodes. The primary factor affecting parallel performance was the synchronization requirement of the parallel algorithm, which dictated that each iteration must wait for completion of the slowest fitness evaluation. When the analytical problems were solved using a fixed number of swarm iterations, a single population of 128 particles produced a better convergence rate than did multiple independent runs performed using sub-populations (8 runs with 16 particles, 4 runs with 32 particles, or 2 runs with 64 particles). These results suggest that (1) parallel PSO exhibits excellent parallel performance under load-balanced conditions, (2) an asynchronous implementation would be valuable for real-life problems subject to load imbalance, and (3) larger population sizes should be considered when multiple processors are available.
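The synchronous PSO iteration whose barrier the authors identify as the bottleneck can be sketched serially; the comment marks where a parallel version must wait for the slowest fitness evaluation. The parameter values (w, c1, c2) are conventional choices assumed here, not taken from the paper.

```python
import random

def pso_minimize(f, dim, n_particles=30, iters=200, lo=-5.0, hi=5.0, seed=1):
    """Minimal synchronous particle swarm optimization of f over [lo, hi]^dim."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5                     # inertia and attraction weights
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [list(x) for x in X]                      # personal best positions
    pf = [f(x) for x in X]                        # personal best values
    gf = min(pf)
    g = list(P[pf.index(gf)])                     # global best position
    for _ in range(iters):
        # Synchronization point: in a parallel version, all fitness
        # evaluations below must finish before the global best is updated,
        # so every iteration waits for the slowest evaluation.
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pf[i]:
                pf[i], P[i] = fx, list(X[i])
                if fx < gf:
                    gf, g = fx, list(X[i])
    return g, gf
```

Because the global best is shared every iteration, evaluation cost variance translates directly into idle time, which is the load-imbalance effect the abstract measures.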
ERIC Educational Resources Information Center
White, Richard
2007-01-01
The review by Black and Wiliam of national systems makes clear the complexity of assessment, and identifies important issues. One of these is "balance": balance between local and central responsibilities, balance between the weights given to various purposes of schooling, balance between weights for various functions of assessment, and balance…
ERIC Educational Resources Information Center
Claxton, David B.; Troy, Maridy; Dupree, Sarah
2006-01-01
Most authorities consider balance to be a component of skill-related physical fitness. Balance, however, is directly related to health, especially for older adults. Falls are a leading cause of injury and death among the elderly. Improved balance can help reduce falls and contribute to older people remaining physically active. Balance is a…
Dynamic balance improvement program
NASA Technical Reports Server (NTRS)
Butner, M. F.
1983-01-01
The reduction of residual unbalance in the space shuttle main engine (SSME) high pressure turbopump rotors was addressed. Elastic rotor response to unbalance and balancing requirements, multiplane and in housing balancing, and balance related rotor design considerations were assessed. Recommendations are made for near term improvement of the SSME balancing and for future study and development efforts.
2-opt heuristic for the disassembly line balancing problem
NASA Astrophysics Data System (ADS)
McGovern, Seamus M.; Gupta, Surendra M.
2004-02-01
Disassembly activities are an important part of product recovery operations. The disassembly line is the best choice for automated disassembly of returned products. However, finding the optimal balance for a disassembly line is computationally intensive, with exhaustive search quickly becoming prohibitively expensive. In this paper, a greedy algorithm is presented for obtaining optimal or near-optimal solutions to the disassembly line-balancing problem. The greedy algorithm is a first-fit-decreasing algorithm further enhanced to preserve precedence relationships. The algorithm seeks to minimize the number of workstations while addressing hazardous and high-demand components. A 2-opt algorithm is then developed to balance the part removal sequence and attempt to further reduce the total number of workstations. Examples are considered to illustrate the methodology. The conclusions drawn from the study include the consistent generation of optimal or near-optimal solutions, the ability to preserve precedence, the speed of the algorithms, and their practicality due to ease of implementation.
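The first-fit-decreasing core of the approach can be sketched as plain bin packing against a workstation cycle time; the precedence preservation and the subsequent 2-opt pass described in the abstract are omitted, and the cycle-time limit is an assumed input.

```python
def ffd_stations(task_times, cycle_time):
    """First-fit-decreasing assignment of task times to workstations.
    Each workstation's total time may not exceed cycle_time.
    (Precedence constraints, handled in the paper's enhanced version, are ignored.)"""
    stations = []                      # each station is a list of task times
    for t in sorted(task_times, reverse=True):
        for s in stations:             # first fit: earliest station with room
            if sum(s) + t <= cycle_time:
                s.append(t)
                break
        else:                          # no station had room: open a new one
            stations.append([t])
    return stations
```

Sorting in decreasing order places the hardest-to-fit tasks first, which is why FFD typically needs few extra workstations beyond the lower bound of total time divided by cycle time.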
Coupled cluster algorithms for networks of shared memory parallel processors
NASA Astrophysics Data System (ADS)
Bentz, Jonathan L.; Olson, Ryan M.; Gordon, Mark S.; Schmidt, Michael W.; Kendall, Ricky A.
2007-05-01
As the popularity of using SMP systems as the building blocks for high-performance supercomputers increases, so too increases the need for applications that can utilize the multiple levels of parallelism available in clusters of SMPs. This paper presents a dual-layer distributed algorithm, using both shared-memory and distributed-memory techniques, to parallelize a very important algorithm (often called the "gold standard") used in computational chemistry: the single and double excitation coupled cluster method with perturbative triples, i.e. CCSD(T). The algorithm is presented within the framework of the GAMESS (General Atomic and Molecular Electronic Structure System) program suite [M.W. Schmidt, K.K. Baldridge, J.A. Boatz, S.T. Elbert, M.S. Gordon, J.J. Jensen, S. Koseki, N. Matsunaga, K.A. Nguyen, S. Su, T.L. Windus, M. Dupuis, J.A. Montgomery, General atomic and molecular electronic structure system, J. Comput. Chem. 14 (1993) 1347-1363] and the Distributed Data Interface (DDI) [M.W. Schmidt, G.D. Fletcher, B.M. Bode, M.S. Gordon, The distributed data interface in GAMESS, Comput. Phys. Comm. 128 (2000) 190]; however, the essential features of the algorithm (data distribution, load balancing, and communication overhead) can be applied to more general computational problems. Timing and performance data for our dual-level algorithm are presented on several large-scale clusters of SMPs.
NASA Technical Reports Server (NTRS)
Simkovich, A.; Baumann, Robert C.
1961-01-01
The Vanguard satellites and component parts were balanced within the specified limits by using a Gisholt Type-S balancer in combination with a portable International Research and Development vibration analyzer and filter, with low-frequency pickups. Equipment and procedures used for balancing are described; and the determination of residual imbalance is accomplished by two methods: calculation, and graphical interpretation. Between-the-bearings balancing is recommended for future balancing of payloads.
Model-based Assessment for Balancing Privacy Requirements and Operational Capabilities
Knirsch, Fabian; Engel, Dominik; Frincu, Marc; Prasanna, Viktor
2015-02-17
The smart grid changes the way energy is produced and distributed. In addition, both energy and information are exchanged bidirectionally among participating parties. Heterogeneous systems therefore have to cooperate effectively in order to achieve a common high-level use case, such as smart metering for billing or demand response for load curtailment. Furthermore, a substantial amount of personal data is often needed to achieve that goal. Capturing and processing personal data in the smart grid increases customer concerns about privacy, and in addition, certain statutory and operational requirements regarding privacy-aware data processing and storage have to be met. An increase in privacy constraints, however, often limits the operational capabilities of the system. In this paper, we present an approach that automates the process of finding an optimal balance between privacy requirements and operational requirements in a smart grid use case and application scenario. This is achieved by formally describing use cases in an abstract model and by finding an algorithm that determines the optimum balance by forward mapping privacy and operational impacts. For this optimal balancing algorithm, both a numeric approximation and, where feasible, an analytic assessment are presented and investigated. The system is evaluated by applying the tool to a real-world use case from the University of Southern California (USC) microgrid.
NASA Astrophysics Data System (ADS)
Leduhovsky, G. V.; Zhukov, V. P.; Barochkin, E. V.; Zimin, A. P.; Razinkov, A. A.
2015-08-01
The problem of striking material and energy balances from the data received by thermal power plant computerized automation systems from the technical accounting systems with the accuracy determined by the metrological characteristics of serviceable calibrated instruments is formulated using the mathematical apparatus of ridge regression method. A graph theory based matrix model of material and energy flows in systems having an intricate structure is proposed, using which it is possible to formalize the solution of a particular practical problem at the stage of constructing the system model. The problem of striking material and energy balances is formulated taking into account different degrees of trustworthiness with which the initial flow rates of coolants and their thermophysical parameters were determined, as well as process constraints expressed in terms of balance correlations on mass and energy for individual system nodes or for any combination thereof. Analytic and numerical solutions of the problem are proposed in different versions of its statement differing from each other in the adopted assumptions and considered constraints. It is shown how the procedure for striking material and energy balances from the results of measuring the flows of feed water and steam in the thermal process circuit of a combined heat and power plant affects the calculation accuracy of specific fuel rates for supplying heat and electricity. It has been revealed that the nominal values of indicators and the fuel saving or overexpenditure values associated with these indicators are the most dependent parameters. In calculating these quantities using different balance striking procedures, an error may arise the value of which is commensurable with the power plant thermal efficiency margin stipulated by the regulatory-technical documents on using fuel. The study results were used for substantiating the choice of stating the problem of striking material and fuel balances, as well as
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Gust loads. 23.425 Section 23.425 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION AIRCRAFT AIRWORTHINESS... Balancing Surfaces § 23.425 Gust loads. (a) Each horizontal surface, other than a main wing, must...
Code of Federal Regulations, 2011 CFR
2011-01-01
... Balancing Surfaces § 23.425 Gust loads. (a) Each horizontal surface, other than a main wing, must be designed for loads resulting from— (1) Gust velocities specified in § 23.333(c) with flaps retracted; and... unaccelerated flight at the pertinent design speeds VF, VC, and VD must first be determined. The...
Weight/balance portable test equipment
Whitlock, R.W.
1994-11-03
This document shows the general layout of, and gives part descriptions for, the weight/balance test equipment. This equipment will aid in regulating the leachate loading of tanker trucks; the leachate originates from lined trenches. The report contains four drawings with part specifications.
Computer Applications in Balancing Chemical Equations.
ERIC Educational Resources Information Center
Kumar, David D.
2001-01-01
Discusses computer-based approaches to balancing chemical equations. Surveys 13 methods, 6 based on matrix, 2 interactive programs, 1 stand-alone system, 1 developed in algorithm in Basic, 1 based on design engineering, 1 written in HyperCard, and 1 prepared for the World Wide Web. (Contains 17 references.) (Author/YDS)
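Of the computer-based approaches surveyed, the simplest to sketch is a brute-force coefficient search rather than the matrix methods; species are represented here as element-count dictionaries, an encoding assumed purely for illustration.

```python
from itertools import product

def balance(reactants, products, max_coef=6):
    """Brute-force search for small integer coefficients that balance
    every element across reactants and products.
    Each species is a dict mapping element symbol to atom count."""
    species = reactants + products
    elements = sorted({e for sp in species for e in sp})
    n_r = len(reactants)
    for coefs in product(range(1, max_coef + 1), repeat=len(species)):
        balanced = True
        for e in elements:
            left = sum(c * sp.get(e, 0) for c, sp in zip(coefs[:n_r], reactants))
            right = sum(c * sp.get(e, 0) for c, sp in zip(coefs[n_r:], products))
            if left != right:
                balanced = False
                break
        if balanced:
            return coefs
    return None
```

For CH4 + O2 -> CO2 + H2O the search returns coefficients (1, 2, 1, 2). The matrix methods the article surveys scale better, since they solve for the nullspace of the element-by-species matrix instead of enumerating coefficient tuples.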
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
Boman, Erik G.; Catalyurek, Umit V.; Chevalier, Cedric; Devine, Karen D.; Gebremedhin, Assefaw H.; Hovland, Paul D.; Pothen, Alex; Rajamanickam, Sivasankaran; Safro, Ilya; Wolf, Michael M.; Zhou, Min
2015-01-16
This final progress report summarizes the work accomplished at the Combinatorial Scientific Computing and Petascale Simulations Institute. We developed Zoltan, a parallel mesh partitioning library that made use of accurate hypergraph models to provide load balancing in mesh-based computations. We developed several graph coloring algorithms for computing Jacobian and Hessian matrices and organized them into a software package called ColPack. We developed parallel algorithms for graph coloring and graph matching problems, and also designed multi-scale graph algorithms. Three PhD students graduated, six more are continuing their PhD studies, and four postdoctoral scholars were advised. Six of these students and Fellows have joined DOE Labs (Sandia, Berkeley), as staff scientists or as postdoctoral scientists. We also organized the SIAM Workshop on Combinatorial Scientific Computing (CSC) in 2007, 2009, and 2011 to continue to foster the CSC community.
SSME alternate turbopump (pump section) axial load analysis
NASA Technical Reports Server (NTRS)
Crease, G. A.; Rosello, A., Jr.; Fetfatsidis, A. K.
1989-01-01
A flow balancing computer program constructed to calculate the axial loads on the pump sections of the Space Shuttle Main Engine (SSME) alternate turbopumps (ATs) is described. The loads are used in turn to determine load-balancing piston design requirements. The application of the program to the inlet section, inducer/impeller/stage, bearings, seals, labyrinth, damper, piston, face and corner, and stationary/rotating surfaces is indicated. Design analysis results are reported which show that the balancing piston designs are adequate and that performance and life will not be degraded by the turbopumps' axial load characteristics.
Inducer Hydrodynamic Load Measurement Devices
NASA Technical Reports Server (NTRS)
Skelley, Stephen E.; Zoladz, Thomas F.
2002-01-01
Marshall Space Flight Center (MSFC) has demonstrated two measurement devices for sensing and resolving the hydrodynamic loads on fluid machinery. The first - a derivative of the six component wind tunnel balance - senses the forces and moments on the rotating device through a weakened shaft section instrumented with a series of strain gauges. This "rotating balance" was designed to directly measure the steady and unsteady hydrodynamic loads on an inducer, thereby defining both the amplitude and frequency content associated with operating in various cavitation modes. The second device - a high frequency response pressure transducer surface mounted on a rotating component - was merely an extension of existing technology for application in water. MSFC has recently completed experimental evaluations of both the rotating balance and surface-mount transducers in a water test loop. The measurement bandwidth of the rotating balance was severely limited by the relative flexibility of the device itself, resulting in an unexpectedly low structural bending mode and invalidating the higher frequency response data. Despite these limitations, measurements confirmed that the integrated loads on the four-bladed inducer respond to both cavitation intensity and cavitation phenomena. Likewise, the surface-mount pressure transducers were subjected to a range of temperatures and flow conditions in a non-rotating environment to record bias shifts and transfer functions between the transducers and a reference device. The pressure transducer static performance was within manufacturer's specifications and dynamic response accurately followed that of the reference.
Inducer Hydrodynamic Load Measurement Devices
NASA Technical Reports Server (NTRS)
Skelley, Stephen E.; Zoladz, Thomas F.; Turner, Jim (Technical Monitor)
2002-01-01
Marshall Space Flight Center (MSFC) has demonstrated two measurement devices for sensing and resolving the hydrodynamic loads on fluid machinery. The first - a derivative of the six-component wind tunnel balance - senses the forces and moments on the rotating device through a weakened shaft section instrumented with a series of strain gauges. This rotating balance was designed to directly measure the steady and unsteady hydrodynamic loads on an inducer, thereby defining both the amplitude and frequency content associated with operating in various cavitation modes. The second device - a high frequency response pressure transducer surface mounted on a rotating component - was merely an extension of existing technology for application in water. MSFC has recently completed experimental evaluations of both the rotating balance and surface-mount transducers in a water test loop. The measurement bandwidth of the rotating balance was severely limited by the relative flexibility of the device itself, resulting in an unexpectedly low structural bending mode and invalidating the higher-frequency response data. Despite these limitations, measurements confirmed that the integrated loads on the four-bladed inducer respond to both cavitation intensity and cavitation phenomena. Likewise, the surface-mount pressure transducers were subjected to a range of temperatures and flow conditions in a non-rotating environment to record bias shifts and transfer functions between the transducers and a reference device. The pressure transducer static performance was within manufacturer's specifications and dynamic response accurately followed that of the reference.
Occlusal cranial balancing technique.
Smith, Gerald H
2007-01-01
The acronym for Occlusal Cranial Balancing Technique is OCB. The OCB concept is based on the architectural principle of a level foundation. The principles of Occlusal Cranial Balancing are a monumental discovery and if applied will enhance total body function.
NASA Technical Reports Server (NTRS)
Leonard, J. I.
1977-01-01
The water balance of the Skylab crew was analyzed. Evaporative water loss was estimated using a whole-body input/output balance equation, and water, body-tissue, and energy balances were analyzed. The approach utilizes the results of several major Skylab medical experiments. Subsystems were designed for the software necessary for the analysis. A partitional water balance that graphically depicts the changes due to water intake is presented. The energy balance analysis determines the net energy available to an individual crewman during any period. The balances produce a visual description of the total change in a particular body component during the course of the mission. The information is salvaged from metabolic balance data if certain techniques are used to reduce errors inherent in the balance method.
Polarization-balanced beamsplitter
Decker, Derek E.
1998-01-01
A beamsplitter assembly that includes several beamsplitter cubes arranged to define a plurality of polarization-balanced light paths. Each polarization-balanced light path contains one or more balanced pairs of light paths, where each balanced pair of light paths includes either two transmission light paths with orthogonal polarization effects or two reflection light paths with orthogonal polarization effects. The orthogonal pairing of said transmission and reflection light paths cancels polarization effects otherwise caused by beamsplitting.
Polarization-balanced beamsplitter
Decker, D.E.
1998-02-17
A beamsplitter assembly is disclosed that includes several beamsplitter cubes arranged to define a plurality of polarization-balanced light paths. Each polarization-balanced light path contains one or more balanced pairs of light paths, where each balanced pair of light paths includes either two transmission light paths with orthogonal polarization effects or two reflection light paths with orthogonal polarization effects. The orthogonal pairing of said transmission and reflection light paths cancels polarization effects otherwise caused by beamsplitting. 10 figs.
1994 Pacific Northwest Loads and Resources Study.
United States. Bonneville Power Administration.
1994-12-01
The 1994 Pacific Northwest Loads and Resources Study presented herein establishes a picture of how the agency is positioned today in its loads and resources balance. It is a snapshot of expected resource operation, contractual obligations, and rights. This study does not attempt to present or analyze future conservation or generation resource scenarios. What it does provide are base case assumptions from which scenarios encompassing a wide range of uncertainties about BPA's future may be evaluated. The Loads and Resources Study is presented in two documents: (1) this summary of Federal system and Pacific Northwest region loads and resources and (2) a technical appendix detailing the loads and resources for each major Pacific Northwest generating utility. This analysis updates the 1993 Pacific Northwest Loads and Resources Study, published in December 1993. In this loads and resources study, resource availability is compared with a range of forecasted electricity consumption. The Federal system and regional analyses for the medium load forecast are presented.
NASA Technical Reports Server (NTRS)
Warner, Edward P; Norton, F H
1920-01-01
Report embodies a description of the balance designed and constructed for the use of the National Advisory Committee for Aeronautics at Langley Field, and also deals with the theory of sensitivity of balances and with the errors to which wind tunnel balances of various types are subject.
ERIC Educational Resources Information Center
Larson, Bonnie
2001-01-01
Discusses coaching for balance, the integration of the whole self: physical (body), intellectual (mind), spiritual (soul), and emotional (heart). Offers four ways to identify problems and tell whether someone is out of balance, and four coaching techniques for creating balance. (Contains 11 references.) (JOW)
... and vision problems, and difficulty with concentration and memory. What is balance? Balance is the ability to maintain the body’s center of mass over its base of support. A properly functioning balance system allows humans to see clearly while moving, identify orientation with ...
O'Brien, M. J.; Brantley, P. S.
2015-01-20
In order to run Monte Carlo particle transport calculations on new supercomputers with hundreds of thousands or millions of processors, care must be taken to implement scalable algorithms. This means that the algorithms must continue to perform well as the processor count increases. In this paper, we examine the scalability of: (1) globally resolving the particle locations on the correct processor, (2) deciding that particle streaming communication has finished, and (3) efficiently coupling neighbor domains together with different replication levels. We have run domain-decomposed Monte Carlo particle transport on up to 2^{21} = 2,097,152 MPI processes on the IBM BG/Q Sequoia supercomputer and observed scalable results that agree with our theoretical predictions. These calculations were carefully constructed to have the same amount of work on every processor, i.e., the calculation is already load balanced. We also examine load-imbalanced calculations where each domain's replication level is proportional to its particle workload. In this case we show how to efficiently couple adjacent domains to maintain load balance within workgroups and minimize memory usage.
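The workload-proportional replication described in the last sentences can be sketched as a simple apportionment: each domain receives processors in proportion to its particle count, with rounding drift absorbed by the largest allocation. This is a hedged sketch of the general idea, not the authors' scheme.

```python
def replication_levels(workloads, total_procs):
    """Assign each domain a processor count proportional to its particle
    workload (at least 1 each), so heavily loaded domains get more replicas."""
    total = sum(workloads)
    levels = [max(1, round(total_procs * w / total)) for w in workloads]
    # Absorb any rounding drift into the largest allocation so the
    # levels sum exactly to the available processor count.
    i = levels.index(max(levels))
    levels[i] += total_procs - sum(levels)
    return levels
```

A domain carrying 60% of the particles then receives roughly 60% of the processors, which is what keeps per-processor work, and hence iteration time, roughly equal across workgroups.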
Reconceptualizing balance: attributes associated with balance performance.
Thomas, Julia C; Odonkor, Charles; Griffith, Laura; Holt, Nicole; Percac-Lima, Sanja; Leveille, Suzanne; Ni, Pensheng; Latham, Nancy K; Jette, Alan M; Bean, Jonathan F
2014-09-01
Balance tests are commonly used to screen for impairments that put older adults at risk for falls. The purpose of this study was to determine the attributes that were associated with balance performance as measured by the Frailty and Injuries: Cooperative Studies of Intervention Techniques (FICSIT) balance test. This study was a cross-sectional secondary analysis of baseline data from a longitudinal cohort study, the Boston Rehabilitative Impairment Study of the Elderly (Boston RISE). Boston RISE was performed in an outpatient rehabilitation research center and evaluated Boston area primary care patients aged 65 to 96 (N=364) with self-reported difficulty or task-modification climbing a flight of stairs or walking 1/2 of a mile. The outcome measure was standing balance as measured by the FICSIT-4 balance assessment. Other measures included: self-efficacy, pain, depression, executive function, vision, sensory loss, reaction time, kyphosis, leg range of motion, trunk extensor muscle endurance, leg strength and leg velocity at peak power. Participants were 67% female, had an average age of 76.5 (±7.0) years, an average of 4.1 (±2.0) chronic conditions, and an average FICSIT-4 score of 6.7 (±2.2) out of 9. After adjusting for age and gender, attributes significantly associated with balance performance were falls self-efficacy, trunk extensor muscle endurance, sensory loss, and leg velocity at peak power. FICSIT-4 balance performance is associated with a number of behavioral and physiologic attributes, many of which are amenable to rehabilitative treatment. Our findings support a consideration of balance as multidimensional activity as proposed by the current International Classification of Functioning, Disability, and Health (ICF) model.
Ohlinger, L.A.
1958-10-01
A device is presented for loading or charging bodies of fissionable material into a reactor. This device consists of a car, mounted on tracks, into which the fissionable materials may be placed at a remote area, transported to the reactor, and inserted without danger to the operating personnel. The car has mounted on it a heavily shielded magazine for holding a number of the radioactive bodies. The magazine is of a U-shaped configuration and is inclined to the horizontal plane, with a cap covering the elevated open end, and a remotely operated plunger at the lower, closed end. After the fissionable bodies are loaded in the magazine and transported to the reactor, the plunger inserts the body at the lower end of the magazine into the reactor, then is withdrawn, thereby allowing gravity to roll the remaining bodies into position for successive loading in a similar manner.
Concurrent algorithms for a mobile robot vision system
Jones, J.P.; Mann, R.C.
1988-01-01
The application of computer vision to mobile robots has generally been hampered by insufficient on-board computing power. The advent of VLSI-based general-purpose concurrent multiprocessor systems promises to give mobile robots an increasing amount of on-board computing capability and to allow computation-intensive data analysis to be performed without high-bandwidth communication with a remote system. This paper describes the integration of robot vision algorithms on a 3-dimensional hypercube system on board a mobile robot developed at Oak Ridge National Laboratory. The vision system is interfaced to navigation and robot control software, enabling the robot to maneuver in a laboratory environment, to find a known object of interest, and to recognize the object's status based on visual sensing. We first present the robot system architecture and the principles followed in the vision system implementation. We then provide some benchmark timings for low-level image processing routines, describe a concurrent algorithm with load balancing for the Hough transform, a new algorithm for binary component labeling, and an algorithm for the concurrent extraction of region features from labeled images. This system analyzes a scene in less than 5 seconds and has proven to be a valuable experimental tool for research in mobile autonomous robots. 9 refs., 1 fig., 3 tabs.
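The load-balanced concurrent Hough transform described above can be sketched as follows. This is a minimal serial simulation, not the paper's hypercube implementation: edge pixels are split into equal-sized chunks (one per simulated node, since the per-pixel voting cost is uniform), each chunk votes into a private accumulator, and the accumulators are merged. Bin counts and the test image are illustrative.

```python
import math
from collections import Counter

def hough_accumulate(points, n_theta=180, rho_res=1.0):
    """Vote in (theta, rho) space for each edge pixel: rho = x*cos(t) + y*sin(t)."""
    acc = Counter()
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round((x * math.cos(theta) + y * math.sin(theta)) / rho_res)
            acc[(t, rho)] += 1
    return acc

def balanced_chunks(points, n_workers):
    """Split the edge pixels evenly: equal chunk sizes give each
    (simulated) hypercube node the same voting load."""
    base, extra = divmod(len(points), n_workers)
    chunks, start = [], 0
    for i in range(n_workers):
        size = base + (1 if i < extra else 0)
        chunks.append(points[start:start + size])
        start += size
    return chunks

# A vertical line x = 5: every pixel votes for (t=0, rho=5).
line = [(5, y) for y in range(50)]
partials = [hough_accumulate(chunk) for chunk in balanced_chunks(line, 4)]
total = Counter()
for partial in partials:
    total.update(partial)          # merge the per-node accumulators
peak = max(total, key=total.get)   # dominant (theta-bin, rho) cell
```

Merging per-node accumulators by summation is what makes the parallel scheme correct: the Hough vote is a pure reduction over pixels.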
A Parallel Ghosting Algorithm for The Flexible Distributed Mesh Database
Mubarak, Misbah; Seol, Seegyoung; Lu, Qiukai; ...
2013-01-01
Critical to the scalability of parallel adaptive simulations are parallel control functions including load balancing, reduced inter-process communication and optimal data decomposition. In distributed meshes, many mesh-based applications frequently access neighborhood information for computational purposes which must be transmitted efficiently to avoid parallel performance degradation when the neighbors are on different processors. This article presents a parallel algorithm of creating and deleting data copies, referred to as ghost copies, which localize neighborhood data for computation purposes while minimizing inter-process communication. The key characteristics of the algorithm are: (1) It can create ghost copies of any permissible topological order in a 1D, 2D or 3D mesh based on selected adjacencies. (2) It exploits neighborhood communication patterns during the ghost creation process thus eliminating all-to-all communication. (3) For applications that need neighbors of neighbors, the algorithm can create n number of ghost layers up to a point where the whole partitioned mesh can be ghosted. Strong and weak scaling results are presented for the IBM BG/P and Cray XE6 architectures up to a core count of 32,768 processors. The algorithm also leads to scalable results when used in a parallel super-convergent patch recovery error estimator, an application that frequently accesses neighborhood data to carry out computation.
Parallelization of Nullspace Algorithm for the computation of metabolic pathways.
Jevremović, Dimitrije; Trinh, Cong T; Srienc, Friedrich; Sosa, Carlos P; Boley, Daniel
2011-06-01
Elementary mode analysis is a useful metabolic pathway analysis tool for understanding and analyzing cellular metabolism, since elementary modes can represent metabolic pathways with unique and minimal sets of enzyme-catalyzed reactions of a metabolic network under steady-state conditions. However, computation of the elementary modes of a genome-scale metabolic network with 100-1000 reactions is very expensive and sometimes not feasible with the commonly used serial Nullspace Algorithm. In this work, we develop a distributed-memory parallelization of the Nullspace Algorithm to handle efficiently the computation of the elementary modes of a large metabolic network. We give an implementation in C++ with the support of MPI library functions for the parallel communication. Our proposed algorithm is accompanied by an analysis of the complexity and identification of major bottlenecks during computation of all possible pathways of a large metabolic network. The algorithm includes methods to achieve load balancing among the compute nodes and specific communication patterns to reduce the communication overhead and improve efficiency.
Makarov, Yuri V.; Huang, Zhenyu; Etingov, Pavel V.; Ma, Jian; Guttromson, Ross T.; Subbarao, Krishnappa; Chakrabarti, Bhujanga B.
2010-01-01
unique features make this work a significant step forward toward the objective of incorporating wind, solar, load, and other uncertainties into power system operations. Currently, uncertainties associated with wind and load forecasts, as well as uncertainties associated with random generator outages and unexpected disconnection of supply lines, are not taken into account in power grid operation. Thus, operators have little means to weigh the likelihood and magnitude of upcoming events of power imbalance. In this project, funded by the U.S. Department of Energy (DOE), a framework has been developed for incorporating uncertainties associated with wind and load forecast errors, unpredicted ramps, and forced generation disconnections into the energy management system (EMS) as well as generation dispatch and commitment applications. A new approach to evaluate the uncertainty ranges for the required generation performance envelope including balancing capacity, ramping capability, and ramp duration has been proposed. The approach includes three stages: forecast and actual data acquisition, statistical analysis of retrospective information, and prediction of future grid balancing requirements for specified time horizons and confidence levels. Assessment of the capacity and ramping requirements is performed using a specially developed probabilistic algorithm based on a histogram analysis, incorporating all sources of uncertainties of both continuous (wind and load forecast errors) and discrete (forced generator outages and start-up failures) nature. A new method called the “flying brick” technique has been developed to evaluate the look-ahead required generation performance envelope for the worst case scenario within a user-specified confidence level. A self-validation algorithm has been developed to validate the accuracy of the confidence intervals.
ERIC Educational Resources Information Center
Csernus, Marilyn
Carbohydrate loading is a frequently used technique to improve performance by altering an athlete's diet. The objective is to increase glycogen stored in muscles for use in prolonged strenuous exercise. For two to three days, the athlete consumes a diet that is low in carbohydrates and high in fat and protein while continuing to exercise and…
NASA Technical Reports Server (NTRS)
Abrams, D.; Williams, C.
1999-01-01
This thesis describes several new quantum algorithms. These include a polynomial-time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases for which all known classical algorithms require exponential time.
Loading relativistic Maxwell distributions in particle simulations
Zenitani, Seiji
2015-04-15
Numerical algorithms to load relativistic Maxwell distributions in particle-in-cell (PIC) and Monte Carlo simulations are presented. For a stationary relativistic Maxwellian, the inverse transform method and the Sobol algorithm are reviewed. To boost particles to obtain a relativistic shifted Maxwellian, two rejection methods are proposed in a physically transparent manner. Their acceptance efficiencies are ≈50% for generic cases and 100% for symmetric distributions. They can be combined with arbitrary base algorithms.
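The reviewed Sobol algorithm for a stationary Maxwell-Jüttner distribution can be sketched as below: draw the momentum magnitude u = γβ from u = -θ ln(x₁x₂x₃) and accept when η² - u² > 1 with η = -θ ln(x₁x₂x₃x₄), then scatter the accepted magnitude isotropically. Function names and the temperature value are illustrative.

```python
import math
import random

def sobol_maxwell_juttner(theta, rng=random.random):
    """Draw u = gamma*beta from a stationary Maxwell-Juttner distribution
    of temperature theta = kT/mc^2 via the Sobol rejection algorithm."""
    while True:
        x1, x2, x3, x4 = rng(), rng(), rng(), rng()
        u = -theta * math.log(x1 * x2 * x3)
        eta = -theta * math.log(x1 * x2 * x3 * x4)
        if eta * eta - u * u > 1.0:    # acceptance condition
            return u

def isotropic_momentum(u, rng=random.random):
    """Scatter an accepted magnitude isotropically over the unit sphere."""
    cos_t = 2.0 * rng() - 1.0
    phi = 2.0 * math.pi * rng()
    sin_t = math.sqrt(1.0 - cos_t * cos_t)
    return (u * sin_t * math.cos(phi), u * sin_t * math.sin(phi), u * cos_t)

random.seed(1)
sample = [sobol_maxwell_juttner(theta=1.0) for _ in range(2000)]
```

The boosting step for a shifted Maxwellian (the paper's rejection/flipping methods) is omitted; this base sampler is what those methods would be combined with.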
Study and Analyses on the Structural Performance of a Balance
NASA Technical Reports Server (NTRS)
Karkehabadi, R.; Rhew, R. D.; Hope, D. J.
2004-01-01
Strain-gauge balances for use in wind tunnels have been designed at Langley Research Center (LaRC) since its inception. Currently Langley has more than 300 balances available for its researchers. A force balance is inherently a critically stressed component due to the requirements of measurement sensitivity. The strain-gauge balances have been used in Langley's wind tunnels for a wide variety of aerodynamic tests, and the designs encompass a large array of sizes, loads, and environmental effects. There are six degrees of freedom that a balance has to measure. The balance's task of measuring these six degrees of freedom has introduced challenging work in transducer development technology areas. As the emphasis increases on improving aerodynamic performance of all types of aircraft and spacecraft, the demand for improved balances is at the forefront. Force balance stress analysis and acceptance criteria are under review due to LaRC wind tunnel operational safety requirements. This paper presents some of the analyses and research done at LaRC that influence structural integrity of the balances. The analyses are helpful in understanding the overall behavior of existing balances and can be used in the design of new balances to enhance performance. Initially, a maximum load combination was used for a linear structural analysis. When nonlinear effects were encountered, the analysis was extended to include nonlinearities using MSC.Nastran. Because most of the balances are designed using Pro/Mechanica, it is desirable and efficient to use Pro/Mechanica for stress analysis. However, Pro/Mechanica is limited to linear analysis. Both Pro/Mechanica and MSC.Nastran are used for analyses in the present work. The structural integrity of balances and the possibility of modifying existing balances to enhance structural integrity are investigated.
An improved scheduling algorithm for 3D cluster rendering with platform LSF
NASA Astrophysics Data System (ADS)
Xu, Wenli; Zhu, Yi; Zhang, Liping
2013-10-01
High-quality photorealistic rendering of 3D models requires powerful computing systems, and on this demand highly efficient management of cluster resources has developed rapidly. This paper addresses how to improve the efficiency of 3D rendering tasks in a cluster. It focuses on a dynamic feedback load balance (DFLB) algorithm, the working principles of the load sharing facility (LSF) platform, and optimization of the external scheduler plug-in. The algorithm is applied in the match and allocation phases of a scheduling cycle. Candidate hosts are ranked in sequence in the match phase, and the scheduler makes allocation decisions for each job in the allocation phase. With the dynamic mechanism, a new weight is assigned to each candidate host for rearrangement, and the most suitable host is dispatched for rendering. A new plug-in module implementing this algorithm has been designed and integrated into the internal scheduler. Simulation experiments demonstrate that the improved plug-in module is superior to the default one for rendering tasks: it helps avoid load imbalance among servers, increases system throughput, and improves system utilization.
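The paper does not publish the DFLB weighting formula, so the sketch below only illustrates the general mechanism: each candidate host gets a dynamic weight from feedback metrics and the list is rearranged so the least-loaded host is dispatched first. The linear combination, its coefficients, and the host tuples are all hypothetical.

```python
def rank_hosts(hosts, w_cpu=0.7, w_mem=0.3):
    """Re-rank candidate hosts by a dynamic weight so the least-loaded
    host is dispatched first. The weighting is an illustrative
    assumption, not the paper's formula."""
    def weight(host):
        _, cpu, mem = host
        return w_cpu * cpu + w_mem * mem
    return sorted(hosts, key=weight)

# (name, CPU utilization, memory utilization) samples from a feedback probe.
hosts = [("render01", 0.90, 0.40),
         ("render02", 0.20, 0.30),
         ("render03", 0.55, 0.80)]
ordered = rank_hosts(hosts)   # ordered[0] is the host to dispatch
</imports>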
Algorithms for parallel flow solvers on message passing architectures
NASA Technical Reports Server (NTRS)
Vanderwijngaart, Rob F.
1995-01-01
The purpose of this project has been to identify and test suitable technologies for implementation of fluid flow solvers -- possibly coupled with structures and heat equation solvers -- on MIMD parallel computers. In the course of this investigation much attention has been paid to efficient domain decomposition strategies for ADI-type algorithms. Multi-partitioning derives its efficiency from the assignment of several blocks of grid points to each processor in the parallel computer. A coarse-grain parallelism is obtained, and a near-perfect load balance results. In uni-partitioning every processor receives responsibility for exactly one block of grid points instead of several. This necessitates fine-grain pipelined program execution in order to obtain a reasonable load balance. Although fine-grain parallelism is less desirable on many systems, especially high-latency networks of workstations, uni-partition methods are still in wide use in production codes for flow problems. Consequently, it remains important to achieve good efficiency with this technique even though it has essentially been superseded by multi-partitioning for parallel ADI-type algorithms. Another reason for the concentration on improving the performance of pipeline methods is their applicability in other types of flow solver kernels with stronger implied data dependence. Analytical expressions can be derived for the size of the dynamic load imbalance incurred in traditional pipelines. From these, the optimal first-processor retardation, the value that leads to the shortest total completion time for the pipeline process, can be determined. Theoretical predictions of pipeline performance with and without optimization match experimental observations on the iPSC/860 very well. Analysis of pipeline performance also highlights the effect of careless grid partitioning in flow solvers that employ pipeline algorithms. If grid blocks at boundaries are not at least as large in the wall-normal direction as those
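The fill-and-drain baseline that the retardation analysis refines can be written as a one-line model, assuming a uniform per-stage time; the paper's dynamic-imbalance and retardation terms are not reproduced here.

```python
def pipeline_completion(n_items, n_stages, stage_time):
    """Classic fill-and-drain model of a software pipeline: the first item
    emerges after n_stages steps, then one item completes per step."""
    return (n_stages + n_items - 1) * stage_time
```

For example, 10 items through a 4-stage pipeline with a 2-unit stage time complete in (4 + 10 - 1) * 2 = 26 units; the optimal first-processor retardation shifts when each stage starts without changing this ideal bound.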
Mullett, L.B.; Loach, B.G.; Adams, G.L.
1958-06-24
Loaded waveguides are described for the propagation of electromagnetic waves with reduced phase velocities. A rectangular waveguide is dimensioned so as to cut off the simple H/sub 01/ mode at the operating frequency. The waveguide is capacitance-loaded, so as to reduce the phase velocity of the transmitted wave, by connecting an electrical conductor between directly opposite points in the major median plane on the narrower pair of waveguide walls. This conductor may take a corrugated shape or be an apertured member, the important factor being that the electrical length of the conductor is greater than one-half wavelength at the operating frequency. Prepared for the Second U.N. International Conference.
Efficient bulk-loading of gridfiles
NASA Technical Reports Server (NTRS)
Leutenegger, Scott T.; Nicol, David M.
1994-01-01
This paper considers the problem of bulk-loading large data sets for the gridfile multiattribute indexing technique. We propose a rectilinear partitioning algorithm that heuristically seeks to minimize the size of the gridfile needed to ensure no bucket overflows. Empirical studies on both synthetic data sets and on data sets drawn from computational fluid dynamics applications demonstrate that our algorithm is very efficient and able to handle large data sets. In addition, we present an algorithm for bulk-loading data sets too large to fit in main memory. Utilizing a sort of the entire data set, it creates a gridfile without incurring any overflows.
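A minimal sketch in the spirit of rectilinear partitioning: place the grid lines of each attribute at equal-frequency quantiles so records spread evenly over buckets. The paper's actual heuristic, which minimizes gridfile size subject to no bucket overflowing, is more elaborate; function names and the 2-attribute restriction are simplifications.

```python
import bisect

def rectilinear_partition(points, kx, ky):
    """Place grid lines at equal-frequency quantiles of each attribute so
    roughly the same number of records falls in each slice."""
    def cuts(vals, k):
        vals = sorted(vals)
        return [vals[(i * len(vals)) // k] for i in range(1, k)]
    return cuts([p[0] for p in points], kx), cuts([p[1] for p in points], ky)

def bucket_of(p, cx, cy):
    """Map a record to its (i, j) grid bucket via binary search on the cuts."""
    return (bisect.bisect_right(cx, p[0]), bisect.bisect_right(cy, p[1]))

# 100 records on a uniform 10x10 grid, split into a 2x2 gridfile directory.
points = [(x, y) for x in range(10) for y in range(10)]
cx, cy = rectilinear_partition(points, 2, 2)
counts = {}
for p in points:
    b = bucket_of(p, cx, cy)
    counts[b] = counts.get(b, 0) + 1
```

On uniform data the equal-frequency cuts give perfectly even buckets; skewed data is where the overflow-avoidance heuristic earns its keep.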
Maximizing TDRS Command Load Lifetime
NASA Technical Reports Server (NTRS)
Brown, Aaron J.
2002-01-01
The GNC software onboard ISS utilizes TDRS command loads and a simplistic model of TDRS orbital motion to generate onboard TDRS state vectors. Each TDRS command load contains five "invariant" orbital elements which serve as inputs to the onboard propagation algorithm. These elements include semi-major axis, inclination, time of last ascending node crossing, right ascension of ascending node, and mean motion. Running parallel to the onboard software is the TDRS Command Builder Tool application, located in the JSC Mission Control Center. The TDRS Command Builder Tool is responsible for building the TDRS command loads using a ground TDRS state vector, mirroring the onboard propagation algorithm, and assessing the fidelity of current TDRS command loads onboard ISS. The tool works by extracting a ground state vector at a given time from a current TDRS ephemeris, and then calculating the corresponding "onboard" TDRS state vector at the same time using the current onboard TDRS command load. The tool then performs a comparison between these two vectors and displays the relative differences in the command builder tool GUI. If the RSS position difference between these two vectors exceeds the tolerable limits, a new command load is built using the ground state vector and uplinked to ISS. A command load's lifetime is therefore defined as the time from when a command load is built to the time the RSS position difference exceeds the tolerable limit. From the outset of TDRS command load operations (STS-98), command load lifetime was limited to approximately one week due to the simplicity of both the onboard propagation algorithm and the algorithm used by the command builder tool to generate the invariant orbital elements. It was soon desired to extend command load lifetime in order to minimize potential risk due to frequent ISS commanding. Initial studies indicated that command load lifetime was most sensitive to changes in mean motion. Finding a suitable value for mean motion
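The rebuild criterion described, comparing the ground-propagated and onboard-propagated vectors by RSS position difference against a tolerable limit, can be sketched directly; the vector layout and tolerance value are assumptions for illustration.

```python
import math

def rss_position_difference(ground_pos, onboard_pos):
    """Root-sum-square of the component-wise difference between the
    ground-propagated and onboard-propagated position vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(ground_pos, onboard_pos)))

def needs_new_command_load(ground_pos, onboard_pos, tolerance):
    """Rebuild-and-uplink criterion: the command load's lifetime ends
    when the RSS difference exceeds the tolerable limit."""
    return rss_position_difference(ground_pos, onboard_pos) > tolerance
```

Command load lifetime is then simply the elapsed time until this predicate first becomes true.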
An Efficient Distributed Algorithm for Constructing Spanning Trees in Wireless Sensor Networks
Lachowski, Rosana; Pellenz, Marcelo E.; Penna, Manoel C.; Jamhour, Edgard; Souza, Richard D.
2015-01-01
Monitoring and data collection are the two main functions in wireless sensor networks (WSNs). Collected data are generally transmitted via multihop communication to a special node, called the sink. While in a typical WSN, nodes have a sink node as the final destination for the data traffic, in an ad hoc network, nodes need to communicate with each other. For this reason, routing protocols for ad hoc networks are inefficient for WSNs. Trees, on the other hand, are classic routing structures explicitly or implicitly used in WSNs. In this work, we implement and evaluate distributed algorithms for constructing routing trees in WSNs described in the literature. After identifying the drawbacks and advantages of these algorithms, we propose a new algorithm for constructing spanning trees in WSNs. The performance of the proposed algorithm and the quality of the constructed tree were evaluated in different network scenarios. The results showed that the proposed algorithm is a more efficient solution. Furthermore, the algorithm provides multiple routes to the sensor nodes to be used as mechanisms for fault tolerance and load balancing. PMID:25594593
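A sink-rooted routing tree of the kind the paper constructs can be illustrated with a centralized BFS sketch; the actual algorithm builds the same structure through distributed local message exchanges, which are not modeled here, and the example topology is hypothetical.

```python
from collections import deque

def bfs_spanning_tree(adj, sink):
    """Each node records the parent it forwards sensed data to; following
    parent pointers from any node reaches the sink over multiple hops."""
    parent = {sink: None}
    queue = deque([sink])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return parent

# Four sensor nodes in a square; node 0 is the sink.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
tree = bfs_spanning_tree(adj, sink=0)
```

Note that node 3 has two neighbors (1 and 2) at equal depth; the unused neighbor is exactly the kind of alternative route the paper exploits for fault tolerance and load balancing.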
Linear and Nonlinear Analyses of a Wind-Tunnel Balance
NASA Technical Reports Server (NTRS)
Karkehabadi, R.; Rhew, R. D.
2004-01-01
The NASA Langley Research Center (LaRC) has been designing strain-gauge balances for utilization in wind tunnels since its inception. The utilization of balances spans a wide variety of aerodynamic tests. A force balance is an inherently critically stressed component due to the requirements of measurement sensitivity. Force balance stress analysis and acceptance criteria are under review due to LaRC wind tunnel operational safety requirements. This paper presents some of the analyses done at NASA LaRC. Research and analyses were performed in order to investigate the structural integrity of the balances and better understand their performance. The analyses presented in this paper are helpful in understanding the overall behavior of an existing balance and can also be used in design of new balances to enhance their performance. As a first step, a maximum load combination is used for linear structural analysis. When nonlinear effects are encountered, the analysis is extended to include the nonlinearities. Balance 1621 is typical of LaRC-designed balances and was chosen for this study due to its traditionally high load capacity (Figure 1). Maximum loading occurs when all 6 components are applied simultaneously at their maximum allowed value (limit load). This circumstance normally will not occur in the wind tunnel. However, if it occurs, is the balance capable of handling the loads with an acceptable factor of safety? Preliminary analysis using Pro/Mechanica indicated that this balance might experience nonlinearity. It was decided to analyze this balance by using NASTRAN so that a nonlinear analysis could be conducted. Balance 1621 was modeled and meshed in PATRAN for analysis in NASTRAN. The model from PATRAN/NASTRAN is compared to the one from Pro/Mechanica. For a complete analysis, it is necessary to consider all the load cases as well as use a dense mesh near all the edges. Because of computer limitations, it is not feasible to analyze the model with the dense mesh near
Identifying Balance in a Balanced Scorecard System
ERIC Educational Resources Information Center
Aravamudhan, Suhanya; Kamalanabhan, T. J.
2007-01-01
In recent years, strategic management concepts seem to be gaining greater attention from academics and practitioners alike. The Balanced Scorecard (BSC) is one such management concept that has spread through worldwide business and consulting communities. The BSC translates mission and vision statements into a comprehensive set of…
Brusco, Michael; Steinley, Douglas
2010-06-01
Structural balance theory (SBT) has maintained a venerable status in the psychological literature for more than 5 decades. One important problem pertaining to SBT is the approximation of structural or generalized balance via the partitioning of the vertices of a signed graph into K clusters. This K-balance partitioning problem also has more general psychological applications associated with the analysis of similarity/dissimilarity relationships among stimuli. Accordingly, K-balance partitioning can be gainfully used in a wide variety of SBT applications, such as attraction and child development, evaluation of group membership, marketing and consumer issues, and other psychological contexts not necessarily related to SBT. We present a branch-and-bound algorithm for the K-balance partitioning problem. This new algorithm is applied to 2 synthetic numerical examples as well as to several real-world data sets from the behavioral sciences literature.
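The objective that a branch-and-bound search over K-partitions minimizes, the count of balance violations in a signed graph, can be computed directly; the toy network below is illustrative.

```python
def partition_imbalance(pos_edges, neg_edges, cluster):
    """Violations of generalized balance for one K-partition: a positive
    tie across clusters, or a negative tie inside a cluster, each count
    as one violation."""
    bad = sum(1 for u, v in pos_edges if cluster[u] != cluster[v])
    bad += sum(1 for u, v in neg_edges if cluster[u] == cluster[v])
    return bad

# Two internally friendly, mutually hostile factions {0,1} and {2,3}.
pos = [(0, 1), (2, 3)]
neg = [(0, 2), (1, 3)]
perfect = {0: "A", 1: "A", 2: "B", 3: "B"}   # zero violations
mixed = {0: "A", 1: "B", 2: "A", 3: "B"}     # every tie violated
```

A branch-and-bound algorithm explores the space of cluster assignments, pruning any partial assignment whose violation count already exceeds the best complete partition found so far.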
ERIC Educational Resources Information Center
La Porta, Rafael; Lopez-de-Silanes, Florencio; Pop-Eleches, Cristian; Shleifer, Andrei
2004-01-01
In the Anglo-American constitutional tradition, judicial checks and balances are often seen as crucial guarantees of freedom. Hayek distinguishes two ways in which the judiciary provides such checks and balances: judicial independence and constitutional review. We create a new database of constitutional rules in 71 countries that reflect these…
ERIC Educational Resources Information Center
Hines, Thomas E.
2011-01-01
Maintaining balance in leadership can be difficult because balance is affected by the personality, strengths, and attitudes of the leader as well as the complicated environment within and outside the community college itself. This article explores what being a leader at the community college means, what the threats are to effective leadership, and…
ERIC Educational Resources Information Center
Mosey, Edward
1991-01-01
The booming economy of the Pacific Northwest region promotes the dilemma of balancing the need for increased electrical power with the desire to maintain that region's unspoiled natural environment. Pertinent factors discussed within the balance equation are population trends, economic considerations, industrial power requirements, and…
ERIC Educational Resources Information Center
Blakley, G. R.
1982-01-01
Reviews mathematical techniques for solving systems of homogeneous linear equations and demonstrates that the algebraic method of balancing chemical equations is a matter of solving a system of homogeneous linear equations. FORTRAN programs applying this matrix method to chemical equation balancing are available from the author. (JN)
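The matrix method can be sketched in Python (the article's programs were in FORTRAN): write the composition matrix with one row per element and one column per species, negate the product columns, and take an integer vector from the nullspace. The Gauss-Jordan helper below assumes the nullspace is one-dimensional, i.e. the equation balances uniquely up to a common factor.

```python
import math
from fractions import Fraction

def balancing_coefficients(composition):
    """Solve A x = 0 over the rationals (Gauss-Jordan elimination) and
    rescale the solution to the smallest positive integers."""
    m = [[Fraction(v) for v in row] for row in composition]
    n_rows, n_cols = len(m), len(m[0])
    pivots, r = [], 0
    for c in range(n_cols):
        pr = next((i for i in range(r, n_rows) if m[i][c] != 0), None)
        if pr is None:
            continue
        m[r], m[pr] = m[pr], m[r]
        m[r] = [v / m[r][c] for v in m[r]]          # normalize the pivot row
        for i in range(n_rows):
            if i != r and m[i][c] != 0:             # clear the pivot column
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    free = next(c for c in range(n_cols) if c not in pivots)
    x = [Fraction(0)] * n_cols
    x[free] = Fraction(1)                            # set the free variable
    for row, c in zip(m, pivots):
        x[c] = -row[free]
    scale = 1                                        # clear denominators
    for v in x:
        scale = scale * v.denominator // math.gcd(scale, v.denominator)
    ints = [int(v * scale) for v in x]
    g = 0
    for v in ints:
        g = math.gcd(g, abs(v))
    return [v // g for v in ints]

# H2 + O2 -> H2O: rows H and O; the product column is negated.
coeffs = balancing_coefficients([[2, 0, -2],
                                 [0, 2, -1]])   # balanced: 2 H2 + O2 -> 2 H2O
```

The same call balances any single-solution equation once its composition matrix is written down.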
NASA Astrophysics Data System (ADS)
Barrera-Garrido, Azael
2017-04-01
In order to measure the mass of an object in the absence of gravity, one useful tool for many decades has been the inertial balance. One of the simplest forms of inertial balance is made by two mass holders or pans joined together with two stiff metal plates, which act as springs.
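The working principle reduces to two lines: calibrate the effective stiffness of the plates with a known mass, then invert T = 2π√(m/k) for the unknown. The numbers below are illustrative, not measured values.

```python
import math

def spring_constant(known_mass, period):
    """Calibrate the plates' effective stiffness from one known mass:
    T = 2*pi*sqrt(m/k)  =>  k = 4*pi^2*m / T^2."""
    return 4.0 * math.pi ** 2 * known_mass / period ** 2

def mass_from_period(k, period):
    """Invert the same relation to weigh an object without gravity."""
    return k * period ** 2 / (4.0 * math.pi ** 2)

k = spring_constant(known_mass=0.100, period=0.50)  # 100 g calibration mass
unknown = mass_from_period(k, period=0.71)          # unknown object's period
```

Because only the ratio of squared periods matters, the calibration constant cancels: m = m_known * (T/T_known)².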
Applications of concurrent neuromorphic algorithms for autonomous robots
NASA Technical Reports Server (NTRS)
Barhen, J.; Dress, W. B.; Jorgensen, C. C.
1988-01-01
This article provides an overview of studies at the Oak Ridge National Laboratory (ORNL) of neural networks running on parallel machines applied to the problems of autonomous robotics. The first section provides the motivation for our work in autonomous robotics and introduces the computational hardware in use. Section 2 presents two theorems concerning the storage capacity and stability of neural networks. Section 3 presents a novel load-balancing algorithm implemented with a neural network. Section 4 introduces the robotics test bed now in place. Section 5 concerns navigation issues in the test-bed system. Finally, Section 6 presents a frequency-coded network model and shows how Darwinian techniques are applied to issues of parameter optimization and on-line design.
An improved spectral graph partitioning algorithm for mapping parallel computations
Hendrickson, B.; Leland, R.
1992-09-01
Efficient use of a distributed memory parallel computer requires that the computational load be balanced across processors in a way that minimizes interprocessor communication. We present a new domain mapping algorithm that extends recent work in which ideas from spectral graph theory have been applied to this problem. Our generalization of spectral graph bisection involves a novel use of multiple eigenvectors to allow for division of a computation into four or eight parts at each stage of a recursive decomposition. The resulting method is suitable for scientific computations like irregular finite elements or differences performed on hypercube or mesh architecture machines. Experimental results confirm that the new method provides better decompositions arrived at more economically and robustly than with previous spectral methods. We have also improved upon the known spectral lower bound for graph bisection.
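Plain spectral bisection, the method the paper generalizes, splits vertices by the sign of the Fiedler vector (the Laplacian eigenvector for the second-smallest eigenvalue). The multiple-eigenvector quadrisection/octasection contribution is not reproduced here, and a projected power iteration stands in for a real sparse eigensolver.

```python
import math
import random

def fiedler_bisect(n, edges, iters=500):
    """Bisect a graph by the sign pattern of the Fiedler vector, computed
    with power iteration on (c*I - L) after projecting out the trivial
    all-ones eigenvector."""
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    c = 2 * max(deg) + 1           # shift: eigenvalues of L lie in [0, 2*max_deg]

    def mul(x):                    # y = (c*I - L) @ x without forming L
        y = [(c - deg[i]) * x[i] for i in range(n)]
        for u, v in edges:
            y[u] += x[v]
            y[v] += x[u]
        return y

    random.seed(0)
    x = [random.random() for _ in range(n)]
    for _ in range(iters):
        x = mul(x)
        mean = sum(x) / n          # stay orthogonal to the all-ones vector
        x = [v - mean for v in x]
        norm = math.sqrt(sum(v * v for v in x))
        x = [v / norm for v in x]
    return ([i for i in range(n) if x[i] < 0],
            [i for i in range(n) if x[i] >= 0])

# Two triangles joined by one edge: the natural bisection cuts that edge.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
part_a, part_b = fiedler_bisect(6, edges)
```

Recursive application of this bisection yields the 2^k-way decompositions used to map computations onto hypercube and mesh machines.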
Wallace, B.
1991-01-01
This book discusses radiation effects on Drosophila. It was originally thought that irradiating Drosophila would decrease the average fitness of the population, thereby yielding information about the detrimental effects of mutations. Surprisingly, the fitness of the irradiated population turned out to be higher than that of the control population. The original motivation for the experiment was as a test of genetic load theory. The average fitness of a population is depressed by deleterious alleles held in the population by the balance between mutation and natural selection; this depression is called the genetic load of the population. The load does not depend on the magnitude of the deleterious effect of the alleles, but only on the mutation rate.
Comparison of Building Energy Modeling Programs: Building Loads
Zhu, Dandan; Hong, Tianzhen; Yan, Da; Wang, Chuang
2012-06-01
identify the differences in solution algorithms, modeling assumptions and simplifications. Identifying inputs of each program and their default values or algorithms for load simulation was a critical step. These tend to be overlooked by users, but can lead to large discrepancies in simulation results. As weather data was an important input, weather file formats and weather variables used by each program were summarized. Some common mistakes in the weather data conversion process were discussed. ASHRAE Standard 140-2007 tests were carried out to test the fundamental modeling capabilities of the load calculations of the three BEMPs, where inputs for each test case were strictly defined and specified. The tests indicated that the cooling and heating load results of the three BEMPs fell mostly within the range of spread of results from other programs. Based on ASHRAE 140-2007 test results, the finer differences between DeST and EnergyPlus were further analyzed by designing and conducting additional tests. Potential key influencing factors (such as internal gains, air infiltration, convection coefficients of windows and opaque surfaces) were added one at a time to a simple base case with an analytical solution, to compare their relative impacts on load calculation results. Finally, special tests were designed and conducted aiming to ascertain the potential limitations of each program to perform accurate load calculations. The heat balance module was tested for both single and double zone cases. Furthermore, cooling and heating load calculations were compared between the three programs by varying the heat transfer between adjacent zones, the occupancy of the building, and the air-conditioning schedule.
NASA Technical Reports Server (NTRS)
Dongarra, Jack
1998-01-01
This exploratory study initiated our inquiry into algorithms and applications that would benefit from a latency-tolerant approach to algorithm building, including the construction of new algorithms where appropriate. In a multithreaded execution, when a processor reaches a point where remote memory access is necessary, the request is sent out on the network and a context switch occurs to a new thread of computation. This effectively masks a long and unpredictable latency due to remote loads, thereby providing tolerance to remote access latency. We began to develop standards to profile various algorithm and application parameters, such as the degree of parallelism, granularity, precision, instruction set mix, interprocessor communication, latency, etc. These tools will continue to develop and evolve as the Information Power Grid environment matures. To provide a richer context for this research, the project also focused on issues of fault tolerance and computation migration of numerical algorithms and software. During the initial phase we tried to increase our understanding of the bottlenecks in single-processor performance. Our work began by developing an approach for the automatic generation and optimization of numerical software for processors with deep memory hierarchies and pipelined functional units. Based on the results we achieved in this study we are planning to study other architectures of interest, including development of cost models and code generators appropriate to these architectures.
Combinatorial optimization methods for disassembly line balancing
NASA Astrophysics Data System (ADS)
McGovern, Seamus M.; Gupta, Surendra M.
2004-12-01
Disassembly takes place in remanufacturing, recycling, and disposal with a line being the best choice for automation. The disassembly line balancing problem seeks a sequence which: minimizes workstations, ensures similar idle times, and is feasible. Finding the optimal balance is computationally intensive due to factorial growth. Combinatorial optimization methods hold promise for providing solutions to the disassembly line balancing problem, which is proven to belong to the class of NP-complete problems. Ant colony optimization, genetic algorithm, and H-K metaheuristics are presented and compared along with a greedy/hill-climbing heuristic hybrid. A numerical study is performed to illustrate the implementation and compare performance. Conclusions drawn include the consistent generation of optimal or near-optimal solutions, the ability to preserve precedence, the speed of the techniques, and their practicality due to ease of implementation.
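The abstract does not specify the greedy/hill-climbing hybrid, so the sketch below shows only a generic largest-fit greedy assignment that respects precedence within a fixed cycle time; the task data and the tie-breaking rule are hypothetical.

```python
def greedy_line_balance(durations, precedence, cycle_time):
    """Open workstations one at a time and pack each with the largest
    ready task (all predecessors done, fits the remaining cycle time).
    Assumes every task duration is at most cycle_time."""
    done, stations = set(), []
    remaining = dict(durations)
    while remaining:
        slack, station = cycle_time, []
        while True:
            ready = [t for t in remaining
                     if precedence.get(t, set()) <= done and remaining[t] <= slack]
            if not ready:
                break
            task = max(ready, key=lambda t: remaining[t])   # largest-fit rule
            station.append(task)
            slack -= remaining[task]
            done.add(task)
            del remaining[task]
        stations.append(station)
    return stations

durations = {"A": 4, "B": 3, "C": 2, "D": 5}
precedence = {"B": {"A"}, "C": {"A"}, "D": {"B", "C"}}
stations = greedy_line_balance(durations, precedence, cycle_time=7)
```

A hill-climbing phase, or the metaheuristics compared in the paper, would then perturb this initial assignment to even out the idle times across workstations.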
Balanced Multiwavelets Based Digital Image Watermarking
NASA Astrophysics Data System (ADS)
Zhang, Na; Huang, Hua; Zhou, Quan; Qi, Chun
In this paper, an adaptive blind watermarking algorithm based on the balanced multiwavelets transform is proposed. According to the properties of balanced multiwavelets and the human visual system, a modified version of the well-established Lewis perceptual model is given, so that the strength of the embedded watermark is controlled by the local properties of the host image. The subbands of the balanced multiwavelets transform are similar to one another at the same scale, so the two most similar subbands are chosen to embed the watermark by adaptively modifying the relation between them under the model; watermark extraction can then be performed without the original image. Experimental results show that the watermarked images look visually identical to the original ones, and the watermark successfully survives image processing operations such as cropping, scaling, filtering and JPEG compression.
Makarov, Yuri V.; Huang, Zhenyu; Etingov, Pavel V.; Ma, Jian; Guttromson, Ross T.; Subbarao, Krishnappa; Chakrabarti, Bhujanga B.
2010-09-01
features make this work a significant step forward toward the objective of incorporating wind, solar, load, and other uncertainties into power system operations. In this report, a new methodology to predict the uncertainty ranges for the required balancing capacity, ramping capability, and ramp duration is presented. Uncertainties created by system load forecast errors, wind and solar forecast errors, and generation forced outages are taken into account. The uncertainty ranges are evaluated for different confidence levels of having the actual generation requirements within the corresponding limits. The methodology helps to identify system balancing reserve requirements based on desired system performance levels, identify system “breaking points” where the generation system becomes unable to follow the generation requirement curve with the user-specified probability level, and determine the time remaining before these potential events. The approach includes three stages: statistical and actual data acquisition, statistical analysis of retrospective information, and prediction of future grid balancing requirements for specified time horizons and confidence intervals. Assessment of the capacity and ramping requirements is performed using a specially developed probabilistic algorithm based on a histogram analysis incorporating all sources of uncertainty, both of a continuous (wind forecast and load forecast errors) and a discrete (forced generator outages and failures to start up) nature. Preliminary simulations using California Independent System Operator (California ISO) real-life data have shown the effectiveness of the proposed approach. A tool developed based on the new methodology described in this report will be integrated with the California ISO systems. Contractual work is currently in place to integrate the tool with the AREVA EMS system.
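The report's probabilistic algorithm is histogram-based; as a simplified sketch of the underlying idea (not the California ISO tool itself), the balancing-capacity range for a given confidence level can be read off the empirical distribution of net forecast errors. The function name and the sign convention for errors are illustrative assumptions:

```python
import numpy as np

def balancing_range(load_err, wind_err, solar_err, confidence=0.95):
    """Estimate the balancing-capacity range that covers the net
    forecast error with the given confidence level.

    Each *_err argument is an array of historical forecast-error
    samples (actual minus forecast, in MW).  The net imbalance the
    balancing reserve must cover is the load error minus the
    renewable-generation errors.
    """
    net = np.asarray(load_err) - np.asarray(wind_err) - np.asarray(solar_err)
    lo = np.percentile(net, 100 * (1 - confidence) / 2)
    hi = np.percentile(net, 100 * (1 + confidence) / 2)
    return lo, hi   # required down- and up-balancing capacity
```

Ramping requirements would be estimated the same way from the distribution of error differences over the ramp horizon.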
Multimodal Estimation of Distribution Algorithms.
Yang, Qiang; Chen, Wei-Neng; Li, Yun; Chen, C L Philip; Xu, Xiang-Min; Zhang, Jun
2016-02-15
Taking advantage of the ability of estimation of distribution algorithms (EDAs) to preserve high diversity, this paper proposes a multimodal EDA. Integrated with clustering strategies for crowding and speciation, two versions of this algorithm are developed, which operate at the niche level. These two algorithms are then equipped with three distinctive techniques: 1) a dynamic cluster sizing strategy; 2) an alternating use of Gaussian and Cauchy distributions to generate offspring; and 3) an adaptive local search. The dynamic cluster sizing affords a potential balance between exploration and exploitation and reduces the sensitivity to the cluster size in the niching methods. Taking advantage of the Gaussian and Cauchy distributions, we generate the offspring at the niche level by alternately using these two distributions; this, too, can potentially offer a balance between exploration and exploitation. Further, solution accuracy is enhanced through a new local search scheme conducted probabilistically around the seeds of niches, with probabilities determined self-adaptively according to the fitness values of these seeds. Extensive experiments conducted on 20 benchmark multimodal problems confirm that both algorithms achieve competitive performance compared with several state-of-the-art multimodal algorithms, which is supported by nonparametric tests. In particular, the proposed algorithms are very promising for complex problems with many local optima.
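The alternating Gaussian/Cauchy offspring generation can be sketched as follows; the niche representation (one solution per row) and the use of the niche mean and per-dimension standard deviation are illustrative assumptions, not the paper's exact operator:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_offspring(niche, use_cauchy):
    """Generate one offspring from a niche (array of solutions, one per
    row) by sampling around the niche mean, alternating between Gaussian
    and heavy-tailed Cauchy noise scaled by the niche spread."""
    mean = niche.mean(axis=0)
    std = niche.std(axis=0) + 1e-12       # avoid a zero scale
    if use_cauchy:
        # heavy-tailed Cauchy steps favour exploration (long jumps)
        step = rng.standard_cauchy(mean.shape) * std
    else:
        # Gaussian steps favour exploitation around the niche centre
        step = rng.normal(0.0, std)
    return mean + step
```

An EDA loop would flip `use_cauchy` between generations (or per offspring) to trade off the two behaviours.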
Active balance system and vibration balanced machine
NASA Technical Reports Server (NTRS)
Qiu, Songgang (Inventor); Augenblick, John E. (Inventor); Peterson, Allen A. (Inventor); White, Maurice A. (Inventor)
2005-01-01
An active balance system is provided for counterbalancing vibrations of an axially reciprocating machine. The balance system includes a support member, a flexure assembly, a counterbalance mass, and a linear motor or an actuator. The support member is configured for attachment to the machine. The flexure assembly includes at least one flat spring having connections along a central portion and an outer peripheral portion. One of the central portion and the outer peripheral portion is fixedly mounted to the support member. The counterbalance mass is fixedly carried by the flexure assembly along another of the central portion and the outer peripheral portion. The linear motor has one of a stator and a mover fixedly mounted to the support member and another of the stator and the mover fixedly mounted to the counterbalance mass. The linear motor is operative to axially reciprocate the counterbalance mass.
Parallelized FVM algorithm for three-dimensional viscoelastic flows
NASA Astrophysics Data System (ADS)
Dou, H.-S.; Phan-Thien, N.
A parallel implementation of the finite volume method (FVM) for three-dimensional (3D) viscoelastic flows is developed in a distributed computing environment through the Parallel Virtual Machine (PVM). The numerical procedure is based on the SIMPLEST algorithm using a staggered FVM discretization in Cartesian coordinates. The final discretized algebraic equations are solved with the TDMA method. The parallelization of the program is implemented by a domain decomposition strategy, with a master/slave programming paradigm and message passing through PVM. A load balancing strategy is proposed to reduce communication between processors. The three-dimensional viscoelastic flow in a rectangular duct is computed with this program. The modified Phan-Thien-Tanner (MPTT) constitutive model is employed to close the equation system. Computed results are validated on the secondary flow problem arising from the non-zero second normal stress difference N2. Three sets of meshes are used, and the effect of domain decomposition strategies on performance is discussed. Parallel efficiency is found to depend strongly on the grid size and the number of processors for a given block number, and to increase with problem size; the convergence rate and the total efficiency of the domain decomposition depend on the flow problem and the boundary conditions. Compared with two-dimensional flow problems, the parallelized 3D algorithm has a lower efficiency owing to larger overlapping block interfaces, but the parallel algorithm is indeed a powerful means for large-scale flow simulations.
Optimal Control Allocation with Load Sensor Feedback for Active Load Suppression
NASA Technical Reports Server (NTRS)
Miller, Christopher
2017-01-01
These slide sets describe the OCLA formulation and associated algorithms as a set of new technologies in the first practical application of load limiting flight control utilizing load feedback as a primary control measurement. Slide set one describes Experiment Development and slide set two describes Flight-Test Performance.
Query Optimization in Distributed Databases through Load Balancing.
1986-08-06
Loading concepts for Hoover Powerplant to optimize plant operating efficiency
Stitt, S.C.
1983-08-01
Plant efficiency gains that could be realized at Hoover Powerplant by the use of an algorithm to optimize plant efficiency are given. Comparisons are shown between the present plant operating conditions modeled on a digital computer, and the plant with the proposed unified bus operating under control of a GELA (Generator Efficiency Loading Algorithm) system. The basic concepts of that algorithm are given.
Consideration of Dynamical Balances
NASA Technical Reports Server (NTRS)
Errico, Ronald M.
2015-01-01
The quasi-balance of extra-tropical tropospheric dynamics is a fundamental aspect of nature. If an atmospheric analysis does not reflect such balance sufficiently well, the subsequent forecast will exhibit unrealistic behavior associated with spurious fast-propagating gravity waves. Even if these eventually damp, they can create poor background fields for a subsequent analysis or interact with moist physics to create spurious precipitation. The nature of this problem will be described along with the reasons for atmospheric balance and techniques for mitigating imbalances. Attention will be focused on fundamental issues rather than on recipes for various techniques.
NASA Technical Reports Server (NTRS)
1996-01-01
NeuroCom's Balance Master is a system to assess and then retrain patients with balance and mobility problems and is used in several medical centers. NeuroCom received assistance in research and funding from NASA, and incorporated technology from testing mechanisms for astronauts after shuttle flights. The EquiTest and Balance Master Systems are computerized posturography machines that measure patient responses to movement of a platform on which the subject is standing or sitting, then provide assessments of the patient's postural alignment and stability.
Forbes, G.B.; Lantigua, R.; Amatruda, J.M.; Lockwood, D.H.
1981-01-01
Six overweight adult subjects given a low-calorie diet containing adequate amounts of nitrogen but subnormal amounts of potassium (K) were observed in the Clinical Research Center for periods of 29 to 40 days. Metabolic balance of potassium was measured together with frequent assays of total-body K by ⁴⁰K counting. Metabolic K balance underestimated body K losses by 11 to 87% (average 43%); the intersubject variability is such as to preclude the use of a single correction value for unmeasured losses in K balance studies.
Parallel-vector algorithms for particle simulations on shared-memory multiprocessors
Nishiura, Daisuke; Sakaguchi, Hide
2011-03-01
Over the last few decades, the computational demands of massive particle-based simulations for both scientific and industrial purposes have been continuously increasing. Hence, considerable efforts are being made to develop parallel computing techniques on various platforms. In such simulations, particles move freely within a given space, so on a distributed-memory system load balancing, i.e., assigning an equal number of particles to each processor, is not guaranteed. Shared-memory systems, by contrast, achieve better load balancing for particle models, but suffer from the intrinsic drawback of memory access competition, particularly during (1) pairing of contact candidates from among neighboring particles and (2) force summation for each particle. Here, novel algorithms are proposed to overcome these two problems. For the first problem, the key is a pre-conditioning process during which particle labels are sorted by the label of the cell in the domain to which the particles belong. A list of contact candidates is then constructed by pairing the sorted particle labels. For the latter problem, a table comprising the list indexes of the contact candidate pairs is created and used to sum the contact forces acting on each particle over all contacts according to Newton's third law. With these methods alone, memory access competition is avoided without additional redundant procedures. The parallel efficiency and compatibility of these two algorithms were evaluated in discrete element method (DEM) simulations on four types of shared-memory parallel computers: a multicore multiprocessor computer, a scalar supercomputer, a vector supercomputer, and a graphics processing unit. The computational efficiency of a DEM code was found to be drastically improved with our algorithms on all but the scalar supercomputer. Thus, the developed parallel algorithms are useful on shared-memory parallel computers with sufficient memory bandwidth.
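The pre-conditioning step, sorting particles into cells and pairing only within neighbouring cells, can be illustrated with a minimal serial 2D sketch (the actual algorithms are parallel and 3D; function and variable names here are hypothetical):

```python
from collections import defaultdict

def contact_candidates(positions, cell_size):
    """Build contact-candidate pairs for equal-size particles by binning
    particle indices into grid cells and pairing only within each cell
    and its neighbouring cells.

    positions: list of (x, y) tuples; cell_size: >= particle diameter,
    so contacts can only occur between adjacent cells.
    """
    cells = defaultdict(list)
    for i, (x, y) in enumerate(positions):
        cells[(int(x // cell_size), int(y // cell_size))].append(i)
    pairs = []
    for (cx, cy), members in cells.items():
        # pairs within the same cell
        for a in range(len(members)):
            for b in range(a + 1, len(members)):
                pairs.append((members[a], members[b]))
        # half of the neighbouring cells, so each pair is visited once
        for dx, dy in ((1, 0), (1, 1), (0, 1), (-1, 1)):
            for i in members:
                for j in cells.get((cx + dx, cy + dy), ()):
                    pairs.append((i, j))
    return pairs
```

Because each pair appears exactly once, the resulting list can later be traversed to accumulate equal-and-opposite contact forces without write conflicts.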
The Challenge is to develop ideas for how NASA can turn available entry, descent, and landing balance mass on a future Mars mission into a scientific or technological payload. Proposed concepts sho...
Strength and Balance Exercises
Updated: Sep 8, 2016. If you have medical ... if you have been inactive and want to exercise vigorously, check with your doctor before beginning a ...
ERIC Educational Resources Information Center
Lee, Chris
1991-01-01
Describes the responses of some companies to increasing demands for family-work balance in terms of flexibility in working hours and leave policies, child care, and fringe benefits. Identifies some of the effects on the "bottom line." (SK)
ERIC Educational Resources Information Center
Willows, Dale
2002-01-01
Describes professional development program in Ontario school district to improve student reading and writing skills. Program used food-pyramid concepts to help teachers learn to provide a balanced and flexible approach to literacy instruction based on student needs. (PKP)
NASA Technical Reports Server (NTRS)
1991-01-01
Researchers at the Balance Function Laboratory and Clinic at the Minneapolis (MN) Neuroscience Institute on the Abbot Northwestern Hospital Campus are using a rotational chair (technically a "sinusoidal harmonic acceleration system") originally developed by NASA to investigate vestibular (inner ear) function in weightlessness to diagnose and treat patients with balance function disorders. Manufactured by ICS Medical Corporation, Schaumberg, IL, the chair system turns a patient and monitors his or her responses to rotational stimulation.
NASA Astrophysics Data System (ADS)
Wolfe, William J.; Wood, David; Sorensen, Stephen E.
1996-12-01
This paper discusses automated scheduling as it applies to complex domains such as factories, transportation, and communications systems. The window-constrained-packing problem is introduced as an idealized model of the scheduling trade-offs. Specific algorithms are compared in terms of simplicity, speed, and accuracy. In particular, dispatch, look-ahead, and genetic algorithms are statistically compared on randomly generated job sets. The conclusion is that dispatch methods are fast and fairly accurate, while modern algorithms, such as genetic algorithms and simulated annealing, have excessive run times and are too complex to be practical.
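A dispatch method of the kind compared above can be sketched as a greedy earliest-deadline-first pass over the job windows; the single-resource setting and the tuple encoding are simplifying assumptions, not the paper's exact model:

```python
def dispatch_schedule(jobs):
    """Greedy dispatch: sort jobs by window end (earliest deadline first)
    and place each at the earliest feasible time on a single resource.

    jobs: list of (release, deadline, duration) tuples.
    Returns a list of (job, start) for the jobs that fit, in schedule order.
    """
    t = 0
    placed = []
    for job in sorted(jobs, key=lambda j: j[1]):
        release, deadline, duration = job
        start = max(t, release)          # cannot start before release or now
        if start + duration <= deadline:  # job fits inside its window
            placed.append((job, start))
            t = start + duration
    return placed
```

Look-ahead and genetic approaches differ in that they explore reorderings of this greedy sequence at much higher computational cost.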
Sobel, E.; Lange, K.; O`Connell, J.R.
1996-12-31
Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.
Hill, James O.; Wyatt, Holly R.; Peters, John C.
2012-01-01
This paper describes the interplay among energy intake, energy expenditure and body energy stores and illustrates how an understanding of energy balance can help develop strategies to reduce obesity. First, reducing obesity will require modifying both energy intake and energy expenditure and not simply focusing on either alone. Food restriction alone will not be effective in reducing obesity if human physiology is biased toward achieving energy balance at a high energy flux (i.e. at a high level of energy intake and expenditure). In previous environments a high energy flux was achieved with a high level of physical activity, but in today's sedentary environment it is increasingly achieved through weight gain. Matching energy intake to a high level of energy expenditure will likely be a more feasible strategy for most people to maintain a healthy weight than restricting food intake to meet a low level of energy expenditure. Second, from an energy balance point of view we are likely to be more successful in preventing excessive weight gain than in treating obesity. This is because the energy balance system shows much stronger opposition to weight loss than to weight gain. While large behavior changes are needed to produce and maintain reductions in body weight, small behavior changes may be sufficient to prevent excessive weight gain. In conclusion, the concept of energy balance, combined with an understanding of how the body achieves balance, may be a useful framework in helping develop strategies to reduce obesity rates. PMID:22753534
A multi-pattern hash-binary hybrid algorithm for URL matching in the HTTP protocol.
Zeng, Ping; Tan, Qingping; Meng, Xiankai; Shao, Zeming; Xie, Qinzheng; Yan, Ying; Cao, Wei; Xu, Jianjun
2017-01-01
In this paper, based on our previous multi-pattern uniform resource locator (URL) binary-matching algorithm called HEM, we propose an improved multi-pattern matching algorithm called MH that is based on hash tables and binary tables. The MH algorithm can be applied to the fields of network security, data analysis, load balancing, cloud robotic communications, and so on, all of which require string matching from a fixed starting position. Our approach effectively solves the performance problems of the classical multi-pattern matching algorithms. This paper explores ways to improve string matching performance under the HTTP protocol by using a hash method combined with a binary method that transforms the symbol-space matching problem into a digital-space numerical-comparison and hashing problem. The MH approach has a fast matching speed, requires little memory, performs better than both the classical algorithms and HEM for matching fields in an HTTP stream, and has great promise for use in real-world applications.
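The MH algorithm itself is not reproduced in the abstract; as a toy illustration of hash-based matching from a fixed starting position, patterns can be grouped by length so that each candidate URL prefix is checked with a single hash lookup (the names and structure here are illustrative, not the published algorithm):

```python
def build_index(patterns):
    """Index patterns by length so a URL prefix of each length
    can be checked with one hash (set) lookup."""
    index = {}
    for p in patterns:
        index.setdefault(len(p), set()).add(p)
    return index

def match_prefix(url, index):
    """Return the longest indexed pattern that the URL starts with,
    or None if no pattern matches from position zero."""
    for length in sorted(index, reverse=True):   # longest match first
        if len(url) >= length and url[:length] in index[length]:
            return url[:length]
    return None
```

The published MH scheme additionally uses binary tables and numeric comparison of fixed-width chunks to avoid character-by-character work; this sketch only shows the fixed-start, hash-per-length idea.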
Lober, R.R.; Tautges, T.J.; Vaughan, C.T.
1997-03-01
Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving`s meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm and demonstrate its capabilities on both two dimensional and three dimensional surface geometries and compare the resulting parallel produced meshes to conventionally paved meshes for mesh quality and algorithm performance. Sandia`s {open_quotes}tiling{close_quotes} dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.
NASA Astrophysics Data System (ADS)
Robinson, Ian A.
2014-04-01
The time is fast approaching when the SI unit of mass will cease to be based on a single material artefact and will instead be based upon the defined value of a fundamental constant, the Planck constant h. This change requires that techniques exist both to determine the appropriate value to be assigned to the constant and to measure mass in terms of the redefined unit. It is important to ensure that these techniques are accurate and reliable, to allow full advantage to be taken of the stability and universality provided by the new definition and to guarantee the continuity of the world's mass measurements, which can affect the measurement of many other quantities such as energy and force. Up to now, efforts to provide the basis for such a redefinition of the kilogram have mainly been concerned with resolving the discrepancies between individual implementations of the two principal techniques: the x-ray crystal density (XRCD) method [1] and the watt and joule balance methods, which are the subject of this special issue. The first three papers report results from the NRC and NIST watt balance groups and the NIM joule balance group. The result from the NRC (formerly the NPL Mk II) watt balance is the first to be reported with a relative standard uncertainty below 2 × 10⁻⁸, and the NIST result has a relative standard uncertainty below 5 × 10⁻⁸. Both results are shown in figure 1 along with some previous results; the result from the NIM group is not shown on the plot but has a relative uncertainty of 8.9 × 10⁻⁶ and is consistent with all the results shown. The Consultative Committee for Mass and Related Quantities (CCM), in its meeting in 2013, produced a resolution [2] which set out the requirements for the number, type, and quality of results intended to support the redefinition of the kilogram, and required that there should be agreement between them. These results from NRC, NIST, and the IAC may be considered to meet these requirements and are likely to be widely debated.
Hidden Connections between Regression Models of Strain-Gage Balance Calibration Data
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert
2013-01-01
Hidden connections between regression models of wind tunnel strain-gage balance calibration data are investigated. These connections become visible whenever balance calibration data is supplied in its design format and both the Iterative and Non-Iterative Method are used to process the data. First, it is shown how the regression coefficients of the fitted balance loads of a force balance can be approximated by using the corresponding regression coefficients of the fitted strain-gage outputs. Then, data from the manual calibration of the Ames MK40 six-component force balance is chosen to illustrate how estimates of the regression coefficients of the fitted balance loads can be obtained from the regression coefficients of the fitted strain-gage outputs. The study illustrates that load predictions obtained by applying the Iterative or the Non-Iterative Method originate from two related regression solutions of the balance calibration data as long as balance loads are given in the design format of the balance, gage outputs behave highly linear, strict statistical quality metrics are used to assess regression models of the data, and regression model term combinations of the fitted loads and gage outputs can be obtained by a simple variable exchange.
Load Control System Reliability
Trudnowski, Daniel
2015-04-03
This report summarizes the results of the Load Control System Reliability project (DOE Award DE-FC26-06NT42750). The original grant was awarded to Montana Tech in April 2006. Follow-on DOE awards and expansions to the project scope occurred in August 2007, January 2009, April 2011, and April 2013. In addition to the DOE monies, the project also included matching funds from the states of Montana and Wyoming. Project participants included Montana Tech, the University of Wyoming, Montana State University, NorthWestern Energy, Inc., and MSE. Research focused on two areas: real-time power-system load control methodologies, and power-system measurement-based stability-assessment operation and control tools. The majority of effort was focused on the second area. Results from the research include: development of fundamental power-system dynamic concepts, control schemes, and signal-processing algorithms; many papers (including two prize papers) in leading journals and conferences and leadership of IEEE activities; one patent; participation in major actual-system testing in the western North American power system; prototype power-system operation and control software installed and tested at three major North American control centers; and the incubation of a new commercial-grade operation and control software tool. Work under this grant supported the DOE-OE goals in the area of “Real Time Grid Reliability Management.”
NASA Technical Reports Server (NTRS)
Thompson, Bryan
2000-01-01
This is the final report for a project carried out to modify a manual commercial Cavendish balance for automated use in a cryostat. The scope of this project was to modify an off-the-shelf, manually operated Cavendish balance to allow automated operation for periods of hours or days in a cryostat. The purpose of this modification was to allow the balance to be used in the study of the effects of superconducting materials on the local gravitational field strength, to determine whether the strength of gravitational fields can be reduced. A Cavendish balance was chosen because it is a fairly simple piece of equipment for measuring the gravitational constant, one of the least accurately known and least understood physical constants. The principal activities that occurred under this purchase order were: (1) all the components necessary to hold and automate the Cavendish balance in a cryostat were designed; engineering drawings were made of custom parts to be fabricated, and other off-the-shelf parts were procured; (2) software was written in LabVIEW to control the automation process via a stepper motor controller and stepper motor, and to collect data from the balance during testing; (3) software was written to take the data collected from the Cavendish balance and reduce it to a value for the gravitational constant; (4) the components of the system were assembled and fitted to a cryostat, along with the LabVIEW hardware, including the control computer, stepper motor driver, data collection boards, and necessary cabling; and (5) the system was operated for a number of periods, and the data were collected and reduced to give an average value for the gravitational constant.
Accelerating Dust Storm Simulation by Balancing Task Allocation in Parallel Computing Environment
NASA Astrophysics Data System (ADS)
Gui, Z.; Yang, C.; XIA, J.; Huang, Q.; YU, M.
2013-12-01
1) A quadratic-programming-based modeling method is proposed. This algorithm performs well with a small number of computing tasks; however, its efficiency decreases significantly as the subdomain number and the computing node number increase. 2) To compensate for this performance decrease on large-scale tasks, a K-Means clustering based algorithm is introduced. Instead of seeking fully optimized solutions, this method obtains relatively good feasible solutions within acceptable time. However, it may introduce imbalanced communication between nodes, or node-isolated subdomains. This research shows that both algorithms have their own strengths and weaknesses for task allocation. A combination of the two algorithms is under study to obtain better performance. Keywords: Scheduling; Parallel Computing; Load Balance; Optimization; Cost Model
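The K-Means-based allocation idea can be sketched as clustering subdomain centres into one group per compute node, so that spatially adjacent subdomains (which exchange halo data) tend to share a node; this toy version omits the cost model and load constraints of the actual method, and the function name is hypothetical:

```python
import random

def kmeans_allocate(coords, n_nodes, iters=50, seed=0):
    """Cluster subdomain centre coordinates into one group per compute
    node with plain Lloyd-style k-means, so neighbouring subdomains
    land on the same node and inter-node communication is reduced."""
    rnd = random.Random(seed)
    centres = rnd.sample(coords, n_nodes)        # initial centres
    for _ in range(iters):
        groups = [[] for _ in range(n_nodes)]
        for p in coords:
            # assign each subdomain to its nearest centre
            k = min(range(n_nodes),
                    key=lambda k: (p[0] - centres[k][0]) ** 2
                                + (p[1] - centres[k][1]) ** 2)
            groups[k].append(p)
        # recompute centres; keep the old centre for an empty group
        centres = [
            (sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
            if g else centres[k]
            for k, g in enumerate(groups)
        ]
    return groups
```

The real method would additionally weight subdomains by estimated computing cost and rebalance groups whose totals diverge too far.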
Using Process Load Cell Information for IAEA Safeguards at Enrichment Plants
Laughter, Mark D; Whitaker, J Michael; Howell, John
2010-01-01
Uranium enrichment service providers are expanding existing enrichment plants and constructing new facilities to meet demands resulting from the shutdown of gaseous diffusion plants, the completion of the U.S.-Russia highly enriched uranium downblending program, and the projected global renaissance in nuclear power. The International Atomic Energy Agency (IAEA) conducts verification inspections at safeguarded facilities to provide assurance that signatory States comply with their treaty obligations to use nuclear materials only for peaceful purposes. Continuous, unattended monitoring of load cells in UF₆ feed/withdrawal stations can provide safeguards-relevant process information to make existing safeguards approaches more efficient and effective and enable novel safeguards concepts such as information-driven inspections. The IAEA has indicated that process load cell monitoring will play a central role in future safeguards approaches for large-scale gas centrifuge enrichment plants. This presentation will discuss previous work and future plans related to continuous load cell monitoring, including: (1) algorithms for automated analysis of load cell data, including filtering methods to determine significant weights and eliminate irrelevant impulses; (2) development of metrics for declaration verification and off-normal operation detection ('cylinder counting,' near-real-time mass balancing, F/P/T ratios, etc.); (3) requirements to specify what potentially sensitive data is safeguards relevant, at what point the IAEA gains on-site custody of the data, and what portion of that data can be transmitted off-site; (4) authentication, secure on-site storage, and secure transmission of load cell data; (5) data processing and remote monitoring schemes to control access to sensitive and proprietary information; (6) integration of process load cell data in a layered safeguards approach with cross-check verification; (7) process mock-ups constructed to provide simulated
Balanced ultrafiltration: inflammatory mediator removal capacity.
Guan, Yulong; Wan, Caihong; Wang, Shigang; Sun, Peng; Long, Cun
2012-10-01
Ultrafiltration with a hemoconcentrator may remove excess fluid load and alleviate tissue edema, and has been universally adopted in extracorporeal circulation protocols during pediatric cardiac surgery. Balanced ultrafiltration is advocated to remove inflammatory mediators generated during surgery. However, whether balanced ultrafiltration can remove all or a portion of the inflammatory mediator load remains unclear. The inflammatory mediator removal capacity of zero-balanced ultrafiltration was measured during pediatric extracorporeal circulation in vitro. The extracorporeal circuit was composed of a cardiotomy reservoir, a D902 Lilliput 2 membrane oxygenator, and a Capiox AF02 pediatric arterial line filter. The Hemoconcentrator BC 20 plus was placed between the arterial purge line and the oxygenator venous reservoir. Fresh donor human whole blood was added into the circuit and mixed with Ringer's solution to obtain a final hematocrit of 24-28%. After 2 h of extracorporeal circulation, zero-balanced ultrafiltration was initiated and arterial line pressure was maintained at approximately 100 mmHg with a Hoffman clamp. The rate of ultrafiltration (12 mL/min) was controlled by the ultrafiltrate outlet pressure. An identical volume of Plasma-Lyte A was dripped into the circuit to maintain a stable hematocrit during the 45 min of the experiment. Plasma and ultrafiltrate samples were drawn every 5 min, and concentrations of inflammatory mediators including interleukin-1β (IL-1β), IL-6, IL-10, neutrophil elastase (NE), and tumor necrosis factor-α (TNF-α) were measured. All assayed inflammatory mediators were detected in the ultrafiltrate, demonstrating that ultrafiltration can remove inflammatory mediators. However, dynamic observations suggested that the concentration of NE was the highest among the five inflammatory mediators in both plasma and ultrafiltrate (P < 0.001). IL-1β had the lowest concentration in plasma, whereas the concentration of TNF-α was the lowest in ultrafiltrate (P
2006-03-01
by the fatigue loading. The specimen is then attached to the fixture by three hardened-steel pins at each end. The material properties of 15-5PH ... [Fig. 4: Strain hardening curves for the specimen (Al 2024-T3) and fixture (15-5PH stainless steel).] ... pair grips) are shown in Fig. 16. The fixture is made of 15-5PH stainless steel and has a thickness of 12.6 mm. The specimen is made of aluminum
NASA Technical Reports Server (NTRS)
Holliday, Ezekiel S. (Inventor)
2014-01-01
Vibrations of a principal machine are reduced at the fundamental and harmonic frequencies by driving the drive motor of an active balancer with balancing signals at the fundamental and selected harmonics. Vibrations are sensed to provide a signal representing the mechanical vibrations. A balancing signal generator for the fundamental and for each selected harmonic processes the sensed vibration signal with an adaptive filter algorithm to generate a balancing signal for that frequency. Reference inputs for each frequency are applied to the adaptive filter algorithm of each balancing signal generator at the frequency assigned to that generator. The balancing signals for all of the frequencies are summed and applied to the drive motor, driving it with a drive voltage component in opposition to the vibration at each frequency.
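A single-frequency version of the adaptive-filter scheme described above can be sketched with a two-weight LMS canceller that adapts the amplitudes of sine/cosine references; this is a textbook narrowband LMS illustration, not the patented implementation, and the function name and parameters are assumptions:

```python
import math

def lms_balancer(vibration, freq, fs, mu=0.01):
    """Single-frequency adaptive (LMS) canceller: adapts the amplitudes
    of sine/cosine references at `freq` so the output opposes the
    measured vibration at that frequency.

    vibration: sampled vibration signal; fs: sample rate (Hz);
    mu: LMS step size.  Returns the balancing drive signal.
    """
    w_sin = w_cos = 0.0
    drive = []
    for n, v in enumerate(vibration):
        s = math.sin(2 * math.pi * freq * n / fs)
        c = math.cos(2 * math.pi * freq * n / fs)
        y = w_sin * s + w_cos * c        # current balancing output
        e = v - y                        # residual vibration
        w_sin += mu * e * s              # LMS weight updates
        w_cos += mu * e * c
        drive.append(y)
    return drive
```

In the multi-harmonic arrangement of the patent, one such generator runs per selected frequency and the drive signals are summed before being applied to the motor.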
Zemková, Erika
2014-05-01
This review covers the latest findings from experimental studies of sport-specific balance, an area of research that has grown dramatically in recent years. The main objectives of this work were to investigate the postural sway response to different forms of exercise under laboratory and sport-specific conditions, to examine how this effect can vary with expertise, and to provide examples of the association of impaired balance with sport performance and/or increased risk of injury. In doing so, sports where body balance is one of the limiting factors of performance were analyzed. While there are no significant differences in postural stability between athletes of different specializations and physically active individuals during standing in a standard upright position (e.g., bipedal stance), athletes have a better ability to maintain balance in specific conditions (e.g., while standing on a narrow area of support). Differences in the magnitude of balance impairment after specific exercises (rebound jumps, repeated rotations, etc.), and especially in the speed of its readjustment to baseline, are also observed. Besides some evidence associating greater postural sway with an increased risk of injury, there are many myths about the negative influence of impaired balance on sport performance. Though this may be true for shooting or archery, findings have shown that in many other sports, highly skilled athletes are able to perform successfully in spite of increased postural sway. These findings may contribute to a better understanding of the postural control system under various performance requirements and may provide useful knowledge for designing training programs for specific sports.
Calibration Variable Selection and Natural Zero Determination for Semispan and Canard Balances
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert M.
2013-01-01
Independent calibration variables for the characterization of semispan and canard wind tunnel balances are discussed. It is shown that the variable selection for a semispan balance is determined by the location of the resultant normal and axial forces that act on the balance. These two forces are the first and second calibration variables. The pitching moment becomes the third calibration variable after the normal and axial forces are shifted to the pitch axis of the balance. Two geometric distances, i.e., the rolling and yawing moment arms, are the fourth and fifth calibration variables. They are traditionally replaced by the corresponding moments to simplify the use of calibration data during a wind tunnel test. A canard balance is related to a semispan balance: it, too, measures loads on only one half of a lifting surface. However, the axial force and yawing moment are of no interest to users of a canard balance, so its calibration variable set is reduced to the normal force, pitching moment, and rolling moment. The combined load diagrams of the rolling and yawing moment for a semispan balance are discussed. They may be used to illustrate connections between the wind tunnel model geometry, the test section size, and the calibration load schedule. Then, methods are reviewed that may be used to obtain the natural zeros of a semispan or canard balance. In addition, characteristics of three semispan balance calibration rigs are discussed. Finally, basic requirements for a full characterization of a semispan balance are reviewed.
NASA LaRC Strain Gage Balance Design Concepts
NASA Technical Reports Server (NTRS)
Rhew, Ray D.
1999-01-01
The NASA Langley Research Center (LaRC) has been designing strain-gage balances for more than fifty years. These balances have been utilized in Langley's wind tunnels, which span a wide variety of aerodynamic test regimes, as well as in other ground-based test facilities and in space flight applications. As a result, the designs encompass a large array of sizes, loads, and environmental effects. Currently Langley has more than 300 balances available to its researchers. This paper focuses on the design concepts for internal sting-mounted strain-gage balances; however, these techniques can be applied to all force-measurement design applications. Strain-gage balance concepts that have been developed over the years, including material selection, sting and model interfaces, measuring sections, fabrication, strain-gaging, and calibration, will be discussed.
Constant current load matches impedances of electronic components
NASA Technical Reports Server (NTRS)
Alexander, R. M.
1970-01-01
Constant current load with negative resistance characteristics actively compensates for impedance variations in circuit components. Through a current-voltage balancing operation the internal impedance of the diodes is maintained at a constant value. This constant current load circuit can be used in simple telemetry systems.
NASA Technical Reports Server (NTRS)
Parker, Peter A. (Inventor)
2003-01-01
A single vector calibration system is provided which facilitates the calibration of multi-axis load cells, including wind tunnel force balances. The single vector system provides the capability to calibrate a multi-axis load cell using a single directional load, for example loading solely in the gravitational direction. The system manipulates the load cell in three-dimensional space, while keeping the uni-directional calibration load aligned. The use of a single vector calibration load reduces the set-up time for the multi-axis load combinations needed to generate a complete calibration mathematical model. The system also reduces load application inaccuracies caused by the conventional requirement to generate multiple force vectors. The simplicity of the system reduces calibration time and cost, while simultaneously increasing calibration accuracy.
NASA Technical Reports Server (NTRS)
Malcolm, G. N.
1981-01-01
Two wind tunnel techniques for determining part of the aerodynamic information required to describe the dynamic behavior of various types of vehicles in flight are described. Force and moment measurements are determined with a rotary-balance apparatus in a coning motion and with a Magnus balance in a high-speed spinning motion. Coning motion is pertinent to both aircraft and missiles, and spinning is important for spin-stabilized missiles. Basic principles of both techniques are described, and specific examples of each type of apparatus are presented. Typical experimental results are also discussed.
Development of the NTF-117S Semi-Span Balance
NASA Technical Reports Server (NTRS)
Lynn, Keith C.
2010-01-01
A new high-capacity semi-span force and moment balance has recently been developed for use at the National Transonic Facility (NTF) at the NASA Langley Research Center. This new semi-span balance provides the NTF a new measurement capability that will support testing of semi-span models in transonic high-lift testing regimes. Future testing utilizing this new balance capability will include active circulation control and propulsion simulation testing of semi-span transonic wing models. The NTF has recently implemented a new high-pressure air delivery station that will provide both high and low mass flow pressure lines that are routed out to the semi-span models via a set of high/low-pressure bellows indirectly linked to the metric end of the NTF-117S balance. A new check-load stand is currently being developed to provide the NTF with an in-house capability for performing check-loads on the NTF-117S balance in order to determine the pressure tare effects on the overall performance of the balance. An experimental design is being developed that will allow the static pressure tare effects on balance performance to be assessed experimentally.
Hierarchical fractional-step approximations and parallel kinetic Monte Carlo algorithms
Arampatzis, Giorgos; Katsoulakis, Markos A.; Plechac, Petr; Taufer, Michela; Xu, Lifan
2012-10-01
We present a mathematical framework for constructing and analyzing parallel algorithms for lattice kinetic Monte Carlo (KMC) simulations. The resulting algorithms have the capacity to simulate a wide range of spatio-temporal scales in spatially distributed, non-equilibrium physicochemical processes with complex chemistry and transport micro-mechanisms. Rather than focusing on constructing the stochastic trajectories exactly, our approach relies on approximating the evolution of observables, such as density, coverage, correlations, and so on. More specifically, we develop a spatial domain decomposition of the Markov operator (generator) that describes the evolution of all observables according to the kinetic Monte Carlo algorithm. This domain decomposition corresponds to a decomposition of the Markov generator into a hierarchy of operators and can be tailored to specific hierarchical parallel architectures such as multi-core processors or clusters of Graphical Processing Units (GPUs). Based on this operator decomposition, we formulate parallel fractional-step kinetic Monte Carlo algorithms by employing the Trotter Theorem and its randomized variants; these schemes (a) are partially asynchronous on each fractional-step time window, and (b) are characterized by their communication schedule between processors. The proposed mathematical framework allows us to rigorously justify the numerical and statistical consistency of the proposed algorithms, showing the convergence of our approximating schemes to the original serial KMC. The approach also provides a systematic evaluation of different processor communication schedules. We carry out a detailed benchmarking of the parallel KMC schemes using available exact solutions, for example, in Ising-type systems, and we demonstrate the capabilities of the method to simulate complex spatially distributed reactions at very large scales on GPUs. Finally, we discuss work load balancing between processors and propose a re-balancing
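The Trotter-based fractional-step idea underlying these schemes can be illustrated on plain matrix generators: a first-order Lie-Trotter split of exp((A+B)t) into alternating sub-steps, whose error shrinks linearly with the fractional step size. The matrices here are arbitrary stand-ins, not KMC generators.

```python
import numpy as np
from scipy.linalg import expm

# Two non-commuting generators standing in for sub-domain operators.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)); A = 0.5 * (M - M.T)   # skew-symmetric
M = rng.standard_normal((4, 4)); B = 0.5 * (M - M.T)

t = 1.0
exact = expm((A + B) * t)

def lie_trotter(A, B, t, n):
    """First-order fractional-step approximation: alternate the two
    sub-generators over n fractional time windows."""
    step = expm(A * t / n) @ expm(B * t / n)
    return np.linalg.matrix_power(step, n)

err = [np.linalg.norm(lie_trotter(A, B, t, n) - exact) for n in (10, 20, 40)]
# doubling the number of fractional steps roughly halves the error
print(err[0] / err[1], err[1] / err[2])
```

Higher-order (e.g., Strang) splittings and the randomized variants mentioned in the abstract follow the same pattern with different sub-step orderings.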
Offshore tanker loading system
Baan, J. de; Heijst, W.J. van.
1994-01-04
The present invention relates to an improved flexible loading system which provides fluid communication between a subsea pipeline and a surface vessel, including a hose extending from the subsea pipeline to a first buoyancy tank, a second hose extending from the first buoyancy tank to a central buoyancy tank, a second buoyancy tank, means connecting said second buoyancy tank to the sea floor and to the central buoyancy tank whereby the forces exerted on said central buoyancy tank by said second hose and said connecting means are balanced to cause said central buoyancy tank to maintain a preselected position, a riser section extending upwardly from said central buoyancy tank, and means on the upper termination for engagement by a vessel on the surface to raise said upper termination onto the vessel to complete the communication for moving fluids between the subsea pipeline and the vessel. In one form, the means for connecting the sea floor to the second buoyancy tank includes an anchor on the sea floor and lines extending from the anchor to the second buoyancy tank and from the second buoyancy tank to the central buoyancy tank. In another form of the invention, the means for connecting is a third hose extending from a second subsea pipeline to the second buoyancy tank and a fourth hose extending from the second buoyancy tank to the central buoyancy tank. The central buoyancy tank is preferably maintained at a level below the water surface which allows full movement of the vessel while connected to the riser section. A swivel may be positioned in the riser section, and a pressure relief system may be included in the loading system to protect it from sudden excess pressures. 17 figs.
The Balanced Billing Cycle Vehicle Routing Problem
Groer, Christopher S.; Golden, Bruce; Wasil, Edward
2009-01-01
Utility companies typically send their meter readers out each day of the billing cycle to determine each customer's usage for the period. Customer churn requires the utility company to periodically remove some customer locations from its meter-reading routes. On the other hand, the addition of new customers and locations requires the utility company to add new stops to the existing routes. A utility that does not adjust its meter-reading routes over time can find itself with inefficient routes and, subsequently, higher meter-reading costs. Furthermore, the utility can end up with certain billing days that require substantially larger meter-reading resources than others. However, remedying this problem is not as simple as it may initially seem. Certain regulatory and customer service considerations can prevent the utility from shifting a customer's billing day by more than a few days in either direction. Thus, the problem of reducing meter-reading costs while balancing the workload can become quite difficult. We describe this Balanced Billing Cycle Vehicle Routing Problem in more detail and develop an algorithm for providing solutions to a slightly simplified version of the problem. Our three-stage algorithm uses a combination of heuristics and integer programming. We discuss the performance of our procedure on a real-world data set.
Optimized Loading for Particle-in-cell Gyrokinetic Simulations
J.L.V. Lewandowski
2004-05-13
The problem of particle loading in particle-in-cell gyrokinetic simulations is addressed using a quadratic optimization algorithm. Optimized loading in configuration space dramatically reduces the short-wavelength modes in the electrostatic potential that are partly responsible for the non-conservation of total energy; further, the long-wavelength modes are resolved with good accuracy. As a result, energy conservation with optimized loading is much better than with random loading. The method is valid for any geometry and can be coupled to optimization algorithms in velocity space.
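The paper's quadratic-optimization loading is not reproduced here, but the effect it targets can be seen with a minimal comparison: a structured (stratified) loading in configuration space suppresses the short-wavelength density fluctuations that a purely random loading produces.

```python
import numpy as np

def density_noise(x, nbins=32):
    """RMS fluctuation of the binned particle density about uniform."""
    counts, _ = np.histogram(x, bins=nbins, range=(0.0, 1.0))
    expected = len(x) / nbins
    return np.sqrt(np.mean((counts - expected) ** 2)) / expected

n = 4096
rng = np.random.default_rng(1)
random_load = rng.random(n)                        # plain random loading
stratified = (np.arange(n) + rng.random(n)) / n    # one particle per sub-cell

# random loading fluctuates like 1/sqrt(N per bin); stratified does not
print(density_noise(random_load), density_noise(stratified))
```

The optimized loading of the paper goes further than this stratified sketch, but both exploit the same principle: short-wavelength density noise, not the mean profile, is what degrades energy conservation.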
Stochastic solution of population balance equations for reactor networks
Menz, William J.; Akroyd, Jethro; Kraft, Markus
2014-01-01
This work presents a sequential modular approach to solve a generic network of reactors with a population balance model using a stochastic numerical method. Full-coupling to the gas-phase is achieved through operator-splitting. The convergence of the stochastic particle algorithm in test networks is evaluated as a function of network size, recycle fraction and numerical parameters. These test cases are used to identify methods through which systematic and statistical error may be reduced, including by use of stochastic weighted algorithms. The optimal algorithm was subsequently used to solve a one-dimensional example of silicon nanoparticle synthesis using a multivariate particle model. This example demonstrated the power of stochastic methods in resolving particle structure by investigating the transient and spatial evolution of primary polydispersity, degree of sintering and TEM-style images. Highlights: •An algorithm is presented to solve reactor networks with a population balance model. •A stochastic method is used to solve the population balance equations. •The convergence and efficiency of the reported algorithms are evaluated. •The algorithm is applied to simulate silicon nanoparticle synthesis in a 1D reactor. •Particle structure is reported as a function of reactor length and time.
Single-Vector Calibration of Wind-Tunnel Force Balances
NASA Technical Reports Server (NTRS)
Parker, P. A.; DeLoach, R.
2003-01-01
An improved method of calibrating a wind-tunnel force balance involves the use of a unique load application system integrated with formal experimental design methodology. The Single-Vector Force Balance Calibration System (SVS) overcomes the productivity and accuracy limitations of prior calibration methods. A force balance is a complex structural spring element instrumented with strain gauges for measuring three orthogonal components of aerodynamic force (normal, axial, and side force) and three orthogonal components of aerodynamic torque (rolling, pitching, and yawing moments). Force balances remain the state-of-the-art instruments providing these measurements on a scale model of an aircraft during wind tunnel testing. Ideally, each electrical channel of the balance would respond only to its respective component of load and have no response to other components of load. This is not entirely possible even though balance designs are optimized to minimize these undesirable interaction effects. Ultimately, a calibration experiment is performed to obtain the necessary data to generate a mathematical model and determine the force measurement accuracy. In order to set the independent variables of applied load for the calibration experiment, a high-precision mechanical system is required. Manual deadweight systems have been in use at Langley Research Center (LaRC) since the 1940s. These simple methodologies produce high-confidence results, but the process is mechanically complex and labor-intensive, requiring three to four weeks to complete. Over the past decade, automated balance calibration systems have been developed. In general, these systems were designed to automate the tedious manual calibration process, resulting in even more complex systems that degrade load-application quality. The current calibration approach relies on a one-factor-at-a-time (OFAT) methodology, where each independent variable is
ERIC Educational Resources Information Center
Lewis, Tamika; Mobley, Mary; Huttenlock, Daniel
2013-01-01
It's the season for the job hunt, whether one is looking for a first job or taking the next step along a career path. This article presents first-person accounts of how teachers balance the rewards and challenges of working in different types of schools. Tamika Lewis, a third-grade teacher, states that faculty at her school is…
ERIC Educational Resources Information Center
Yahnke, Sally; And Others
The purpose of this monograph is to present a series of activities designed to teach strategies needed for effectively managing the multiple responsibilities of family and work. The guide contains 11 lesson plans dealing with balancing family and work that can be used in any home economics class, from middle school through college. The lesson…
Balancing Your Evaluation Act.
ERIC Educational Resources Information Center
Willyerd, Karie A.
1997-01-01
Looks at different performance-measurement tools than can ensure that a training or performance solution is strategically aligned, objectively evaluated, and quantitatively measured for results. Suggests aiming for a balance among the financial, customer, and internal perspectives and the innovation and learning that can result. (Author/JOW)
Bialas, A.
2011-02-15
The idea of glue clusters, i.e., short-range correlations in the quark-gluon plasma close to freeze-out, is used to estimate the width of balance functions in momentum space. A good agreement is found with the recent measurements of the STAR Collaboration for central Au-Au collisions.
ERIC Educational Resources Information Center
Bray, George A.
1985-01-01
Explains relationships between energy intake and expenditure focusing on the cellular, chemical and neural mechanisms involved in regulation of energy balance. Information is referenced specifically to conditions of obesity. (Physicians may earn continuing education credit by completing an appended test). (ML)
ERIC Educational Resources Information Center
Gordon, Milton A.; Gordon, Margaret F.
1996-01-01
New college presidents are inundated with requests for their time, and their private life is often sacrificed. Each administrator must decide what is the appropriate balance among various aspects of his/her position. Physical separation of public and private lives is essential, and the role of the spouse, who may have other professional…
ERIC Educational Resources Information Center
Our Children, 1997
1997-01-01
Changes in the workplace that would provide flexibility for working parents are slowly developing and receiving government, business, and societal attention. A sidebar, "Mother, Professional, Volunteer: One Woman's Balancing Act," presents an account of how one woman rearranged her professional life to enable her to do full-time…
Maintaining an Environmental Balance
ERIC Educational Resources Information Center
Environmental Science and Technology, 1976
1976-01-01
A recent conference of the National Environmental Development Association focused on the concepts of environment, energy and economy and underscored the necessity for balancing the critical needs embodied in these issues. Topics discussed included: nuclear energy and wastes, water pollution control, federal regulations, environmental technology…
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Lomax, Harvard
1987-01-01
The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.
Schulz, Andreas S.; Shmoys, David B.; Williamson, David P.
1997-01-01
Increasing global competition, rapidly changing markets, and greater consumer awareness have altered the way in which corporations do business. To become more efficient, many industries have sought to model some operational aspects by gigantic optimization problems. It is not atypical to encounter models that capture 10^6 separate “yes” or “no” decisions to be made. Although one could, in principle, try all 2^(10^6) possible solutions to find the optimal one, such a method would be impractically slow. Unfortunately, for most of these models, no algorithms are known that find optimal solutions with reasonable computation times. Typically, industry must rely on solutions of unguaranteed quality that are constructed in an ad hoc manner. Fortunately, for some of these models there are good approximation algorithms: algorithms that produce solutions quickly that are provably close to optimal. Over the past 6 years, there has been a sequence of major breakthroughs in our understanding of the design of approximation algorithms and of limits to obtaining such performance guarantees; this area has been one of the most flourishing areas of discrete mathematics and theoretical computer science. PMID:9370525
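As a concrete instance of such a performance guarantee, the classic maximal-matching heuristic for vertex cover returns a solution at most twice the optimum; this textbook example is added for illustration and is not drawn from the article itself.

```python
def vertex_cover_2approx(edges):
    """Classic 2-approximation: repeatedly pick an uncovered edge and take
    both endpoints. The picked edges form a matching, and any optimal cover
    must contain at least one endpoint of each, so |cover| <= 2 * OPT."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
cover = vertex_cover_2approx(edges)
# every edge is covered; here OPT = {0, 3}, and |cover| = 4 <= 2 * 2
print(sorted(cover), all(u in cover or v in cover for u, v in edges))
```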
NASA Astrophysics Data System (ADS)
Slattery, Stuart R.
2016-02-01
In this paper we analyze and extend mesh-free algorithms for three-dimensional data transfer problems in partitioned multiphysics simulations. We first provide a direct comparison between a mesh-based weighted residual method using the common-refinement scheme and two mesh-free algorithms leveraging compactly supported radial basis functions: one using a spline interpolation and one using a moving least square reconstruction. Through the comparison we assess both the conservation and accuracy of the data transfer obtained from each of the methods. We do so for a varying set of geometries with and without curvature and sharp features and for functions with and without smoothness and with varying gradients. Our results show that the mesh-based and mesh-free algorithms are complementary with cases where each was demonstrated to perform better than the other. We then focus on the mesh-free methods by developing a set of algorithms to parallelize them based on sparse linear algebra techniques. This includes a discussion of fast parallel radius searching in point clouds and restructuring the interpolation algorithms to leverage data structures and linear algebra services designed for large distributed computing environments. The scalability of our new algorithms is demonstrated on a leadership class computing facility using a set of basic scaling studies. These scaling studies show that for problems with reasonable load balance, our new algorithms for both spline interpolation and moving least square reconstruction demonstrate both strong and weak scalability using more than 100,000 MPI processes with billions of degrees of freedom in the data transfer operation.
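A minimal sketch of the mesh-free transfer step, using SciPy's RBFInterpolator as a stand-in for the methods above (a nearest-neighbor cutoff imitates the compact support; the point sets, field, kernel, and neighbor count are invented for illustration): a thin-plate-spline interpolant with its linear polynomial tail reproduces a linear test field essentially exactly.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Source points: one code's surface discretization, with a field sampled there.
rng = np.random.default_rng(2)
src = rng.random((400, 3))
field = lambda p: p[:, 0] + 2.0 * p[:, 1] - p[:, 2]
src_vals = field(src)

# Target points: the other physics code's (non-matching) discretization.
tgt = rng.random((200, 3))

# Locality via a neighbor cutoff stands in for compactly supported kernels.
transfer = RBFInterpolator(src, src_vals, neighbors=50,
                           kernel='thin_plate_spline')
tgt_vals = transfer(tgt)

# The spline's linear polynomial tail reproduces a linear field exactly.
print(np.max(np.abs(tgt_vals - field(tgt))))
```

In the paper's parallel setting, the neighbor search and the sparse local solves are exactly the pieces restructured around distributed linear-algebra services.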
Slattery, Stuart R.
2015-12-02
In this study we analyze and extend mesh-free algorithms for three-dimensional data transfer problems in partitioned multiphysics simulations. We first provide a direct comparison between a mesh-based weighted residual method using the common-refinement scheme and two mesh-free algorithms leveraging compactly supported radial basis functions: one using a spline interpolation and one using a moving least square reconstruction. Through the comparison we assess both the conservation and accuracy of the data transfer obtained from each of the methods. We do so for a varying set of geometries with and without curvature and sharp features and for functions with and without smoothness and with varying gradients. Our results show that the mesh-based and mesh-free algorithms are complementary with cases where each was demonstrated to perform better than the other. We then focus on the mesh-free methods by developing a set of algorithms to parallelize them based on sparse linear algebra techniques. This includes a discussion of fast parallel radius searching in point clouds and restructuring the interpolation algorithms to leverage data structures and linear algebra services designed for large distributed computing environments. The scalability of our new algorithms is demonstrated on a leadership class computing facility using a set of basic scaling studies. Finally, these scaling studies show that for problems with reasonable load balance, our new algorithms for both spline interpolation and moving least square reconstruction demonstrate both strong and weak scalability using more than 100,000 MPI processes with billions of degrees of freedom in the data transfer operation.
Power system very short-term load prediction
Trudnowski, D.J.; Johnson, J.M.; Whitney, P.
1997-02-01
A fundamental objective of a power-system operating and control scheme is to maintain a match between the system's overall real-power load and generation. To accurately maintain this match, modern energy management systems require estimates of the future total system load. Several strategies and tools are available for estimating system load. Nearly all of these estimate the future load in 1-hour steps over several hours (or time frames very close to this). While hourly load estimates are very useful for many operation and control decisions, more accurate estimates at closer intervals would also be valuable. This is especially true for emerging Area Generation Control (AGC) strategies such as look-ahead AGC. For these short-term estimation applications, future load estimates out to several minutes at intervals of 1 to 5 minutes are required. The currently emerging operation and control strategies being developed by the BPA depend on accurate very short-term load estimates. To meet this need, the BPA commissioned the Pacific Northwest National Laboratory (PNNL) and Montana Tech (an affiliate of the University of Montana) to develop an accurate load-prediction algorithm and computer codes that update automatically and can reliably perform in a closed-loop controller for the BPA system. The requirements include accurate load estimation in 5-minute steps out to 2 hours. This report presents the results of this effort and includes: a methodology and algorithms for short-term load prediction that incorporate information from a general hourly forecaster; specific algorithm parameters for implementing the predictor in the BPA system; performance and sensitivity studies of the algorithms on BPA-supplied data; an algorithm for filtering power-system load samples as a precursor to input into the predictor; and FORTRAN 77 subroutines for implementing the algorithms.
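A toy version of such a very short-term predictor (not the BPA algorithm itself) blends an interpolated hourly forecast with a geometrically decaying persistence of the currently observed deviation; the decay constant and data below are invented for illustration.

```python
import numpy as np

def predict_load(history, hourly_forecast, steps=24, alpha=0.9):
    """Predict `steps` future 5-minute loads: interpolate the hourly
    forecast to 5-minute resolution, then add the current deviation from
    that forecast, decayed geometrically. `alpha` is an assumed tuning
    constant, not taken from the report."""
    minutes = np.arange(steps + 1) * 5
    base = np.interp(minutes / 60.0,
                     np.arange(len(hourly_forecast)), hourly_forecast)
    deviation = history[-1] - base[0]          # how far off the forecast is now
    return base[1:] + deviation * alpha ** np.arange(1, steps + 1)

# Hourly forecaster says 1000 -> 1050 -> 1100 MW; current sample is 20 MW high.
hourly = np.array([1000.0, 1050.0, 1100.0])
history = np.array([1020.0])
pred = predict_load(history, hourly, steps=24)  # 2 hours in 5-minute steps
print(pred[0], pred[-1])
```

The 24-step horizon matches the report's requirement of 5-minute estimates out to 2 hours; the real predictor additionally filters the incoming load samples before prediction.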
On the relationship between wind profiles and the STS ascent structural loads
NASA Technical Reports Server (NTRS)
Smith, Orvel E.; Adelfang, Stanley I.; Whitehead, Douglas S.
1989-01-01
The response of STS ascent structural load indicators to the wind profile is analyzed. The load indicator values versus Mach numbers are calculated with algorithms using trajectory information. The ascent load minimum margin concept is used to show that the detailed wind profile structure measured by the Jimsphere wind system is not needed to assess the STS rigid body structural wind loads.
An efficient parallel termination detection algorithm
Baker, A. H.; Crivelli, S.; Jessup, E. R.
2004-05-27
Information local to any one processor is insufficient to monitor the overall progress of most distributed computations. Typically, a second distributed computation for detecting termination of the main computation is necessary. To be a useful computational tool, the termination detection routine must operate concurrently with the main computation, adding minimal overhead, and it must promptly and correctly detect termination when it occurs. In this paper, we present a new algorithm for detecting the termination of a parallel computation on distributed-memory MIMD computers that satisfies all of those criteria. A variety of termination detection algorithms have been devised. Of these, the algorithm presented by Sinha, Kale, and Ramkumar (henceforth, the SKR algorithm) is unique in its ability to adapt to the load conditions of the system on which it runs, thereby minimizing the impact of termination detection on performance. Because their algorithm also detects termination quickly, we consider it to be the most efficient practical algorithm presently available. The termination detection algorithm presented here was developed for use in the PMESC programming library for distributed-memory MIMD computers. Like the SKR algorithm, our algorithm adapts to system loads and imposes little overhead. Also like the SKR algorithm, ours is tree-based, and it does not depend on any assumptions about the physical interconnection topology of the processors or the specifics of the distributed computation. In addition, our algorithm is easier to implement and requires only half as many tree traversals as does the SKR algorithm. This paper is organized as follows. In section 2, we define our computational model. In section 3, we review the SKR algorithm. We introduce our new algorithm in section 4 and prove its correctness in section 5. We discuss its efficiency and present experimental results in section 6.
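The tree-reduction structure these algorithms share can be shown in a deliberately simplified form: under a frozen snapshot with no messages in flight, termination holds exactly when an AND of idle flags propagates up the processor tree. Handling in-flight messages and repeated detection waves, which is where the SKR algorithm and the new one actually differ, is omitted here.

```python
def subtree_idle(tree, node):
    """AND the idle flags over the subtree rooted at `node`."""
    idle, children = tree[node]
    return idle and all(subtree_idle(tree, c) for c in children)

# node -> (is_idle, child_nodes); node 0 is the root of the detection tree
tree = {0: (True, [1, 2]), 1: (True, [3]), 2: (True, []), 3: (True, [])}
all_idle = subtree_idle(tree, 0)       # every processor idle -> terminated
tree[3] = (False, [])                  # one busy leaf invalidates the wave
still_running = not subtree_idle(tree, 0)
print(all_idle, still_running)
```

A real detector must also count messages sent and received per subtree so that a wave completing with all processors idle but messages outstanding is not mistaken for termination.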
Ascent guidance algorithm using lidar wind measurements
NASA Technical Reports Server (NTRS)
Cramer, Evin J.; Bradt, Jerre E.; Hardtla, John W.
1990-01-01
The formulation of a general nonlinear programming guidance algorithm that incorporates wind measurements in the computation of ascent guidance steering commands is discussed. A nonlinear programming (NLP) algorithm that is designed to solve a very general problem has the potential to address the diversity demanded by future launch systems. Using B-splines for the command functional form allows the NLP algorithm to adjust the shape of the command profile to achieve optimal performance. The algorithm flexibility is demonstrated by simulation of ascent with dynamic loading constraints through a set of random wind profiles with and without wind sensing capability.
Kral, Ulrich; Lin, Chih-Yi; Kellner, Katharina; Ma, Hwong-wen; Brunner, Paul H
2014-01-01
Material management faces a dual challenge: on the one hand satisfying large and increasing demands for goods and on the other hand accommodating wastes and emissions in sinks. Hence, the characterization of material flows and stocks is relevant for both improving resource efficiency and environmental protection. This article focuses on the urban scale, a dimension rarely investigated in past metal flow studies. We compare the copper (Cu) metabolism of two cities in different economic states, namely, Vienna (Europe) and Taipei (Asia). Substance flow analysis is used to calculate urban Cu balances in a comprehensive and transparent form. The main difference between Cu in the two cities appears to be the stock: Vienna seems close to saturation with 180 kilograms per capita (kg/cap) and a growth rate of 2% per year. In contrast, the Taipei stock of 30 kg/cap grows rapidly by 26% per year. Even though most Cu is recycled in both cities, bottom ash from municipal solid waste incineration represents an unused Cu potential accounting for 1% to 5% of annual demand. Nonpoint emissions are predominant; up to 50% of the loadings into the sewer system are from nonpoint sources. The results of this research are instrumental for the design of the Cu metabolism in each city. The outcomes serve as a base for identification and recovery of recyclables as well as for directing nonrecyclables to appropriate sinks, avoiding sensitive environmental pathways. The methodology applied is well suited for city benchmarking if sufficient data are available. PMID:25866460
OVERVIEW AND STATUS OF LAKE MICHIGAN MASS BALANCE MODELLING PROJECT
With most of the data available from the Lake Michigan Mass Balance Project field program, the modeling efforts have begun in earnest. The tributary and atmospheric load estimates are or will be completed soon, so realistic simulations for calibration are beginning. A Quality Ass...
A Force Transducer from a Junk Electronic Balance
ERIC Educational Resources Information Center
Aguilar, Horacio Munguia; Aguilar, Francisco Armenta
2009-01-01
It is shown how the load cell from a junk electronic balance can be used as a force transducer for physics experiments. Recovering this device is not only an inexpensive way of getting a valuable laboratory tool but also very useful didactic work on electronic instrumentation. Some experiments on mechanics with this transducer are possible after a…
Balance and Ensemble Kalman Filter Localization Techniques
2010-01-01
For the serial EnSRF (Whitaker and Hamill, 2002), localization by a distance-dependent function is performed upon BH^T, where each element... localization in terms of balance and accuracy. Here, B-localization is employed with the EnSRF algorithm (Whitaker and Hamill, 2002), whereas R-localization... can be specified as in equation (A1), where Bij is the background covariance between
NASA Astrophysics Data System (ADS)
Rerucha, Simon; Sarbort, Martin; Hola, Miroslava; Cizek, Martin; Hucl, Vaclav; Cip, Ondrej; Lazar, Josef
2016-12-01
Homodyne detection with only a single detector represents a promising approach to interferometric applications: it enables a significant reduction of optical system complexity while preserving the fundamental resolution and dynamic range of single-frequency laser interferometers. We present the design, implementation and analysis of algorithmic methods for computational processing of the single-detector interference signal, based on parallel pipelined processing suitable for real-time implementation on a programmable hardware platform (e.g. an FPGA, Field Programmable Gate Array, or an SoC, System on Chip). The algorithmic methods incorporate (a) the single-detector signal (sine) scaling, filtering, demodulation and mixing necessary for reconstruction of the second (cosine) quadrature signal, followed by a conic section projection in the Cartesian plane, and (b) the phase unwrapping, together with the goniometric and linear transformations needed for scale linearization and periodic error correction. The digital computing scheme was designed for bandwidths up to tens of megahertz, which would allow measuring displacements at velocities around half a metre per second. The algorithmic methods were tested in real-time operation with a PC-based reference implementation that exploited pipelined processing by balancing the computational load among multiple processor cores. The results indicate that the algorithmic methods are suitable for a wide range of applications [3] and that they bring fringe-counting interferometry closer to industrial applications thanks to the simplicity and robustness of the optical setup, computational stability, scalability and cost-effectiveness.
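The phase-unwrapping step is the one part that is easy to show generically: atan2 of the quadrature pair yields a phase wrapped into (-pi, pi], and jumps larger than pi between consecutive samples are removed by adding or subtracting 2*pi. A minimal sketch of that standard technique, not the authors' FPGA pipeline:

```python
import math

def unwrap(phases):
    """Remove 2*pi jumps from atan2 output so the phase grows continuously,
    assuming the true phase changes by less than pi between samples."""
    out = [phases[0]]
    offset = 0.0
    for prev, cur in zip(phases, phases[1:]):
        d = cur - prev
        if d > math.pi:
            offset -= 2 * math.pi
        elif d < -math.pi:
            offset += 2 * math.pi
        out.append(cur + offset)
    return out

# Simulate a steadily increasing true phase sampled as quadrature signals.
true_phase = [0.05 * k for k in range(250)]   # 0 .. ~12.45 rad (two fringes)
wrapped = [math.atan2(math.sin(t), math.cos(t)) for t in true_phase]
recovered = unwrap(wrapped)
print(max(abs(r - t) for r, t in zip(recovered, true_phase)) < 1e-9)  # True
```

The displacement then follows from the unwrapped phase via the laser wavelength; the scale-linearization and periodic-error corrections described in the abstract would be applied on top of this.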
Para-GMRF: parallel algorithm for anomaly detection of hyperspectral image
NASA Astrophysics Data System (ADS)
Dong, Chao; Zhao, Huijie; Li, Na; Wang, Wei
2007-12-01
The hyperspectral imager is capable of collecting hundreds of images corresponding to different wavelength channels for the observed area simultaneously, which makes it possible to discriminate man-made objects from natural background. However, the price paid for this wealth of information is the enormous amount of data, usually hundreds of gigabytes per day. Turning the huge volume of data into useful information and knowledge in real time is critical for geoscientists. In this paper, the proposed parallel Gaussian-Markov random field (Para-GMRF) anomaly detection algorithm is an attempt to apply parallel computing technology to this problem. Based on the locality of the GMRF algorithm, we partition the 3-D hyperspectral image cube in the spatial domain and distribute data blocks to multiple computers for concurrent detection. Meanwhile, to achieve load balance, a work pool scheduler is designed for task assignment. The Para-GMRF algorithm is organized in a master-slave architecture, coded in the C programming language using the message passing interface (MPI) library, and tested on a Beowulf cluster. Experimental results show that the Para-GMRF algorithm successfully conquers the challenge and can be used in time-sensitive areas, such as environmental monitoring and battlefield reconnaissance.
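The work-pool idea (idle workers pull the next spatial block, so faster workers naturally process more blocks and load stays balanced) can be sketched with a thread pool; this stands in for the paper's MPI master-slave design, and the toy "detector" below is a hypothetical placeholder for the GMRF test:

```python
import queue
import threading

def run_work_pool(blocks, n_workers, detect):
    """Master-slave work pool: workers pull blocks until the queue is empty,
    which balances load even when per-block cost varies."""
    tasks = queue.Queue()
    for b in blocks:
        tasks.put(b)
    results, lock = [], threading.Lock()

    def worker():
        while True:
            try:
                block = tasks.get_nowait()
            except queue.Empty:
                return
            r = detect(block)
            with lock:
                results.append(r)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# Toy stand-in detector: flag blocks whose mean exceeds a threshold.
blocks = [[1, 2, 3], [40, 50, 60], [2, 2, 2]]
anomalies = run_work_pool(blocks, n_workers=2,
                          detect=lambda b: sum(b) / len(b) > 10)
print(sorted(anomalies))  # [False, False, True]
```

In the MPI version the master plays the role of the queue, handing out block indices to whichever slave reports back first.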
Experimental performance evaluation of human balance control models.
Huryn, Thomas P; Blouin, Jean-Sébastien; Croft, Elizabeth A; Koehle, Michael S; Van der Loos, H F Machiel
2014-11-01
Two factors commonly differentiate proposed balance control models for quiet human standing: 1) intermittent muscle activation and 2) prediction that overcomes sensorimotor time delays. In this experiment we assessed the viability and performance of intermittent activation and prediction in a balance control loop that included the neuromuscular dynamics of human calf muscles. Muscles were driven by functional electrical stimulation (FES). The performance of the different controllers was compared based on sway patterns and mechanical effort required to balance a human body load on a robotic balance simulator. All evaluated controllers balanced subjects with and without a neural block applied to their common peroneal and tibial nerves, showing that the models can produce stable balance in the absence of natural activation. Intermittent activation required less stimulation energy than continuous control but predisposed the system to increased sway. Relative to intermittent control, continuous control reproduced the sway size of natural standing better. Prediction was not necessary for stable balance control but did improve stability when control was intermittent, suggesting a possible benefit of a predictor for intermittent activation. Further application of intermittent activation and predictive control models may drive prolonged, stable FES-controlled standing that improves quality of life for people with balance impairments.
NASA Technical Reports Server (NTRS)
Paloski, William H.
2008-01-01
Balance control and locomotor patterns were altered in Apollo crewmembers on the lunar surface, owing, presumably, to a combination of sensory-motor adaptation during transit and lunar surface operations, decreased environmental affordances associated with the reduced gravity, and restricted joint mobility as well as altered center-of-gravity caused by the EVA pressure suits. Dr. Paloski will discuss these factors, as well as the potential human and mission impacts of falls and malcoordination during planned lunar sortie and outpost missions. Learning objectives: What are the potential impacts of postural instabilities on the lunar surface? CME question: What factors affect balance control and gait stability on the moon? Answer: Sensory-motor adaptation to the lunar environment, reduced mechanical and visual affordances, and altered biomechanics caused by the EVA suit.
Wind Tunnel Force Balance Calibration Study - Interim Results
NASA Technical Reports Server (NTRS)
Rhew, Ray D.
2012-01-01
Wind tunnel force balance calibration is performed using a variety of methods and does not have a directly traceable standard such as those used for most calibration practices (weights and voltmeters). These calibration methods and practices include, but are not limited to, the loading schedule, the load application hardware, manual and automatic systems, and re-leveling and non-re-leveling approaches. A study of the balance calibration techniques used by NASA was undertaken to develop metrics for reviewing and comparing results using sample calibrations. The study also includes balances of different designs, single- and multi-piece. The calibration systems include the manual and automatic systems provided by NASA and its vendors. The results to date will be presented along with the techniques for comparing the results. In addition, future planned calibrations and investigations based on the results will be provided.
Heat Load Estimator for Smoothing Pulsed Heat Loads on Supercritical Helium Loops
NASA Astrophysics Data System (ADS)
Hoa, C.; Lagier, B.; Rousset, B.; Bonnay, P.; Michel, F.
Superconducting magnets for fusion are subjected to large variations in heat load due to the cycled operation of tokamaks. The cryogenic system must operate smoothly to extract these pulsed heat loads by circulating supercritical helium through the coils and structures. However, the total heat load and its temporal variation are not known before the plasma scenario starts. A real-time heat load estimator is therefore of interest for process control of the cryogenic system, in order to anticipate the arrival of pulsed heat loads at the refrigerator and, ultimately, to optimize operation of the cryogenic system. The large variation of the thermal loads affects the physical parameters of the supercritical helium loop (pressure, temperature, mass flow), so those signals can be used to calculate instantaneously the loads deposited in the loop. This article addresses the methodology and algorithm for estimating the heat load deposition before it reaches the refrigerator. The CEA-patented process control has been implemented in a Programmable Logic Controller (PLC) and successfully validated on the HELIOS test facility at CEA Grenoble. This heat load estimator is complementary to pulsed-load smoothing strategies, providing an estimate of the optimized refrigeration power. It can also improve process control during transients between operating modes by adjusting the refrigeration power to the need. In this way, the heat load estimator contributes to the safe operation of the cryogenic system.
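At its simplest, such an estimator inverts the loop's energy balance from the measured signals. A first-order sketch, assuming a constant specific heat (real supercritical helium properties vary strongly with pressure and temperature, so an actual estimator would interpolate enthalpy from property tables and use the pressure signal as well):

```python
def heat_load_estimate(mdot, t_in, t_out, cp=5193.0):
    """First-order heat load from loop measurements: Q = mdot * cp * dT.

    mdot  : helium mass flow through the loop [kg/s]
    t_in  : temperature entering the magnet section [K]
    t_out : temperature leaving it [K]
    cp    : specific heat, here taken constant (~5193 J/(kg*K) for helium);
            this constant-cp assumption is the main simplification.
    """
    return mdot * cp * (t_out - t_in)

# 0.1 kg/s with a 0.2 K temperature rise deposits roughly 104 W in the loop.
print(heat_load_estimate(0.1, 4.4, 4.6))
```

Estimating the load from loop signals like this, rather than waiting for it to appear at the refrigerator, is what gives the controller its lead time.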
Ross, C.P.; Beale, P.L.
1994-01-01
The ability to successfully predict lithology and fluid content from reflection seismic records using AVO techniques is contingent upon accurate pre-analysis conditioning of the seismic data. However, all too often, residual amplitude effects remain after the many offset-dependent processing steps are completed. Residual amplitude effects often represent a significant error when compared to the amplitude variation with offset (AVO) response that the authors are attempting to quantify. They propose a model-based, offset-dependent amplitude balancing method that attempts to correct for these residuals and other errors due to sub-optimal processing. Seismic offset balancing attempts to quantify the relationship between the offset response of background seismic reflections and corresponding theoretical predictions for the average lithologic interfaces thought to cause these background reflections. It is assumed that any deviation from the theoretical response is a result of residual processing phenomena and/or suboptimal processing, and a simple offset-dependent scaling function is designed to correct for these differences. This function can then be applied to seismic data over both prospective and nonprospective zones within an area where the theoretical values are appropriate and the seismic characteristics are consistent. A conservative application of the above procedure results in an AVO response over both gas sands and wet sands that is much closer to theoretically expected values. A case history from the Gulf of Mexico Flexure Trend is presented as an example to demonstrate the offset balancing technique.
Masdeu, Joseph C
2016-01-01
This chapter focuses on one of the most common types of neurologic disorders: altered walking. Walking impairment often reflects disease of the neurologic structures mediating gait, balance, or, most often, both. These structures are distributed along the neuraxis. For this reason, this chapter is introduced by a brief description of the neurobiologic underpinnings of walking, stressing information that is critical for imaging, namely, the anatomic representation of gait and balance mechanisms. This background is essential not only to direct the relevant imaging tools to the regions most likely to be affected but also to correctly interpret imaging findings that may not be related to the walking deficit under clinical study. The chapter closes with a discussion of how to image some of the most frequent etiologies causing gait or balance impairment. It focuses, however, on syndromes not already covered elsewhere: Parkinson's disease and other movement disorders are discussed in Chapter 48, and cerebellar ataxia in Chapter 23 of the previous volume. As regards vascular disease, the spastic hemiplegia most characteristic of brain disease needs little discussion, while the less well-understood effects of microvascular disease are extensively reviewed here, together with the imaging approach.
NASA Technical Reports Server (NTRS)
Johnson, Steven D.; Byers, Jerry W.; Martin, James A.
2012-01-01
A method has been developed for continuous cell voltage balancing for rechargeable batteries (e.g., lithium-ion batteries). A resistor divider chain generates a set of voltages representing the ideal cell voltages (the voltage each cell would have if the cells were perfectly balanced). An operational amplifier circuit with an added current buffer stage generates each ideal voltage with a very high degree of accuracy, using negative feedback. The ideal voltages are each connected to the corresponding cell through a current-limiting resistance. Over time, having the cell connected to the ideal voltage provides a balancing current that moves the cell voltage very close to that ideal level. In effect, it adjusts the current of each cell during charging, discharging, and standby periods to force the cell voltages to equal the ideal voltages generated by the resistor divider. The device also includes solid-state switches that disconnect the circuit from the battery so that it will not discharge the battery during storage. This solution requires relatively few parts and is therefore of lower cost and increased reliability, owing to fewer failure modes. Additionally, this design uses very little power: a preliminary model predicts a power usage of 0.18 W for an 8-cell battery. This approach is applicable to a wide range of battery capacities and voltages.
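The divider arithmetic itself is straightforward: with equal resistances, tap k of the chain sits at k/n of the pack voltage, so every cell is pulled toward an identical share. A small sketch with a hypothetical 8-cell, 29.6 V pack (the pack values are illustrative, not from the abstract):

```python
def ideal_node_voltages(pack_voltage, n_cells):
    """Voltages an equal-resistance divider chain presents at each tap:
    tap k sits at k/n of the pack voltage, so every cell is pulled toward
    exactly pack_voltage / n_cells across its terminals."""
    return [pack_voltage * k / n_cells for k in range(1, n_cells + 1)]

# Hypothetical 8-cell pack at 29.6 V: each cell is pulled toward 3.7 V.
taps = ideal_node_voltages(29.6, 8)
print(round(taps[0], 6))   # 3.7  (first tap = one cell's ideal voltage)
print(round(taps[-1], 6))  # 29.6 (top tap = full pack voltage)
```

The op-amp/current-buffer stage in the abstract exists precisely so these tap voltages can source or sink the balancing current without sagging.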
Carson, N.J. Jr.; Ostrander, H.W.; Munter, C.N.
1964-03-01
A weighing device having a load-supporting vertical shaft buoyed up by mutually repellant magnets is described. The shaft is aligned by an air bearing and has an air gage to sense vertical displacement caused by weights placed on the top end of the shaft. (AEC)
Improving Large-scale Storage System Performance via Topology-aware and Balanced Data Placement
Wang, Feiyi; Oral, H Sarp; Vazhkudai, Sudharshan S
2014-01-01
With the advent of big data, the I/O subsystems of large-scale compute clusters are becoming a center of focus, with more applications putting greater demands on end-to-end I/O performance. These subsystems are often complex in design. They comprise multiple hardware and software layers to cope with the increasing capacity, capability, and scalability requirements of data-intensive applications. The shared nature of storage resources and the intrinsic interactions across these layers make realizing user-level, end-to-end performance gains a great challenge. We propose a topology-aware resource load balancing strategy to improve per-application I/O performance. We demonstrate the effectiveness of our algorithm on an extreme-scale compute cluster, Titan, at the Oak Ridge Leadership Computing Facility (OLCF). Our experiments with both synthetic benchmarks and a real-world application show that, even under congestion, our proposed algorithm can improve large-scale application I/O performance significantly, resulting in both reduced application run times and higher-resolution simulation runs.
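A greedy, congestion-aware flavor of balanced placement can be sketched in a few lines. The server names and the "congested" set below are hypothetical stand-ins; this illustrates the general idea of combining load balance with topology awareness, not OLCF's actual algorithm:

```python
def place_stripes(n_stripes, servers, congested):
    """Greedy placement: for each stripe pick the least-loaded server,
    skipping servers behind congested links whenever an alternative exists."""
    load = {s: 0 for s in servers}
    placement = []
    for _ in range(n_stripes):
        usable = [s for s in servers if s not in congested] or servers
        target = min(usable, key=lambda s: load[s])
        load[target] += 1
        placement.append(target)
    return placement, load

# Four hypothetical storage servers, one of them behind a congested link.
placement, load = place_stripes(8, ["oss0", "oss1", "oss2", "oss3"],
                                congested={"oss2"})
print(load)  # {'oss0': 3, 'oss1': 3, 'oss2': 0, 'oss3': 2}
```

The congested server receives nothing while the remaining three share the stripes almost evenly, which is the qualitative behavior the paper's strategy aims for end to end.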
Selenium mass balance in the Great Salt Lake, Utah
Diaz, X.; Johnson, W.P.; Naftz, D.L.
2009-01-01
A mass balance for Se in the south arm of the Great Salt Lake was developed for September 2006 to August 2007 of monitoring for Se loads and removal flows. The combined removal flows (sedimentation and volatilization) totaled to a geometric mean value of 2079 kg Se/yr, with the estimated low value being 1255 kg Se/yr, and an estimated high value of 3143 kg Se/yr at the 68% confidence level. The total (particulates + dissolved) loads (via runoff) were about 1560 kg Se/yr, for which the error is expected to be ±15% for the measured loads. Comparison of volatilization to sedimentation flux demonstrates that volatilization rather than sedimentation is likely the major mechanism of selenium removal from the Great Salt Lake. The measured loss flows balance (within the range of uncertainties), and possibly surpass, the measured annual loads. Concentration histories were modeled using a simple mass balance, which indicated that no significant change in Se concentration was expected during the period of study. Surprisingly, the measured total Se concentration increased during the period of the study, indicating that the removal processes operate at their low estimated rates, and/or there are unmeasured selenium loads entering the lake. The selenium concentration trajectories were compared to those of other trace metals to assess the significance of selenium concentration trends. © 2008 Elsevier B.V.
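The bookkeeping behind that conclusion is plain arithmetic on the abstract's figures:

```python
# Annual Se budget for the lake's south arm, using the abstract's figures (kg Se/yr).
loads_in = 1560.0            # runoff loads (particulate + dissolved)
removal_mean = 2079.0        # geometric-mean removal (sedimentation + volatilization)
removal_low, removal_high = 1255.0, 3143.0   # 68% confidence bounds

net_mean = loads_in - removal_mean
print(net_mean)  # -519.0: at the mean removal rate the lake should be losing Se
# The removal range brackets the loads, so the observed rise in concentration
# is consistent with the balance only near the low removal estimate (or with
# unmeasured loads entering the lake):
print(removal_low < loads_in < removal_high)  # True
```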
Automated Loads Analysis System (ATLAS)
NASA Technical Reports Server (NTRS)
Gardner, Stephen; Frere, Scot; O’Reilly, Patrick
2013-01-01
ATLAS is a generalized solution that can be used for launch vehicles. ATLAS is used to produce modal transient analysis and quasi-static analysis results (i.e., accelerations, displacements, and forces) for the payload math models on a specific Shuttle Transport System (STS) flight, using the shuttle math model and associated forcing functions. This innovation solves the problem of coupling payload math models into a shuttle math model. It performs a transient loads analysis simulating liftoff, landing, and all flight events between liftoff and landing. ATLAS utilizes efficient and numerically stable algorithms available in MSC/NASTRAN.
Rotating Balances Used for Fluid Pump Testing
NASA Technical Reports Server (NTRS)
Skelley, Stephen; Mulder, Andrew
2014-01-01
Marshall Space Flight Center has developed and demonstrated two direct read force and moment balances for sensing and resolving the hydrodynamic loads on rotating fluid machinery. These rotating balances consist of a series of stainless steel flexures instrumented with semiconductor type, unidirectional strain gauges arranged into six bridges, then sealed and waterproofed, for use fully submerged in degassed water at rotational speeds up to six thousand revolutions per minute. The balances are used to measure the forces and moments due to the onset and presence of cavitation or other hydrodynamic phenomena on subscale replicas of rocket engine turbomachinery, principally axial pumps (inducers) designed specifically to operate in a cavitating environment. The balances are inserted into the drive assembly with power to and signal from the sensors routed through the drive shaft and out through an air-cooled twenty-channel slip ring. High frequency data - balance forces and moments as well as extensive, flush-mounted pressures around the rotating component periphery - are acquired via a high-speed analog to digital data acquisition system while the test rig conditions are varied continuously. The data acquisition and correction process is described, including the in-situ verifications that are performed to quantify and correct for known system effects such as mechanical imbalance, "added mass," buoyancy, mechanical resonance, and electrical bias. Examples of four types of cavitation oscillations for two typical inducers are described in the laboratory (pressure) and rotating (force) frames: 1) attached, symmetric cavitation, 2) rotating cavitation, 3) attached, asymmetric cavitation, and 4) cavitation surge. Rotating and asymmetric cavitation generate a corresponding unbalanced radial force on the rotating assembly while cavitation surge generates an axial force. Attached, symmetric cavitation induces no measurable force. The frequency of the forces can be determined a
Improving Balance with Tai Chi
By the Vestibular Disorders Association. ... symptoms commonly experienced with vestibular (inner ear balance) disorders can cause overwhelming fatigue and anxiety. Many ...
Ultra-fast fluence optimization for beam angle selection algorithms
NASA Astrophysics Data System (ADS)
Bangert, M.; Ziegenhein, P.; Oelfke, U.
2014-03-01
Beam angle selection (BAS) including fluence optimization (FO) is among the most extensive computational tasks in radiotherapy. Precomputed dose influence data (DID) of all considered beam orientations (up to 100 GB for complex cases) have to be handled in main memory, and repeated FOs are required for different beam ensembles. In this paper, the authors describe concepts accelerating FO for BAS algorithms using off-the-shelf multiprocessor workstations. The FO runtime is dominated not by the arithmetic load of the CPUs but by the transportation of DID from RAM to the CPUs. On multiprocessor workstations, however, the speed of data transportation from main memory to the CPUs is non-uniform across the RAM; every CPU has a dedicated memory location (node) with minimum access time. We apply a thread-node binding strategy to ensure that CPUs only access DID from their preferred node. Ideal load balancing for arbitrary beam ensembles is guaranteed by distributing the DID of every candidate beam equally across all nodes. Furthermore, we use a custom sorting scheme for the DID to minimize the overall data transportation. The framework is implemented on an AMD Opteron workstation. One FO iteration comprising dose, objective function, and gradient calculation takes between 0.010 s (9 beams, skull, 0.23 GB DID) and 0.070 s (9 beams, abdomen, 1.50 GB DID). Our overall FO time is < 1 s for small cases; larger cases take ~ 4 s. BAS runs including FOs for 1000 different beam ensembles take ~ 15-70 min, depending on the treatment site. This enables an efficient clinical evaluation of different BAS algorithms.
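The distribution rule described above (split every candidate beam's DID equally across all memory nodes, so any beam ensemble chosen later loads all nodes evenly) can be sketched as follows; the beam names and DID sizes are hypothetical:

```python
def distribute_did(beam_sizes, n_nodes):
    """Split every candidate beam's dose-influence data evenly across NUMA
    nodes, so whichever subset of beams a later FO selects, each node holds
    an equal share of the data it must stream to its local CPUs."""
    shares = {node: 0.0 for node in range(n_nodes)}
    layout = {}
    for beam, size in beam_sizes.items():
        chunk = size / n_nodes
        layout[beam] = {node: chunk for node in range(n_nodes)}
        for node in range(n_nodes):
            shares[node] += chunk
    return layout, shares

# Hypothetical DID sizes in GB for three candidate beams on a 4-node box.
layout, shares = distribute_did({"b0": 0.8, "b1": 1.2, "b2": 0.4}, 4)
print(shares)  # each node ends up with the same share (~0.6 GB here)
```

Contrast this with whole-beam-per-node placement, where an unlucky ensemble could put most of its DID on one node and serialize the memory traffic.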
46 CFR 39.4005 - Operational requirements for vapor balancing-TB/ALL.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 1 2014-10-01 2014-10-01 false Operational requirements for vapor balancing-TB/ALL. 39... SYSTEMS Vessel-to-Vessel Transfers Using Vapor Balancing § 39.4005 Operational requirements for vapor balancing—TB/ALL. (a) During a vessel-to-vessel transfer operation, each cargo tank being loaded must...
46 CFR 39.4005 - Operational requirements for vapor balancing-TB/ALL.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 1 2013-10-01 2013-10-01 false Operational requirements for vapor balancing-TB/ALL. 39... SYSTEMS Vessel-to-Vessel Transfers Using Vapor Balancing § 39.4005 Operational requirements for vapor balancing—TB/ALL. (a) During a vessel-to-vessel transfer operation, each cargo tank being loaded must...
A MASS BALANCE OF SURFACE WATER GENOTOXICITY IN PROVIDENCE RIVER (RHODE ISLAND USA)
White and Rasmussen (Mutation Res. 410:223-236) used a mass balance approach to demonstrate that over 85% of the total genotoxic loading to the St. Lawrence River at Montreal is non-industrial. To validate the mass balance approach and investigate the sources of genotoxins in sur...
More on Chemical Reaction Balancing.
ERIC Educational Resources Information Center
Swinehart, D. F.
1985-01-01
A previous article stated that only the matrix method was powerful enough to balance a particular chemical equation. Shows how this equation can be balanced without using the matrix method. The approach taken involves writing partial mathematical reactions and redox half-reactions, and combining them to yield the final balanced reaction. (JN)
Evaluating surface energy balance system (SEBS) using aircraft data collected during BEAREX07
Technology Transfer Automated Retrieval System (TEKTRAN)
Evapotranspiration (ET) is an essential component of the water balance and a major consumptive use of irrigation water and precipitation on cropland. Remote sensing based surface energy balance algorithms are now capable of providing accurate estimates of spatial-temporal ET. Uses of these spatial E...
Ahmed, Alaa A; Ashton-Miller, James A
2004-06-01
Given that a physical definition for a loss of balance (LOB) is lacking, the hypothesis was tested that a LOB is actually a loss of effective control, as evidenced by a control error signal anomaly (CEA). A model-reference adaptive controller and failure-detection algorithm were used to represent central nervous system decision-making based on input and output signals obtained during a challenging whole-body planar balancing task. Control error was defined as the residual generated when the actual system output is compared with the predicted output of the simple first-order polynomial system model. A CEA was hypothesized to occur when the model-generated control error signal exceeded three standard deviations (3sigma) beyond the mean calculated across a 2-s trailing window. The primary hypothesis tested was that a CEA is indeed observable in 20 healthy young adults (ten women) performing the following experiment. Seated subjects were asked to balance a high-backed chair for as long as possible over its rear legs. Each subject performed ten trials. The ground reaction force under the dominant foot, which constituted the sole input to the system, was measured using a two-axis load cell. Angular acceleration of the chair represented the one degree-of-freedom system output. The results showed that the 3sigma algorithm detected a CEA in 94% of 197 trials. A secondary hypothesis was supported in that a CEA was followed in 93% of the trials by an observable compensatory response, occurring at least 100 ms later and an average of 479 ms later. Longer reaction times were associated with low velocities at CEA, and vice versa. It is noteworthy that this method of detecting CEA does not rely on an external positional or angular reference, or knowledge of the location of the system's center of mass.
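The 3sigma trailing-window rule is easy to restate in code. This sketch uses an illustrative window length and synthetic residuals, not the paper's 2-s window in samples or its data:

```python
import statistics

def detect_cea(errors, window, k=3.0):
    """Flag the first sample whose control error exceeds the mean by more
    than k standard deviations, both computed over a trailing window of
    the preceding samples (mirroring the paper's 3sigma criterion)."""
    for i in range(window, len(errors)):
        recent = errors[i - window:i]
        mu = statistics.fmean(recent)
        sigma = statistics.pstdev(recent)
        if sigma > 0 and errors[i] > mu + k * sigma:
            return i  # index of the control error signal anomaly
    return None

# Quiet residual (alternating +/-0.1), then a clear anomaly at index 40.
errors = [0.1, -0.1] * 20 + [2.0]
print(detect_cea(errors, window=20))  # 40
```

Because both the mean and the standard deviation trail the signal, the threshold adapts to each subject's baseline noise, which is what lets the method work without any external positional reference.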
40 CFR 85.2217 - Loaded test-EPA 91.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 18 2010-07-01 2010-07-01 false Loaded test-EPA 91. 85.2217 Section 85.2217 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED....2217 Loaded test—EPA 91. (a) General requirements—(1) Exhaust gas sampling algorithm. The analysis...
40 CFR 85.2217 - Loaded test-EPA 91.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 19 2013-07-01 2013-07-01 false Loaded test-EPA 91. 85.2217 Section 85.2217 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED....2217 Loaded test—EPA 91. (a) General requirements—(1) Exhaust gas sampling algorithm. The analysis...
40 CFR 85.2217 - Loaded test-EPA 91.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 19 2012-07-01 2012-07-01 false Loaded test-EPA 91. 85.2217 Section 85.2217 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED....2217 Loaded test—EPA 91. (a) General requirements—(1) Exhaust gas sampling algorithm. The analysis...
40 CFR 85.2217 - Loaded test-EPA 91.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 18 2011-07-01 2011-07-01 false Loaded test-EPA 91. 85.2217 Section 85.2217 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED....2217 Loaded test—EPA 91. (a) General requirements—(1) Exhaust gas sampling algorithm. The analysis...
Dynamic balance abilities of collegiate men for the bench press.
Piper, Timothy J; Radlo, Steven J; Smith, Thomas J; Woodward, Ryan W
2012-12-01
This study investigated the dynamic balance detection ability of college men for the bench press exercise. Thirty-five college men (mean ± SD: age = 22.4 ± 2.76 years, bench press experience = 8.3 ± 2.79 years, and estimated 1RM = 120.1 ± 21.8 kg) completed 1 repetition of the bench press for each of 3 bar loading arrangements. In a randomized fashion, subjects performed the bench press with a 20-kg barbell loaded with one of the following: a balanced load, one 20-kg plate on each side; an imbalanced asymmetrical load, one 20-kg plate on one side and a 20-kg plate plus a 1.25-kg plate on the other side; or an imbalanced asymmetrical center of mass, one 20-kg plate on one side and sixteen 1.25-kg plates on the other side. Subjects were blindfolded and wore ear protection throughout all testing to reduce their ability to otherwise detect the loads. Binomial data analysis indicated that subjects correctly detected the imbalance of the imbalanced asymmetrical center-of-mass condition (p[correct detection] = 0.89, p < 0.01) but did not reliably detect the balanced condition (p[correct detection] = 0.46, p = 0.74) or the imbalanced asymmetrical condition (p[correct detection] = 0.60, p = 0.31). Although a substantial shift in the center of mass of the plates led to the detection of barbell imbalance, the minor change of adding 1.25 kg (2.5 lb) to one side in the asymmetrical condition did not result in consistent detection. Our data indicate that a biofeedback loop capable of detecting imbalance was established only under a high degree of imbalance. Given that balance detection was present in neither the even nor the slightly uneven loading condition, the inclusion of balance training for the upper body may be futile if exercises are unable to establish such a feedback loop and thus elicit an improvement in balance performance.
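The binomial analysis behind these detection probabilities can be illustrated with an exact one-sided test against chance (p = 0.5). The counts below are hypothetical, chosen only to mirror detection rates near the reported 0.89 and 0.60, not the study's raw data:

```python
from math import comb

def binom_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): exact one-sided test against chance."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Hypothetical counts out of 35 subjects: 31 correct (~0.89) is far above
# chance, while 21 correct (~0.60) is not distinguishable from guessing.
print(binom_tail(35, 31))   # well below 0.01
print(binom_tail(35, 21))   # not significant at 0.05
```

This mirrors the abstract's pattern: only the large center-of-mass shift yields detection rates inconsistent with chance.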
Lesson "Balance in Nature"
NASA Astrophysics Data System (ADS)
Chapanova, V.
2012-04-01
This simulation game-lesson, "Balance in Nature", gives students an opportunity to show creativity, work independently, and create models and ideas. It fosters future-oriented thinking connected to their own experience, allowing them to propose solutions to global problems and take personal responsibility for their activities. The class is divided into two teams, and each team chooses a question: 1) pollution in the environment; 2) care for nature and climate. The teams work on the chosen tasks: they make drafts and notes and formulate their solutions on small pieces of paper, explaining the impact on nature and society. Expressing many different points of view generates alternative ideas and results in creative solutions. With new knowledge and positive behaviour defined, everybody realizes that they can do something positive for nature and the climate, and the importance of individuals in solving global problems becomes evident. The main goal is to recover the ecological balance, with everybody explaining his or her own well-grounded opinions. In this process, based on their own experience, dialogue and teamwork, the students gain knowledge, skills and more responsible behaviour, which supports each participant's self-development. Building the model "human ↔ nature" expresses how human activities affect the natural Earth and how these impacts in turn affect society. By taking personal responsibility we can reduce global warming and help the Earth; by helping nature we help ourselves. Teacher: Veselina Boycheva-Chapanova, "Saint Patriarch Evtimii" School, Str. "Ivan Vazov" 19, Plovdiv, Bulgaria
Dispatchable hydrogen production at the forecourt for electricity grid balancing
NASA Astrophysics Data System (ADS)
Rahil, Abdulla; Gammon, Rupert; Brown, Neil
2017-02-01
The rapid growth of renewable energy (RE) generation and its integration into electricity grids has been motivated by environmental issues and the depletion of fossil fuels. For the same reasons, an alternative to hydrocarbon fuels is needed for vehicles; hence the anticipated uptake of electric and fuel cell vehicles. High penetrations of RE generators with variable and intermittent output threaten to destabilise electricity networks by reducing the ability to balance electricity supply and demand. This can be greatly mitigated by the use of energy storage and demand-side response (DSR) techniques. Hydrogen production by electrolysis is a promising option for providing DSR as well as an emission-free vehicle fuel. Tariff structures can be used to incentivise the operation of electrolysers as controllable (dispatchable) loads. This paper compares the cost of hydrogen production by electrolysis at garage forecourts under both dispatchable and continuous operation, while ensuring no interruption of fuel supply to fuel cell vehicles. An optimisation algorithm is applied to investigate a hydrogen refueling station in both dispatchable and continuous operation. Three scenarios are tested to see whether a reduced off-peak electricity price could lower the cost of electrolytic hydrogen. These scenarios are: 1) "Standard Continuous", where the electrolyser is operated continuously on a standard all-day tariff of 12p/kWh; 2) "Off-peak Only", where it runs only during off-peak periods in a 2-tier tariff system at the lower price of 5p/kWh; and 3) "2-Tier Continuous", operating continuously and paying a low tariff at off-peak times and a high tariff at other times. This study uses the Libyan coastal city of Derna as a case study. The cheapest electricity cost per kg of hydrogen produced was £2.8, which occurred in Scenario 2. The next cheapest, at £5.8-£6.3, was in Scenario 3, and the most expensive was £6.8/kg in Scenario 1.
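The first-order arithmetic behind the scenario comparison is simply tariff times specific energy. The electrolyser specific energy below (52 kWh/kg) is an assumption for illustration; the paper's figures also reflect demand and dispatch constraints, which is why they differ slightly:

```python
# Sketch of the electricity-cost component of electrolytic hydrogen.
# The specific energy is assumed, not taken from the paper.
KWH_PER_KG = 52.0          # electrolyser specific energy, kWh per kg H2

def cost_per_kg(tariff_p_per_kwh):
    """Electricity cost in GBP per kg of hydrogen at a flat tariff."""
    return tariff_p_per_kwh * KWH_PER_KG / 100.0   # pence -> pounds

print(cost_per_kg(5))    # off-peak-only operation (cf. Scenario 2)
print(cost_per_kg(12))   # standard continuous tariff (cf. Scenario 1)
```

With these assumptions the off-peak tariff gives roughly £2.6/kg versus £6.2/kg continuous, close to the reported £2.8 and £6.8.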
Micromechanical Oscillating Mass Balance
NASA Technical Reports Server (NTRS)
Altemir, David A. (Inventor)
1997-01-01
A micromechanical oscillating mass balance and method adapted for measuring minute quantities of material deposited at a selected location, such as during a vapor deposition process. The invention comprises a vibratory composite beam consisting of a dielectric layer sandwiched between two conductive layers. The beam is positioned in a magnetic field. When an alternating current passes through one conductive layer, the beam oscillates, inducing an output current in the second conductive layer, which is analyzed to determine the resonant frequency of the beam. As material is deposited on the beam, the mass of the beam increases and its resonant frequency shifts, and the added mass is determined from this shift.
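The mass-from-frequency-shift step can be sketched with a lumped-oscillator model. Treating the beam as f = (1/2π)·√(k/m) is a simplification of the composite-beam mechanics in the patent, and the stiffness and frequencies below are illustrative assumptions:

```python
from math import pi

def added_mass(k, f0, f1):
    """Mass added to a resonator, inferred from its frequency shift.

    k  : effective stiffness in N/m (assumed known from calibration)
    f0 : resonant frequency before deposition (Hz)
    f1 : resonant frequency after deposition (Hz, f1 < f0 since mass rises)
    """
    m = lambda f: k / (2.0 * pi * f) ** 2   # invert f = (1/2pi)*sqrt(k/m)
    return m(f1) - m(f0)

# Example: a 10 kHz resonator with k = 100 N/m shifting down by 1 Hz
dm = added_mass(100.0, 10_000.0, 9_999.0)
print(f"{dm * 1e12:.2f} ng")   # nanogram-scale sensitivity
```

The quadratic dependence on frequency is what makes small deposited masses resolvable as measurable frequency shifts.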
Citraturic response to oral citric acid load
NASA Technical Reports Server (NTRS)
Sakhaee, K.; Alpern, R.; Poindexter, J.; Pak, C. Y.
1992-01-01
It is possible that some orally administered citrate may appear in urine by escaping oxidation in vivo. To determine whether this mechanism contributes to the citraturic response to potassium citrate, we measured serum and urinary citrate for 4 hours after a single oral load of citric acid (40 mEq) in 6 normal subjects. Since citric acid does not alter acid-base balance, the effect of absorbed citrate could be isolated from that of alkali load. Serum citrate concentration increased significantly (p < 0.05) 30 minutes after a single oral dose of citric acid and remained significantly elevated for 3 hours after citric acid load. Commensurate with this change, urinary citrate excretion peaked at 2 hours and gradually decreased during the next 2 hours after citric acid load. In contrast, serum and urinary citrate remained unaltered following the control load (no drug). Differences in the citratemic and citraturic effects between phases were significant (p < 0.05) at 2 and 3 hours. Urinary pH, carbon dioxide pressure, bicarbonate, total carbon dioxide and ammonium did not change at any time after citric acid load, and did not differ between the 2 phases. No significant difference was noted in serum electrolytes, arterialized venous pH and carbon dioxide pressure at any time after citric acid load or between the 2 phases. Thus, the citraturic and citratemic effects of oral citric acid are largely accounted for by the provision of absorbed citrate that has escaped in vivo degradation.
Geochemical mole-balance modeling with uncertain data
Parkhurst, D.L.
1997-01-01
Geochemical mole-balance models are sets of chemical reactions that quantitatively account for changes in the chemical and isotopic composition of water along a flow path. A revised mole-balance formulation that includes an uncertainty term for each chemical and isotopic datum is derived. The revised formulation comprises mole-balance equations for each element or element redox state, alkalinity, electrons, solvent water, and each isotope; a charge-balance equation and an equation that relates the uncertainty terms for pH, alkalinity, and total dissolved inorganic carbon for each aqueous solution; inequality constraints on the size of the uncertainty terms; and inequality constraints on the sign of the mole transfer of reactants. The equations and inequality constraints are solved by a modification of the simplex algorithm combined with an exhaustive search for unique combinations of aqueous solutions and reactants for which the equations and inequality constraints can be solved and the uncertainty terms minimized. Additional algorithms find only the simplest mole-balance models and determine the ranges of mixing fractions for each solution and mole transfers for each reactant that are consistent with specified limits on the uncertainty terms. The revised formulation produces simpler and more robust mole-balance models and allows the significance of mixing fractions and mole transfers to be evaluated. In an example from the central Oklahoma aquifer, inclusion of up to 5% uncertainty in the chemical data can reduce the number of reactants in mole-balance models from seven or more to as few as three, these being cation exchange, dolomite dissolution, and silica precipitation. In another example from the Madison aquifer, inclusion of the charge-balance constraint requires significant increases in the mole transfers of calcite, dolomite, and organic matter, which reduce the estimated maximum carbon-14 age of the sample by about 10,000 years, from 22,700 years to roughly 12,700 years.
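Without the uncertainty terms, the core of a mole-balance model is a linear system: changes in water composition equal a stoichiometry matrix times the reactant mole transfers. The toy system below uses the three Oklahoma reactants but invented stoichiometry and composition changes, and a plain Gaussian-elimination solver rather than the paper's modified simplex:

```python
def solve(A, b):
    """Solve the square system A x = b by Gaussian elimination with
    partial pivoting (illustrative stand-in for the simplex machinery)."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Rows are element balances (Ca, Mg, Si); columns are reactants
# (dolomite dissolution, Ca/Na exchange, silica). Stoichiometry and
# composition changes (mmol/kgw) are illustrative, not the Oklahoma data.
A = [[1.0, -1.0, 0.0],   # Ca: +1 per dolomite, -1 per exchange
     [1.0,  0.0, 0.0],   # Mg: +1 per dolomite
     [0.0,  0.0, 1.0]]   # Si: +1 per silica (negative = precipitation)
delta = [-0.5, 1.0, -0.2]
dol, ex, sil = solve(A, delta)
print(dol, ex, sil)
```

The negative silica transfer encodes precipitation, matching the sign constraints the formulation places on reactant mole transfers.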
Balance ability and athletic performance.
Hrysomallis, Con
2011-03-01
The relationship between balance ability and sport injury risk has been established in many cases, but the relationship between balance ability and athletic performance is less clear. This review compares the balance ability of athletes from different sports, determines if there is a difference in balance ability of athletes at different levels of competition within the same sport, determines the relationship of balance ability with performance measures and examines the influence of balance training on sport performance or motor skills. Based on the available data from cross-sectional studies, gymnasts tended to have the best balance ability, followed by soccer players, swimmers, active control subjects and then basketball players. Surprisingly, no studies were found that compared the balance ability of rifle shooters with other athletes. There were some sports, such as rifle shooting, soccer and golf, where elite athletes were found to have superior balance ability compared with their less proficient counterparts, but this was not found to be the case for alpine skiing, surfing and judo. Balance ability was shown to be significantly related to rifle shooting accuracy, archery shooting accuracy, ice hockey maximum skating speed and simulated luge start speed, but not for baseball pitching accuracy or snowboarding ranking points. Prospective studies have shown that the addition of a balance training component to the activities of recreationally active subjects or physical education students has resulted in improvements in vertical jump, agility, shuttle run and downhill slalom skiing. A proposed mechanism for the enhancement in motor skills from balance training is an increase in the rate of force development. There are limited data on the influence of balance training on motor skills of elite athletes. When the effectiveness of balance training was compared with resistance training, it was found that resistance training produced superior performance results for
"Postural first" principle when balance is challenged in elderly people.
Lion, Alexis; Spada, Rosario S; Bosser, Gilles; Gauchard, Gérome C; Anello, Guido; Bosco, Paolo; Calabrese, Santa; Iero, Antonella; Stella, Giuseppe; Elia, Maurizio; Perrin, Philippe P
2014-08-01
Human cognitive processing limits can lead to difficulties in performing two tasks simultaneously. This study aimed to evaluate the effect of cognitive load on both simple and complex postural tasks. Postural control was evaluated in 128 noninstitutionalized elderly people (mean age = 73.6 ± 5.6 years) using a force platform on a firm support in a control condition (CC) and a mental counting condition (MCC), with eyes open (EO) and eyes closed (EC). Then, the same tests were performed on a foam support. The sway path traveled and the area covered by the center of foot pressure were recorded, low values indicating efficient balance. On firm support, sway path was higher in MCC than in CC in both EO and EC conditions (p < 0.001). On foam support, sway path was higher in CC than in MCC in the EC condition (p < 0.001), area being higher in CC than in MCC in both EO (p < 0.05) and EC (p < 0.001) conditions. The results indicate that cognitive load alters balance control in a simple postural task (i.e. on firm support), which is highlighted by an increase in energy expenditure (i.e. an increase in the sway path covered) to balance. Awareness may not be increased, and the attentional demand may be shared between the balance and mental tasks. Conversely, cognitive load does not perturb the realization of a new complex postural task. This result showed that postural control is prioritized ("postural first" principle) when seriously challenged.
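The sway-path measure used here is simply the total distance travelled by the centre of pressure (CoP) on the force platform. A minimal sketch, assuming CoP samples arrive as (x, y) coordinates in consistent units:

```python
from math import hypot

def sway_path(cop):
    """Total path length travelled by the centre of pressure.

    cop : sequence of (x, y) CoP coordinates from the force platform.
    Lower values indicate more efficient balance, as in the study.
    """
    return sum(hypot(x1 - x0, y1 - y0)
               for (x0, y0), (x1, y1) in zip(cop, cop[1:]))

# A 3-4-5 triangle of CoP displacements gives a path of 12 units
print(sway_path([(0, 0), (3, 0), (3, 4), (0, 0)]))  # -> 12.0
```

The swept area is the study's complementary measure; both summarise the same CoP trajectory.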
Effects of Deployment on Musculoskeletal and Physiological Characteristics and Balance.
Nagai, Takashi; Abt, John P; Sell, Timothy C; Keenan, Karen A; McGrail, Mark A; Smalley, Brian W; Lephart, Scott M
2016-09-01
Despite many nonbattle injuries reported during deployment, few studies have been conducted to evaluate the effects of deployment on musculoskeletal and physiological characteristics and balance. A total of 35 active duty U.S. Army Soldiers participated in laboratory testing before and after deployment to Afghanistan. The following measures were obtained for each Soldier: shoulder, trunk, hip, knee, and ankle strength and range of motion (ROM), balance, body composition, aerobic capacity, and anaerobic power/capacity. Additionally, Soldiers were asked about their physical activity and load carriage. Paired t tests or Wilcoxon tests with an α = 0.05 set a priori were used for statistical analyses. Shoulder external rotation ROM, torso rotation ROM, ankle dorsiflexion ROM, torso rotation strength, and anaerobic power significantly increased following deployment (p < 0.05). Shoulder extension ROM, shoulder external rotation strength, and eyes-closed balance (p < 0.05) were significantly worse following deployment. The majority of Soldiers (85%) engaged in physical activity. In addition, 58% of Soldiers reported regularly carrying a load (22 kg average). The deployment-related changes in musculoskeletal and physiological characteristics and balance as well as physical activity and load carriage during deployment may assist with proper preparation with the intent to optimize tactical readiness and mitigate injury risk.
Development of a 5-Component Balance for Water Tunnel Applications
NASA Technical Reports Server (NTRS)
Suarez, Carlos J.; Kramer, Brian R.; Smith, Brooke C.
1999-01-01
The principal objective of this research/development effort was to develop a multi-component strain gage balance to measure both static and dynamic forces and moments on models tested in flow visualization water tunnels. A balance was designed that allows measuring normal and side forces, and pitching, yawing and rolling moments (no axial force). The balance mounts internally in the model and is used in a manner typical of wind tunnel balances. The key differences between a water tunnel balance and a wind tunnel balance are the requirement for very high sensitivity, since the loads are very low (typical normal force is 90 grams or 0.2 lbs), the need for waterproofing the gage elements, and the small size required to fit into typical water tunnel models. The five-component balance was calibrated and demonstrated linearity in the responses of the primary components to applied loads, very low interactions between the sections and no hysteresis. Static experiments were conducted in the Eidetics water tunnel with delta wings and F/A-18 models. The data were compared to forces and moments from wind tunnel tests of the same or similar configurations. The comparison showed very good agreement, providing confidence that loads can be measured accurately in the water tunnel with a relatively simple multi-component internal balance. The success of the static experiments encouraged the use of the balance for dynamic experiments. Among the advantages of conducting dynamic tests in a water tunnel are less demanding motion and data acquisition rates than in a wind tunnel test (because of the low-speed flow) and the capability of performing flow visualization and force/moment (F/M) measurements simultaneously with relative simplicity. This capability of simultaneous flow visualization and F/M measurements proved extremely useful to explain the results obtained during these dynamic tests. In general, the development of this balance should encourage the use of water tunnels for a
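The linearity check in such a calibration is a least-squares fit of gage output against applied load for each component. The calibration points below are hypothetical (grams of applied normal force versus bridge output in mV), purely to show the computation:

```python
def fit_line(loads, readings):
    """Least-squares fit readings ~ a*load + b; returns (a, b, r2).

    A linearity check of the kind used per balance component: a high r2
    with small residuals indicates a linear response to applied loads.
    """
    n = len(loads)
    mx, my = sum(loads) / n, sum(readings) / n
    sxx = sum((x - mx) ** 2 for x in loads)
    sxy = sum((x - mx) * (y - my) for x, y in zip(loads, readings))
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(loads, readings))
    ss_tot = sum((y - my) ** 2 for y in readings)
    return a, b, 1.0 - ss_res / ss_tot

# Hypothetical normal-force calibration: applied grams vs output (mV)
a, b, r2 = fit_line([0, 20, 40, 60, 80], [0.02, 1.01, 2.00, 3.01, 4.00])
print(a, b, r2)
```

In a full multi-component calibration this is repeated per section, and cross-terms quantify the interactions the abstract reports as very low.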
Novel biomedical tetrahedral mesh methods: algorithms and applications
NASA Astrophysics Data System (ADS)
Yu, Xiao; Jin, Yanfeng; Chen, Weitao; Huang, Pengfei; Gu, Lixu
2007-12-01
Tetrahedral mesh generation, a prerequisite of many soft-tissue simulation methods, is very important in virtual surgery programs because of their real-time requirements. Aiming to speed up computation in the simulation, we propose a revised Delaunay algorithm that, through several improved methods, strikes a good balance among tetrahedron quality, boundary preservation and time complexity. Another mesh algorithm, named Space-Disassembling, is also presented in this paper, and a comparison of Space-Disassembling, the traditional Delaunay algorithm and the revised Delaunay algorithm is carried out on clinical soft-tissue simulation projects, including craniofacial plastic surgery and breast reconstruction plastic surgery.
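One common way to quantify the "quality of tetrahedra" such an algorithm balances is a normalised volume-to-edge-length ratio; the abstract does not specify which measure the revised Delaunay algorithm optimises, so the metric below is one standard choice, not necessarily the paper's:

```python
from itertools import combinations
from math import sqrt

def tet_quality(*verts):
    """Shape quality 6*sqrt(2)*V / L_rms^3 of a tetrahedron.

    Equals 1 for a regular tetrahedron and approaches 0 for degenerate
    ('sliver') elements, which degrade simulation accuracy.
    """
    a, b, c, d = verts
    sub = lambda p, q: [pi - qi for pi, qi in zip(p, q)]
    u, v, w = sub(b, a), sub(c, a), sub(d, a)
    # volume from the scalar triple product u . (v x w)
    cx = [v[1]*w[2] - v[2]*w[1], v[2]*w[0] - v[0]*w[2], v[0]*w[1] - v[1]*w[0]]
    vol = abs(sum(ui * ci for ui, ci in zip(u, cx))) / 6.0
    edges = [sub(p, q) for p, q in combinations(verts, 2)]
    l_rms = sqrt(sum(e[0]**2 + e[1]**2 + e[2]**2 for e in edges) / 6.0)
    return 6.0 * sqrt(2.0) * vol / l_rms ** 3

# A regular tetrahedron scores 1; a flattened sliver scores far lower
reg = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
print(tet_quality(*reg))
```

A mesh generator trades this quality measure against boundary preservation and runtime, which is exactly the balance the abstract describes.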
Micropollutant loads in the urban water cycle.
Musolff, Andreas; Leschik, Sebastian; Reinstorf, Frido; Strauch, Gerhard; Schirmer, Mario
2010-07-01
The assessment of micropollutants in the urban aquatic environment is a challenging task since both the water balance and the contaminant concentrations are characterized by a pronounced variability in time and space. In this study the water balance of a central European urban drainage catchment is quantified for a period of one year. On the basis of a concentration monitoring of several micropollutants, a contaminant mass balance for the study area's wastewater, surface water, and groundwater is derived. The release of micropollutants from the catchment was mainly driven by the discharge of the wastewater treatment plant. However, combined sewer overflows (CSO) released significant loads of caffeine, bisphenol A, and technical 4-nonylphenol. Since an estimated fraction of 9.9-13.0% of the wastewater's dry weather flow was lost as sewer leakages to the groundwater, considerable loads of bisphenol A and technical 4-nonylphenol were also released by the groundwater pathway. The different temporal dynamics of release loads by CSO as an intermittent source and groundwater as well as treated wastewater as continuous pathways may induce acute as well as chronic effects on the receiving aquatic ecosystem. This study points out the importance of the pollution pathway CSO and groundwater for the contamination assessments of urban water resources.
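The contaminant mass balance above reduces, per pathway, to load = concentration × discharge, summed over the continuous (treated wastewater, groundwater) and intermittent (CSO) pathways. The figures below are illustrative, not the study's monitoring data:

```python
def pathway_loads(pathways):
    """Contaminant load per pathway and the catchment total.

    pathways : {name: (concentration in ug/m3, discharge in m3/d)}
    Returns  : ({name: load in ug/d}, total load in ug/d)
    """
    loads = {name: c * q for name, (c, q) in pathways.items()}
    return loads, sum(loads.values())

loads, total = pathway_loads({
    "WWTP effluent": (120.0, 5000.0),   # continuous, dominant discharge
    "CSO":           (800.0,  300.0),   # intermittent, event-driven
    "sewer leakage": ( 60.0,  450.0),   # groundwater pathway (~10% of DWF)
})
print(loads, total)
```

Even with a small discharge, a high-concentration intermittent pathway such as CSO can contribute a substantial share of the total load, which is the study's central point.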
Loading relativistic Maxwell distributions in particle simulations
NASA Astrophysics Data System (ADS)
Zenitani, S.
2015-12-01
In order to study energetic plasma phenomena using particle-in-cell (PIC) and Monte-Carlo simulations, we need to deal with relativistic velocity distributions in these simulations. However, numerical algorithms for handling relativistic distributions are not well known. In this contribution, we review basic algorithms for loading relativistic Maxwell distributions in PIC and Monte-Carlo simulations. For the stationary relativistic Maxwellian, the inverse transform method and the Sobol algorithm are reviewed. To boost particles to obtain a relativistic shifted Maxwellian, two rejection methods are newly proposed in a physically transparent manner. Their acceptance efficiencies are 50% for generic cases and 100% for symmetric distributions. They can be combined with arbitrary base algorithms.
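A sketch of the two-stage recipe the abstract describes: draw from the stationary Maxwell-Jüttner distribution with the Sobol algorithm, then boost with a flipping-style rejection step. Temperature is normalised as T = kT/mc² and four-velocity u is in units of c; the exact acceptance rule in the paper may differ in detail from this rendition:

```python
import math, random

def sobol_ju(T, rng=random.random):
    """|u| from a stationary Maxwell-Juttner distribution (Sobol method):
    accept u = -T*ln(x1*x2*x3) when eta^2 - u^2 > 1, eta = -T*ln(x1*x2*x3*x4)."""
    while True:
        x1, x2, x3, x4 = rng(), rng(), rng(), rng()
        u = -T * math.log(x1 * x2 * x3)
        eta = u - T * math.log(x4)
        if eta * eta - u * u > 1.0:
            return u

def shifted_ju(T, beta, rng=random.random):
    """One particle from a shifted Maxwell-Juttner: isotropic direction,
    flipping-style re-weighting of the x-velocity, then a Lorentz boost."""
    u = sobol_ju(T, rng)
    cos_t = 2.0 * rng() - 1.0                  # isotropic direction
    sin_t = math.sqrt(1.0 - cos_t * cos_t)
    phi = 2.0 * math.pi * rng()
    ux, uy, uz = u * cos_t, u * sin_t * math.cos(phi), u * sin_t * math.sin(phi)
    gamma = math.sqrt(1.0 + u * u)
    if -beta * (ux / gamma) > rng():           # flip backward movers forward
        ux = -ux
    big_gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    return big_gamma * (ux + beta * gamma), uy, uz

# One particle at drift beta = 0.9, temperature T = 0.5 (illustrative)
print(shifted_ju(0.5, 0.9))
```

The flip step implements the flux re-weighting in a physically transparent way: particles moving against the drift are reflected with a probability proportional to their counter-streaming speed before the boost is applied.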
Appliance Commitment for Household Load Scheduling
Du, Pengwei; Lu, Ning
2011-06-30
This paper presents a novel appliance commitment algorithm that schedules thermostatically controlled household loads based on price and consumption forecasts, considering users' comfort settings, to meet an optimization objective such as minimum payment or maximum comfort. The formulation of an appliance commitment problem is described using an electric water heater load as an example. The thermal dynamics of heating and coasting of the water heater were captured by physical models; random hot water consumption was modeled with statistical methods. These models were used to predict the appliance operation over the scheduling time horizon. User comfort was transformed into a set of linear constraints. Then, a novel linear, sequential optimization process was used to solve the appliance commitment problem. The simulation results demonstrate that the algorithm is fast, robust, and flexible. The algorithm can be used in home/building energy-management systems to help home owners or building managers automatically create optimal load operation schedules based on different cost and comfort settings and to compare costs/benefits among schedules.
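The ingredients of such a formulation can be sketched with a one-node thermal model and a simple price rule. All parameters below are illustrative, and the greedy rule stands in for the paper's sequential linear optimisation, which would instead solve for the cost-optimal plan subject to the comfort constraints:

```python
import math

def schedule(prices, t0=55.0, t_min=50.0, t_max=60.0,
             t_amb=20.0, tau=30.0, heat_gain=5.0, cheap=0.08):
    """Greedy appliance-commitment sketch for an electric water heater.

    Each hour the tank coasts toward ambient with time constant tau
    (hours); heating adds heat_gain deg C. The heater runs when the
    price is at or below 'cheap' and there is headroom below t_max,
    and is forced on when the comfort floor t_min would be violated.
    Returns the on/off plan over the price horizon.
    """
    temp, plan = t0, []
    for price in prices:
        cooled = t_amb + (temp - t_amb) * math.exp(-1.0 / tau)
        on = (price <= cheap and cooled + heat_gain <= t_max) or cooled < t_min
        temp = cooled + (heat_gain if on else 0.0)
        plan.append(on)
    return plan

# Cheap hours invite pre-heating; expensive hours defer it to the comfort floor
print(schedule([0.05, 0.05, 0.12, 0.12, 0.12, 0.12]))
```

The comfort band [t_min, t_max] plays the role of the linear comfort constraints in the paper, and the price threshold is the crudest possible stand-in for the cost objective.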
Ghosal, Dipak; Mueller, Stephen Ng
2005-04-01
With multipath routing in mobile ad hoc networks (MANETs), a source can establish multiple routes to a destination for routing data. In MANETs, multipath routing can be used to provide route resilience, smaller end-to-end delay, and better load balancing. However, when the multiple paths are close together, transmissions on different paths may interfere with each other, causing degradation in performance. Besides reducing interference, the physical diversity of paths also improves fault tolerance. We present a purely distributed multipath protocol based on the AODV-Multipath (AODVM) protocol, called AODVM with Path Diversity (AODVM/PD), that finds multiple paths with a desired degree of correlation between paths, specified as an input parameter to the algorithm. We demonstrate through detailed simulation analysis that multiple paths with a low degree of correlation, as determined by AODVM/PD, provide both smaller end-to-end delay than AODVM in networks with low mobility and better route resilience in the presence of correlated node failures.
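A degree of correlation between two routes can be quantified from their shared links. The measure below (shared links over the shorter path's link count) is one simple choice for illustration, not necessarily the correlation factor AODVM/PD defines:

```python
def path_correlation(p1, p2):
    """Correlation between two routes as the fraction of shared links.

    p1, p2 : node sequences, e.g. ['S', 'a', 'b', 'D'].
    0 means fully link-disjoint paths; 1 means one path's links are all
    shared. Lower values indicate more physically diverse paths.
    """
    links = lambda p: {frozenset(e) for e in zip(p, p[1:])}
    l1, l2 = links(p1), links(p2)
    return len(l1 & l2) / min(len(l1), len(l2))

# Two routes from S to D sharing only the final hop
print(path_correlation(['S', 'a', 'b', 'D'], ['S', 'c', 'b', 'D']))
```

Requesting a low value of such a measure is what trades route-coupling interference against the overhead of finding widely separated paths.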
Feed-forward volume rendering algorithm for moderately parallel MIMD machines
NASA Technical Reports Server (NTRS)
Yagel, Roni
1993-01-01
Algorithms for direct volume rendering on parallel and vector processors are investigated. Volumes are transformed efficiently on parallel processors by dividing the data into slices and beams of voxels. Equal-sized sets of slices along one axis are distributed to processors. Parallelism is achieved at two levels. Because each slice can be transformed independently of the others, processors transform their assigned slices with no communication, thus providing the maximum possible parallelism at the first level. Within each slice, consecutive beams are incrementally transformed using coherency in the transformation computation. Also, coherency across slices can be exploited to further enhance performance. This coherency yields the second level of parallelism through the use of vector processing or pipelining. Other ongoing efforts include investigations into image reconstruction techniques, load-balancing strategies, and improving performance.
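The incremental-transformation coherency can be sketched for a single beam: under a linear transform, unit-spaced voxels along an axis map to positions that differ by a constant vector, so each position costs one vector addition rather than a full matrix-vector product. This is a simplified rendition of the scheme (translation and the cross-slice coherency omitted):

```python
def transform_beam(m, origin, length):
    """Transform a beam of voxels incrementally, exploiting coherency.

    m      : 3x3 linear transform as nested lists
    origin : (x, y, z) of the beam's first voxel
    length : number of voxels along the beam (unit-spaced in x)
    """
    # full product only for the first voxel of the beam
    p = [sum(m[r][c] * origin[c] for c in range(3)) for r in range(3)]
    step = [m[r][0] for r in range(3)]   # image of the unit x-step
    out = []
    for _ in range(length):
        out.append(tuple(p))
        p = [pi + si for pi, si in zip(p, step)]   # one addition per voxel
    return out

# A pure scaling transform applied incrementally along a 3-voxel beam
print(transform_beam([[2, 0, 0], [0, 3, 0], [0, 0, 1]], (1.0, 1.0, 1.0), 3))
```

Distributing independent slices to workers gives the first, communication-free level of parallelism; this per-beam increment is the regular, vectorisable pattern behind the second.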