Load-balancing algorithms for climate models
Foster, I.T.; Toonen, B.R.
1994-06-01
Implementations of climate models on scalable parallel computer systems can suffer from load imbalances due to temporal and spatial variations in the amount of computation required for physical parameterizations such as solar radiation and convective adjustment. We have developed specialized techniques for correcting such imbalances. These techniques are incorporated in a general-purpose, programmable load-balancing library that allows the mapping of computation to processors to be specified as a series of maps generated by a programmer-supplied load-balancing module. The communication required to move from one map to another is performed automatically by the library, without programmer intervention. In this paper, we describe the load-balancing problem and the techniques that we have developed to solve it. We also describe specific load-balancing algorithms that we have developed for PCCM2, a scalable parallel implementation of the Community Climate Model, and present experimental results that demonstrate the effectiveness of these algorithms on parallel computers.
Adaptive Load-Balancing Algorithms Using Symmetric Broadcast Networks
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Biswas, Rupak; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
In a distributed-computing environment, it is important to ensure that the processor workloads are adequately balanced. Among numerous load-balancing algorithms, a unique approach due to Das and Prasad defines a symmetric broadcast network (SBN) that provides a robust communication pattern among the processors in a topology-independent manner. In this paper, we propose and analyze three novel SBN-based load-balancing algorithms, and implement them on an SP2. A thorough experimental study with Poisson-distributed synthetic loads demonstrates that these algorithms are very effective in balancing system load while minimizing processor idle time. They also compare favorably with several other existing load-balancing techniques. Additional experiments performed with real data demonstrate that the SBN approach is effective in adaptive computational science and engineering applications where dynamic load balancing is extremely crucial.
Adaptive Load-Balancing Algorithms using Symmetric Broadcast Networks
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Biegel, Bryan A. (Technical Monitor)
2002-01-01
In a distributed computing environment, it is important to ensure that the processor workloads are adequately balanced. Among numerous load-balancing algorithms, a unique approach due to Das and Prasad defines a symmetric broadcast network (SBN) that provides a robust communication pattern among the processors in a topology-independent manner. In this paper, we propose and analyze three efficient SBN-based dynamic load-balancing algorithms, and implement them on an SGI Origin2000. A thorough experimental study with Poisson-distributed synthetic loads demonstrates that our algorithms are effective in balancing system load. By optimizing completion time and idle time, the proposed algorithms are shown to compare favorably with several existing approaches.
Fast Optimal Load Balancing Algorithms for 1D Partitioning
Pinar, Ali; Aykanat, Cevdet
2002-12-09
One-dimensional decomposition of nonuniform workload arrays for optimal load balancing is investigated. The problem has been studied in the literature as the "chains-on-chains partitioning" problem. Despite extensive research efforts, heuristics are still used in the parallel computing community with the "hope" of good decompositions and the "myth" that exact algorithms are hard to implement and not runtime efficient. The main objective of this paper is to show that using exact algorithms instead of heuristics yields significant load balance improvements with a negligible increase in preprocessing time. We provide detailed pseudocodes of our algorithms so that our results can be easily reproduced. We start with a review of the literature on the chains-on-chains partitioning problem. We propose improvements on these algorithms as well as efficient implementation tips. We also introduce novel algorithms that are asymptotically and runtime efficient. We experimented with data sets from two different applications: sparse matrix computations and direct volume rendering. Experiments showed that the proposed algorithms are 100 times faster than a single sparse matrix-vector multiplication for 64-way decompositions on average. Experiments also verify that load balance can be significantly improved by using exact algorithms instead of heuristics. These two findings show that the exact algorithms with efficient implementations discussed in this paper can effectively replace heuristics.
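As an illustration of the exact approach the abstract advocates, the following sketch solves the chains-on-chains problem by binary search on the bottleneck value, using a greedy feasibility "probe" over prefix sums. This is a minimal, standard formulation assuming integer task loads; it is not the paper's own pseudocode, whose algorithms are more refined.

```python
import bisect

def probe(prefix, P, B):
    """Check whether the chain can be split into at most P contiguous
    parts, each with total load <= B. prefix holds prefix sums of the
    task loads, with prefix[0] == 0."""
    parts, i, n = 0, 0, len(prefix) - 1
    while i < n:
        # furthest cut j such that the load of tasks i..j-1 is <= B
        j = bisect.bisect_right(prefix, prefix[i] + B) - 1
        if j == i:            # a single task already exceeds B
            return False
        i, parts = j, parts + 1
        if parts > P:
            return False
    return True

def optimal_bottleneck(tasks, P):
    """Minimal achievable bottleneck (maximum part load) for a 1D
    decomposition of `tasks` into P contiguous parts, found by binary
    search on the answer (integer loads assumed)."""
    prefix = [0]
    for t in tasks:
        prefix.append(prefix[-1] + t)
    lo = max(tasks)           # no part can be lighter than the largest task
    hi = prefix[-1]           # one part holding everything
    while lo < hi:
        mid = (lo + hi) // 2
        if probe(prefix, P, mid):
            hi = mid
        else:
            lo = mid + 1
    return lo
```

For example, splitting loads [1, 2, 3, 4, 5] over two processors yields the partition [1, 2, 3 | 4, 5] with bottleneck 9, which the greedy heuristic of cutting at the halfway point would miss.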
Dynamic load balance scheme for the DSMC algorithm
NASA Astrophysics Data System (ADS)
Li, Jin; Geng, Xiangren; Jiang, Dingwu; Chen, Jianqiang
2014-12-01
The direct simulation Monte Carlo (DSMC) algorithm, devised by Bird, has been used over a wide range of rarefied flow problems in the past 40 years. While the DSMC is suitable for parallel implementation on powerful multi-processor architectures, it also introduces a large load imbalance across the processor array, even for small examples. The load imposed on a processor by a DSMC calculation is determined to a large extent by the total number of simulator particles upon it. Since most flows are impulsively started with an initial distribution of particles that is quite different from the steady state, the total number of simulator particles changes dramatically. A load balance based upon the initial distribution of particles will break down as the steady state of the flow is reached. The load imbalance and huge computational cost of DSMC have limited its application to rarefied or simple transitional flows. In this paper, by taking advantage of METIS, a software package for partitioning unstructured graphs, and taking the total number of simulator particles in each cell as the weight information, a repartitioning based upon the principle that each processor handles approximately the same total number of simulator particles is achieved. The computation pauses several times to renew the total number of simulator particles in each processor and repartition the whole domain, so that the load balance across the processor array holds for the duration of the computation and the parallel efficiency is improved effectively. The benchmark problem of a cylinder submerged in hypersonic flow has been simulated numerically. In addition, hypersonic flow around a complex wing-body configuration has also been simulated. The results show that, in both cases, the computational time can be reduced by about 50%.
A Universal Tare Load Prediction Algorithm for Strain-Gage Balance Calibration Data Analysis
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2011-01-01
An algorithm is discussed that may be used to estimate tare loads of wind tunnel strain-gage balance calibration data. The algorithm was originally developed by R. Galway of IAR/NRC Canada and has been described in the literature for the iterative analysis technique. Basic ideas of Galway's algorithm, however, are universally applicable and work for both the iterative and the non-iterative analysis technique. A recent modification of Galway's algorithm is presented that improves the convergence behavior of the tare load prediction process if it is used in combination with the non-iterative analysis technique. The modified algorithm allows an analyst to use an alternate method for the calculation of intermediate non-linear tare load estimates whenever Galway's original approach does not lead to a convergence of the tare load iterations. It is also shown in detail how Galway's algorithm may be applied to the non-iterative analysis technique. Hand load data from the calibration of a six-component force balance is used to illustrate the application of the original and modified tare load prediction method. During the analysis of the data both the iterative and the non-iterative analysis technique were applied. Overall, predicted tare loads for combinations of the two tare load prediction methods and the two balance data analysis techniques showed excellent agreement as long as the tare load iterations converged. The modified algorithm, however, appears to have an advantage over the original algorithm when absolute voltage measurements of gage outputs are processed using the non-iterative analysis technique. In these situations only the modified algorithm converged because it uses an exact solution of the intermediate non-linear tare load estimate for the tare load iteration.
Dynamic load balancing algorithm for molecular dynamics based on Voronoi cells domain decompositions
Fattebert, J.-L.; Richards, D.F.; Glosli, J.N.
2012-12-01
We present a new algorithm for automatic parallel load balancing in classical molecular dynamics. It assumes a spatial domain decomposition of particles into Voronoi cells. It is a gradient method which attempts to minimize a cost function by displacing Voronoi sites associated with each processor/sub-domain along steepest descent directions. Excellent load balance has been obtained for quasi-2D and 3D practical applications, with up to 440·10^{6} particles on 65,536 MPI tasks.
A load-balance path selection algorithm in automatically switched optical networks (ASON)
NASA Astrophysics Data System (ADS)
Gao, Fei; Lu, Yueming; Ji, Yuefeng
2007-11-01
In this paper, a novel load-balance algorithm is proposed to provide an approach to optimized path selection in automatically switched optical networks (ASON). By using this algorithm, improved survivability and low congestion can be achieved. The static nature of current routing algorithms, such as OSPF or IS-IS, has made the situation worse, since traffic is concentrated on the "least-cost" paths, which causes congestion on some links while leaving other links lightly loaded. The key, then, is to select suitable paths that balance the network load, optimizing network resource utilization and traffic performance. We present a method for controlling traffic engineering so that carriers can define their own optimization strategies and apply them to path selection for dynamic load balancing. Taking load distribution and topology information into account, a capacity utilization factor is introduced into Dijkstra's shortest-path selection so that traffic is balanced over the network. Routing simulations have been run over mesh networks to compare the two algorithms, and the simulation results support conclusions on their relative performance.
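The idea of folding a capacity utilization factor into Dijkstra's algorithm can be sketched as follows. The cost model base_cost / (1 - utilization) is an illustrative assumption, not the paper's published formula; it simply makes heavily loaded links look steeply more expensive.

```python
import heapq

def balanced_shortest_path(adj, src, dst):
    """Dijkstra's algorithm with a load-aware link cost.

    adj: {node: [(neighbor, base_cost, utilization), ...]} where
    utilization in [0, 1) is the fraction of link capacity in use.
    Links near saturation receive a sharply inflated cost, steering
    paths toward lightly loaded parts of the network.
    """
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                     # stale queue entry
        for v, cost, util in adj.get(u, []):
            nd = d + cost / (1.0 - util) # utilization-scaled link cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    # reconstruct the path from dst back to src
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

With a congested direct link A-D (95% utilized) and a lightly loaded two-hop route A-C-D, the load-aware cost prefers the detour even though it is longer in hops.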
A parallel dynamic load balancing algorithm for 3-D adaptive unstructured grids
NASA Technical Reports Server (NTRS)
Vidwans, A.; Kallinderis, Y.; Venkatakrishnan, V.
1993-01-01
Adaptive local grid refinement and coarsening results in unequal distribution of workload among the processors of a parallel system. A novel method for balancing the load in cases of dynamically changing tetrahedral grids is developed. The approach employs local exchange of cells among processors in order to redistribute the load equally. An important part of the load balancing algorithm is the method employed by a processor to determine which cells within its subdomain are to be exchanged. Two such methods are presented and compared. The strategy for load balancing is based on the Divide-and-Conquer approach which leads to an efficient parallel algorithm. This method is implemented on a distributed-memory MIMD system.
Logeswaran, Rajasvaran; Chen, Li-Choo
2008-12-01
Service architectures are necessary for providing value-added services in telecommunications networks, including those in medical institutions. Separation of service logic and control from the actual call switching is the main idea of these service architectures; examples include the Intelligent Network (IN), Telecommunications Information Networking Architecture (TINA), and Open Service Access (OSA). In Distributed Service Architectures (DSA), instances of the same object type can be placed on different physical nodes. Hence, network performance can be enhanced by introducing load balancing algorithms to efficiently distribute the traffic between object instances, such that the overall throughput and network performance can be optimised. In this paper, we propose a new load balancing algorithm called the "Node Status Algorithm" for DSA infrastructure applicable to electronic-based medical institutions. The simulation results illustrate that the proposed algorithm is able to outperform the benchmark load balancing algorithms, the Random Algorithm and the Shortest Queue Algorithm, especially under medium and heavily loaded network conditions, which are typical of the increasing bandwidth utilization and processing requirements at paperless hospitals and in the telemedicine environment.
Load Balancing Scientific Applications
Pearce, Olga Tkachyshyn
2014-12-01
The largest supercomputers have millions of independent processors, and concurrency levels are rapidly increasing. For ideal efficiency, developers of the simulations that run on these machines must ensure that computational work is evenly balanced among processors. Assigning work evenly is challenging because many large modern parallel codes simulate behavior of physical systems that evolve over time, and their workloads change over time. Furthermore, the cost of imbalanced load increases with scale because most large-scale scientific simulations today use a Single Program Multiple Data (SPMD) parallel programming model, and an increasing number of processors will wait for the slowest one at the synchronization points. To address load imbalance, many large-scale parallel applications use dynamic load balance algorithms to redistribute work evenly. The research objective of this dissertation is to develop methods to decide when and how to load balance the application, and to balance it effectively and affordably. We measure and evaluate the computational load of the application, and develop strategies to decide when and how to correct the imbalance. Depending on the simulation, a fast, local load balance algorithm may be suitable, or a more sophisticated and expensive algorithm may be required. We developed a model for comparison of load balance algorithms for a specific state of the simulation that enables the selection of a balancing algorithm that will minimize overall runtime.
A Multiconstrained Grid Scheduling Algorithm with Load Balancing and Fault Tolerance.
Keerthika, P; Suresh, P
2015-01-01
A grid environment consists of millions of dynamic and heterogeneous resources. A grid that deals with computing resources is a computational grid, meant for applications that involve large computations. A scheduling algorithm is efficient only if it performs good resource allocation even in the case of resource failure. Allocation of resources is a difficult issue, since it has to consider several requirements such as system load, processing cost and time, the user's deadline, and resource failure. This work attempts to design a resource allocation algorithm that is budget constrained and also targets load balancing, fault tolerance, and user satisfaction by considering the above requirements. The proposed Multiconstrained Load Balancing Fault Tolerant algorithm (MLFT) reduces the schedule makespan, schedule cost, and task failure rate and improves resource utilization. The proposed MLFT algorithm is evaluated using the GridSim toolkit, and the results are compared with recent algorithms that separately concentrate on these factors. The comparison results confirm that the proposed algorithm works better than its counterparts. PMID:26161438
A novel load-balanced fixed routing (LBFR) algorithm for wavelength routed optical networks
NASA Astrophysics Data System (ADS)
Shen, Gangxiang; Li, Yongcheng; Peng, Limei
2011-11-01
In wavelength-routed optical transport networks, fixed shortest-path routing is one of the major lightpath service provisioning strategies, as it is simple for network control and operation. Specifically, once a shortest route is found for a node pair, that route is always used for any future lightpath service provisioning, which therefore does not require the network control and management system to maintain an active network-wide link state database. On the other hand, the fixed shortest-path routing strategy suffers from unbalanced network traffic load distribution and network congestion because it keeps employing the same fixed shortest route between each pair of nodes. To avoid network congestion while retaining operational simplicity, in this study we develop a Load-Balanced Fixed Routing (LBFR) algorithm. Through a training process based on a forecasted network traffic load matrix, the proposed algorithm finds one fixed route (or a few) for each node pair and employs the fixed route(s) for lightpath service provisioning. Unlike the fixed shortest-path routes between node pairs, these routes balance traffic load well within the network when used for lightpath service provisioning. Compared to the traditional fixed shortest-path routing algorithm, the LBFR algorithm achieves much better lightpath blocking performance according to our simulation and analytical studies. Moreover, the performance improvement is more significant as the network nodal degree increases.
A De-Centralized Scheduling and Load Balancing Algorithm for Heterogeneous Grid Environments
NASA Technical Reports Server (NTRS)
Arora, Manish; Das, Sajal K.; Biswas, Rupak; Biegel, Bryan (Technical Monitor)
2002-01-01
In the past two decades, numerous scheduling and load balancing techniques have been proposed for locally distributed multiprocessor systems. However, they all suffer from significant deficiencies when extended to a Grid environment: some use a centralized approach that renders the algorithm unscalable, while others assume the overhead involved in searching for appropriate resources to be negligible. Furthermore, classical scheduling algorithms do not consider a Grid node to be N-resource rich and merely work towards maximizing the utilization of one of the resources. In this paper we propose a new scheduling and load balancing algorithm for a generalized Grid model of N-resource nodes that not only takes into account the node and network heterogeneity, but also considers the overhead involved in coordinating among the nodes. Our algorithm is de-centralized, scalable, and overlaps the node coordination time with that of the actual processing of ready jobs, thus saving valuable clock cycles needed for making decisions. The proposed algorithm is studied by conducting simulations using the Message Passing Interface (MPI) paradigm.
Multidimensional spectral load balancing
Hendrickson, B.; Leland, R.
1993-01-01
We describe an algorithm for the static load balancing of scientific computations that generalizes and improves upon spectral bisection. Through a novel use of multiple eigenvectors, our new spectral algorithm can divide a computation into 4 or 8 pieces at once. These multidimensional spectral partitioning algorithms generate balanced partitions that have lower communication overhead and are less expensive to compute than those produced by spectral bisection. In addition, they automatically work to minimize message contention on a hypercube or mesh architecture. These spectral partitions are further improved by a multidimensional generalization of the Kernighan-Lin graph partitioning algorithm. Results on several computational grids are given and compared with those of other popular methods.
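For context, the baseline that this multidimensional method generalizes is classical spectral bisection: partition a graph using the Fiedler vector, the eigenvector of the graph Laplacian associated with the second-smallest eigenvalue. A minimal dense-matrix sketch (fine for small graphs; real partitioners use sparse eigensolvers):

```python
import numpy as np

def spectral_bisection(adjacency):
    """Split a graph into two balanced parts using the Fiedler vector.

    adjacency: symmetric 0/1 NumPy array.
    Returns a boolean array: True for one part, False for the other.
    """
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A        # graph Laplacian L = D - A
    eigvals, eigvecs = np.linalg.eigh(L)  # eigenvalues in ascending order
    fiedler = eigvecs[:, 1]               # second-smallest eigenvector
    # cut at the median entry so the two parts are balanced in size
    return fiedler > np.median(fiedler)
```

On a path graph 0-1-2-3, the Fiedler vector varies monotonically along the path, so the median cut separates {0, 1} from {2, 3}, which is the minimum-communication balanced split. The multidimensional algorithm in the abstract uses additional eigenvectors to produce 4 or 8 parts in one pass.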
Devi, D. Chitra; Uthariaraj, V. Rhymend
2016-01-01
Cloud computing uses the concepts of scheduling and load balancing to migrate tasks to underutilized VMs for effectively sharing the resources. The scheduling of nonpreemptive tasks in the cloud computing environment is an irrecoverable restraint, and hence the tasks have to be assigned to the most appropriate VMs at the initial placement itself. Practically, arriving jobs consist of multiple interdependent tasks, which may execute their independent tasks on multiple VMs or on the multiple cores of the same VM. Also, jobs arrive during the run time of the server at varying random intervals under various load conditions. The participating heterogeneous resources are managed by allocating the tasks to appropriate resources by static or dynamic scheduling to make cloud computing more efficient and thus improve user satisfaction. The objective of this work is to introduce and evaluate the proposed scheduling and load balancing algorithm by considering the capabilities of each virtual machine (VM), the task length of each requested job, and the interdependency of multiple tasks. The performance of the proposed algorithm is studied by comparing it with existing methods. PMID:26955656
NASA Astrophysics Data System (ADS)
Zhang, Xiaoning; Li, Lemin; Wang, Sheng
2006-09-01
The paper considers the problem of establishing robust routes for multi-granularity connection requests in traffic-grooming WDM mesh networks and proposes a novel Valiant Load-Balanced robust routing scheme for the hose uncertainty model. Our objective is to minimize the total network cost while assuring robust routing for all possible multi-granularity connection requests under the hose model. Since the optimization problem has recently been shown to be NP-hard, two heuristic algorithms are proposed and compared. When implementing the Valiant Load-Balanced robust routing scheme in WDM mesh networks, a novel traffic-grooming algorithm called MHF (minimal hop first) is proposed. We evaluate MHF under Valiant Load-Balanced robust routing against the traditional traffic-grooming algorithm by computer simulation.
NASA Astrophysics Data System (ADS)
Rossi, Francesco; Londrillo, Pasquale; Sgattoni, Andrea; Sinigardi, Stefano; Turchetti, Giorgio
2012-12-01
We present `jasmine', an implementation of a fully relativistic, 3D, electromagnetic Particle-In-Cell (PIC) code, capable of running simulations in various laser plasma acceleration regimes on Graphics Processing Unit (GPU) HPC clusters. Standard energy/charge preserving FDTD-based algorithms have been implemented using double precision and quadratic (or arbitrary sized) shape functions for the particle weighting. When porting a PIC scheme to the GPU architecture (or, in general, a shared memory environment), the particle-to-grid operations (e.g. the evaluation of the current density) require special care to avoid memory inconsistencies and conflicts. Here we present a robust implementation of this operation that is efficient for any number of particles per cell and any particle shape function order. Our algorithm exploits the exposed GPU memory hierarchy and avoids the use of atomic operations, which can hurt performance, especially when many particles lie in the same cell. We show the code's multi-GPU scalability results and present a dynamic load-balancing algorithm. The code is written using a Python-based C++ meta-programming technique, which translates into a high level of modularity and allows for easy performance tuning and simple extension of the core algorithms to various simulation schemes.
Multidimensional spectral load balancing
Hendrickson, Bruce A.; Leland, Robert W.
1996-12-24
A method of and apparatus for graph partitioning involving the use of a plurality of eigenvectors of the Laplacian matrix of the graph of the problem for which load balancing is desired. The invention is particularly useful for optimizing parallel computer processing of a problem and for minimizing total pathway lengths of integrated circuits in the design stage.
Control Allocation with Load Balancing
NASA Technical Reports Server (NTRS)
Bodson, Marc; Frost, Susan A.
2009-01-01
Next generation aircraft with a large number of actuators will require advanced control allocation methods to compute the actuator commands needed to follow desired trajectories while respecting system constraints. Previously, algorithms were proposed to minimize the l1 or l2 norms of the tracking error and of the actuator deflections. The paper discusses the alternative choice of the l(infinity) norm, or sup norm. Minimization of the control effort translates into the minimization of the maximum actuator deflection (min-max optimization). The paper shows how the problem can be solved effectively by converting it into a linear program and solving it using a simplex algorithm. Properties of the algorithm are also investigated through examples. In particular, the min-max criterion results in a type of load balancing, where the load is the desired command and the algorithm balances this load among various actuators. In illustrative examples, the solution using the l(infinity) norm also shows better robustness to failures and lower sensitivity to nonlinearities.
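For the single-axis case (one equality constraint), the min-max allocation above has a closed form that makes the load-balancing behavior visible: every effective actuator deflects with the same magnitude. This is an illustrative sketch of the l(infinity) idea under that simplifying assumption, not the paper's simplex-based solver; the function name is ours.

```python
import numpy as np

def minmax_allocate(b, d):
    """Minimize max|u_i| subject to b @ u == d, for a nonzero row vector b."""
    b = np.asarray(b, dtype=float)
    # Every actuator runs at a common magnitude t, signed to help the
    # constraint; b @ u = t * sum(|b|) = d then fixes t.
    t = d / np.abs(b).sum()
    return t * np.sign(b)

u = minmax_allocate([2.0, -1.0, 1.0], 4.0)  # -> [1., -1., 1.]
```

With several simultaneous constraints the closed form disappears, which is where the linear-programming formulation discussed in the paper comes in.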
Dynamic load balancing of applications
Wheat, Stephen R.
1997-01-01
An application-level method for dynamically maintaining global load balance on a parallel computer, particularly on massively parallel MIMD computers. Global load balancing is achieved by overlapping neighborhoods of processors, where each neighborhood performs local load balancing. The method supports a large class of finite element and finite difference based applications and provides an automatic element management system to which applications are easily integrated.
Dynamic load balancing of applications
Wheat, S.R.
1997-05-13
An application-level method for dynamically maintaining global load balance on a parallel computer, particularly on massively parallel MIMD computers is disclosed. Global load balancing is achieved by overlapping neighborhoods of processors, where each neighborhood performs local load balancing. The method supports a large class of finite element and finite difference based applications and provides an automatic element management system to which applications are easily integrated. 13 figs.
14 CFR 23.421 - Balancing loads.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Balancing loads. 23.421 Section 23.421... Balancing Surfaces § 23.421 Balancing loads. (a) A horizontal surface balancing load is a load necessary to... balancing surfaces must be designed for the balancing loads occurring at any point on the limit...
14 CFR 23.421 - Balancing loads.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Balancing loads. 23.421 Section 23.421... Balancing Surfaces § 23.421 Balancing loads. (a) A horizontal surface balancing load is a load necessary to... balancing surfaces must be designed for the balancing loads occurring at any point on the limit...
Isorropia Partitioning and Load Balancing Package
2006-09-01
Isorropia is a partitioning and load balancing package which interfaces with the Zoltan library. Isorropia can accept input objects such as matrices and matrix-graphs, and repartition/redistribute them into a better data distribution on parallel computers. Isorropia is primarily an interface package, utilizing graph and hypergraph partitioning algorithms in the Zoltan library, a third-party library to Trilinos.
Static load balancing for CFD distributed simulations
Chronopoulos, A T; Grosu, D; Wissink, A; Benche, M
2001-01-26
The cost/performance ratio of networks of workstations has been constantly improving. This trend is expected to continue in the near future. The aggregate peak rate of such systems often matches or exceeds the peak rate offered by the fastest parallel computers. This has motivated research towards using a network of computers, interconnected via a fast network (cluster system) or a simple Local Area Network (LAN) (distributed system), for high performance concurrent computations. Important research issues arise, such as (1) optimal problem partitioning and virtual interconnection topology mapping; (2) optimal execution scheduling and load balancing. CFD codes have been efficiently implemented on homogeneous parallel systems in the past. In particular, the helicopter aerodynamics CFD code TURNS has been implemented with MPI on the IBM SP, with parallel relaxation and Krylov iterative methods used in place of more traditional recursive algorithms to enhance performance. In this implementation the space domain is divided into equal subdomains which are mapped to the processors. We consider the implementation of TURNS on a LAN of heterogeneous workstations. In order to deal with the problem of load balancing due to the different processor speeds, we propose a suboptimal algorithm that divides the space domain into unequal subdomains and assigns them to the different computers. The algorithm can be applied to other CFD applications. We used our algorithm to schedule TURNS on a network of workstations and obtained significantly better results.
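In its simplest form, the unequal-subdomain idea above reduces to sizing each contiguous subdomain in proportion to measured processor speed. A minimal sketch; the rounding policy and names are our assumptions, not the paper's exact algorithm.

```python
def partition_by_speed(n_planes, speeds):
    """Split n_planes grid planes into subdomains proportional to speeds."""
    total = sum(speeds)
    sizes = [int(n_planes * s / total) for s in speeds]
    # Hand leftover planes (lost to rounding down) to the fastest processors.
    leftover = n_planes - sum(sizes)
    for i in sorted(range(len(speeds)), key=lambda i: -speeds[i])[:leftover]:
        sizes[i] += 1
    return sizes

sizes = partition_by_speed(100, [3.0, 2.0, 1.0])  # -> [51, 33, 16]
```

A processor twice as fast receives roughly twice the planes, so all subdomains finish an iteration at about the same time.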
Detection of Unexpected High Correlations between Balance Calibration Loads and Load Residuals
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Volden, T.
2014-01-01
An algorithm was developed for the assessment of strain-gage balance calibration data that makes it possible to systematically investigate potential sources of unexpected high correlations between calibration load residuals and applied calibration loads. The algorithm investigates correlations on a load series by load series basis. The linear correlation coefficient is used to quantify the correlations. It is computed for all possible pairs of calibration load residuals and applied calibration loads that can be constructed for the given balance calibration data set. An unexpected high correlation between a load residual and a load is detected if three conditions are met: (i) the absolute value of the correlation coefficient of a residual/load pair exceeds 0.95; (ii) the maximum of the absolute values of the residuals of a load series exceeds 0.25 % of the load capacity; (iii) the load component of the load series is intentionally applied. Data from a baseline calibration of a six-component force balance is used to illustrate the application of the detection algorithm to a real-world data set. This analysis also showed that the detection algorithm can identify load alignment errors as long as repeat load series are contained in the balance calibration data set that do not suffer from load alignment problems.
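The three detection conditions quoted above translate directly into code. The thresholds (0.95 for the correlation coefficient and 0.25 % of load capacity for the residuals) come from the abstract; the function name and data layout are our assumptions.

```python
import numpy as np

def flag_pair(residuals, loads, capacity, intentionally_applied):
    """Return True if a residual/load pair of one load series is suspect."""
    r = np.corrcoef(residuals, loads)[0, 1]
    high_correlation = abs(r) > 0.95                              # condition (i)
    large_residual = np.abs(residuals).max() > 0.0025 * capacity  # condition (ii)
    # Condition (iii): the load component must be intentionally applied.
    return bool(high_correlation and large_residual and intentionally_applied)

loads = np.linspace(0.0, 1000.0, 20)     # applied calibration loads
residuals = 0.004 * loads + 0.1          # residuals that track the load
flagged = flag_pair(residuals, loads, capacity=1000.0,
                    intentionally_applied=True)  # -> True
```

Running the check over every residual/load pair in every load series reproduces the series-by-series scan the algorithm performs.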
NASA Technical Reports Server (NTRS)
Hailperin, Max
1993-01-01
This thesis provides design and analysis of techniques for global load balancing on ensemble architectures running soft-real-time object-oriented applications with statistically periodic loads. It focuses on estimating the instantaneous average load over all the processing elements. The major contribution is the use of explicit stochastic process models for both the loading and the averaging itself. These models are exploited via statistical time-series analysis and Bayesian inference to provide improved average load estimates, and thus to facilitate global load balancing. This thesis explains the distributed algorithms used and provides some optimality results. It also describes the algorithms' implementation and gives performance results from simulation. These results show that our techniques allow more accurate estimation of the global system loading, resulting in fewer object migrations than local methods. Our method is shown to provide superior performance, relative not only to static load-balancing schemes but also to many adaptive methods.
A novel strategy for load balancing of distributed medical applications.
Logeswaran, Rajasvaran; Chen, Li-Choo
2012-04-01
Current trends in medicine, specifically in the electronic handling of medical applications, ranging from digital imaging, paperless hospital administration and electronic medical records, telemedicine, to computer-aided diagnosis, create a burden on the network. Distributed Service Architectures, such as Intelligent Network (IN), Telecommunication Information Networking Architecture (TINA) and Open Service Access (OSA), are able to meet this new challenge. Distribution enables computational tasks to be spread among multiple processors; hence, performance is an important issue. This paper proposes a novel approach in load balancing, the Random Sender Initiated Algorithm, for distribution of tasks among several nodes sharing the same computational object (CO) instances in Distributed Service Architectures. Simulations illustrate that the proposed algorithm produces better network performance than the benchmark load balancing algorithms, the Random Node Selection Algorithm and the Shortest Queue Algorithm, especially under medium and heavily loaded conditions.
Improving load balance with flexibly assignable tasks
Pinar, Ali; Hendrickson, Bruce
2003-09-09
In many applications of parallel computing, distribution of the data unambiguously implies distribution of work among processors. But there are exceptions where some tasks can be assigned to one of several processors without altering the total volume of communication. In this paper, we study the problem of exploiting this flexibility in assignment of tasks to improve load balance. We first model the problem in terms of network flow and use combinatorial techniques for its solution. Our parametric search algorithms use maximum flow algorithms for probing on a candidate optimal solution value. We describe two algorithms to solve the assignment problem with log W_T and |P| probe calls, where W_T and |P|, respectively, denote the total workload and number of processors. We also define augmenting paths and cuts for this problem, and show that any algorithm based on augmenting paths can be used to find an optimal solution for the task assignment problem. We then consider a continuous version of the problem, and formulate it as a linearly constrained optimization problem, i.e., min ||Ax||_inf, s.t. Bx = d. To avoid solving an intractable inf-norm optimization problem, we show that in this case minimizing the 2-norm is sufficient to minimize the inf-norm, which reduces the problem to the well-studied linearly-constrained least squares problem. The continuous version of the problem has the advantage of being easily amenable to parallelization. Our experiments with molecular dynamics and overlapped domain decomposition applications proved the effectiveness of our methods, with significant improvements in load balance. We also discuss how our techniques can be enhanced for heterogeneous systems.
An Evaluation of the HVAC Load Potential for Providing Load Balancing Service
Lu, Ning
2012-09-30
This paper investigates the potential of providing aggregated intra-hour load balancing services using heating, ventilating, and air-conditioning (HVAC) systems. A direct-load control algorithm is presented. A temperature-priority-list method is used to dispatch the HVAC loads optimally to maintain consumer-desired indoor temperatures and load diversity. Realistic intra-hour load balancing signals were used to evaluate the operational characteristics of the HVAC load under different outdoor temperature profiles and different indoor temperature settings. The number of HVAC units needed is also investigated. Modeling results suggest that the number of HVACs needed to provide a ±1-MW load balancing service 24 hours a day varies significantly with baseline settings, high and low temperature settings, and the outdoor temperatures. The results demonstrate that the intra-hour load balancing service provided by HVAC loads meets the performance requirements and can become a major source of revenue for load-serving entities where the smart grid infrastructure enables direct load control over the HVAC loads.
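The temperature-priority-list dispatch can be sketched as a simple greedy loop: to absorb a balancing request, switch on the cooling units whose rooms are currently furthest above their set points. This is our simplification of the idea; the actual controller also maintains load diversity and consumer comfort bands.

```python
def dispatch(units, power_needed_kw):
    """units: list of (unit_id, temp_above_setpoint, rated_kw) tuples."""
    on = []
    # Highest priority = warmest room relative to its cooling set point.
    for uid, deviation, kw in sorted(units, key=lambda u: -u[1]):
        if power_needed_kw <= 0:
            break
        on.append(uid)
        power_needed_kw -= kw
    return on

units = [("a", 0.2, 3.0), ("b", 1.5, 3.0), ("c", 0.8, 3.0)]
selected = dispatch(units, 5.0)  # -> ['b', 'c']
```

Because the warmest rooms are served first, the dispatch tracks the balancing signal while steering every room back toward its set point.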
Internet traffic load balancing using dynamic hashing with flow volume
NASA Astrophysics Data System (ADS)
Jo, Ju-Yeon; Kim, Yoohwan; Chao, H. Jonathan; Merat, Francis L.
2002-07-01
Sending IP packets over multiple parallel links is in extensive use in today's Internet and its use is growing due to its scalability, reliability and cost-effectiveness. To maximize the efficiency of parallel links, load balancing is necessary among the links, but it may cause the problem of packet reordering. Since packet reordering impairs TCP performance, it is important to reduce the amount of reordering. Hashing offers a simple solution to keep the packet order by sending a flow over a unique link, but static hashing does not guarantee an even distribution of the traffic amount among the links, which could lead to packet loss under heavy load. Dynamic hashing offers some degree of load balancing but suffers from load fluctuations and excessive packet reordering. To overcome these shortcomings, we have enhanced the dynamic hashing algorithm to utilize the flow volume information in order to reassign only the appropriate flows. This new method, called dynamic hashing with flow volume (DHFV), eliminates unnecessary flow reassignments of small flows and achieves load balancing very quickly without load fluctuation by accurately predicting the amount of transferred load between the links. In this paper we provide the general framework of DHFV and address the challenges in implementing DHFV. We then introduce two algorithms of DHFV with different flow selection strategies and show their performances through simulation.
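The static hashing baseline the abstract starts from is easy to state: hash a flow's 5-tuple to pick a link, so every packet of the flow takes the same link and packet order is preserved. The sketch below shows only this baseline (DHFV's volume-based flow reassignment sits on top of it); the function name and hash choice are ours.

```python
import hashlib

def link_for_flow(src, dst, sport, dport, proto, n_links):
    """Pin all packets of a flow to one link via a hash of its 5-tuple."""
    key = f"{src}:{dst}:{sport}:{dport}:{proto}".encode()
    return int(hashlib.md5(key).hexdigest(), 16) % n_links

# Two packets of the same flow always map to the same link, so no reordering:
a = link_for_flow("10.0.0.1", "10.0.0.2", 1234, 80, "tcp", 4)
b = link_for_flow("10.0.0.1", "10.0.0.2", 1234, 80, "tcp", 4)
```

The weakness DHFV addresses is visible here: the hash spreads flow counts, not traffic volume, so a few large flows can still overload one link until they are selectively reassigned.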
A comparative analysis of static and dynamic load balancing strategies
NASA Technical Reports Server (NTRS)
Iqbal, M. Ashraf; Bokhari, Shahid H.; Saltz, Joel H.
1986-01-01
The problem of uniformly distributing the load of a parallel program over a multiprocessor system was considered. A program was analyzed whose structure permits the computation of the optimal static solution. Then four strategies for load balancing were described and their performance compared. The strategies are: (1) the optimal static assignment algorithm which is guaranteed to yield the best static solution, (2) the static binary dissection method which is very fast but suboptimal, (3) the greedy algorithm, a static fully polynomial time approximation scheme, which estimates the optimal solution to arbitrary accuracy, and (4) the predictive dynamic load balancing heuristic which uses information on the precedence relationships within the program and outperforms any of the static methods. It is also shown that the overhead incurred by the dynamic heuristic is reduced considerably if it is started off with a static assignment provided by any of the three static strategies.
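The flavor of greedy static assignment can be seen in the classic largest-processing-time heuristic: place each task, largest first, on the currently least-loaded processor. Note that this simple sketch is only a relative of the abstract's greedy algorithm, which is a fully polynomial time approximation scheme with accuracy guarantees; names and the example workload are ours.

```python
def greedy_assign(task_costs, n_procs):
    """Assign tasks greedily, largest first, to the least-loaded processor."""
    loads = [0.0] * n_procs
    assignment = []
    for cost in sorted(task_costs, reverse=True):
        p = loads.index(min(loads))   # current least-loaded processor
        loads[p] += cost
        assignment.append((cost, p))
    return loads, assignment

loads, _ = greedy_assign([7, 5, 4, 3, 3], 2)  # loads -> [10.0, 12.0]
```

The result is within a constant factor of the optimal makespan, which is why greedy schemes serve as fast static baselines against the optimal assignment.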
Performance tradeoffs in static and dynamic load balancing strategies
NASA Technical Reports Server (NTRS)
Iqbal, M. A.; Saltz, J. H.; Bokhari, S. H.
1986-01-01
The problem of uniformly distributing the load of a parallel program over a multiprocessor system was considered. A program was analyzed whose structure permits the computation of the optimal static solution. Then four strategies for load balancing were described and their performance compared. The strategies are: (1) the optimal static assignment algorithm which is guaranteed to yield the best static solution, (2) the static binary dissection method which is very fast but sub-optimal, (3) the greedy algorithm, a static fully polynomial time approximation scheme, which estimates the optimal solution to arbitrary accuracy, and (4) the predictive dynamic load balancing heuristic which uses information on the precedence relationships within the program and outperforms any of the static methods. It is also shown that the overhead incurred by the dynamic heuristic is reduced considerably if it is started off with a static assignment provided by either of the other three strategies.
Multithreaded Model for Dynamic Load Balancing Parallel Adaptive PDE Computations
NASA Technical Reports Server (NTRS)
Chrisochoides, Nikos
1995-01-01
We present a multithreaded model for the dynamic load-balancing of numerical, adaptive computations required for the solution of Partial Differential Equations (PDE's) on multiprocessors. Multithreading is used as a means of exploring concurrency at the processor level in order to tolerate synchronization costs inherent to traditional (non-threaded) parallel adaptive PDE solvers. Our preliminary analysis for parallel, adaptive PDE solvers indicates that multithreading can be used as a mechanism to mask overheads required for the dynamic balancing of processor workloads with computations required for the actual numerical solution of the PDE's. Also, multithreading can simplify the implementation of dynamic load-balancing algorithms, a task that is very difficult for traditional data parallel adaptive PDE computations. Unfortunately, multithreading does not always simplify program complexity, often makes code reusability difficult, and increases software complexity.
Scalable load balancing for massively parallel distributed Monte Carlo particle transport
O'Brien, M. J.; Brantley, P. S.; Joy, K. I.
2013-07-01
In order to run computer simulations efficiently on massively parallel computers with hundreds of thousands or millions of processors, care must be taken that the calculation is load balanced across the processors. Examining the workload of every processor leads to an unscalable algorithm, with run time at least as large as O(N), where N is the number of processors. We present a scalable load balancing algorithm, with run time O(log(N)), that involves iterated processor-pair-wise balancing steps, ultimately leading to a globally balanced workload. We demonstrate scalability of the algorithm up to 2 million processors on the Sequoia supercomputer at Lawrence Livermore National Laboratory. (authors)
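The iterated processor-pair-wise balancing can be illustrated with hypercube-style pairing (our choice of pairing schedule; the paper's may differ): in round k, each processor i averages its load with partner i XOR 2^k, so after log2(N) rounds every processor holds the global mean without anyone examining all N workloads.

```python
def pairwise_balance(loads):
    """Iterated pair-wise averaging over a hypercube; len(loads) a power of 2."""
    n = len(loads)
    loads = list(loads)
    k = 1
    while k < n:                      # log2(n) rounds
        for i in range(n):
            j = i ^ k                 # partner in this round
            if j > i:                 # each pair averages once
                avg = (loads[i] + loads[j]) / 2.0
                loads[i] = loads[j] = avg
        k *= 2
    return loads

print(pairwise_balance([8.0, 0.0, 4.0, 4.0]))  # -> [4.0, 4.0, 4.0, 4.0]
```

Each processor exchanges only log2(N) messages in total, which is the source of the O(log(N)) run time claimed above.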
Valiant load-balanced robust routing under hose model for WDM mesh networks
NASA Astrophysics Data System (ADS)
Zhang, Xiaoning; Li, Lemin; Wang, Sheng
2006-09-01
In this paper, we propose a Valiant Load-Balanced robust routing scheme for WDM mesh networks under the model of polyhedral uncertainty (i.e., hose model), and the proposed routing scheme is implemented with a traffic grooming approach. Our objective is to maximize the hose model throughput. A mathematical formulation of Valiant Load-Balanced robust routing is presented and three fast heuristic algorithms are also proposed. When implementing the Valiant Load-Balanced robust routing scheme in WDM mesh networks, a novel traffic-grooming algorithm called MHF (minimizing hop first) is proposed. We compare the three heuristic algorithms with the VPN tree under the hose model. Finally we demonstrate in the simulation results that MHF with the Valiant Load-Balanced robust routing scheme outperforms the traditional traffic-grooming algorithm in terms of throughput for the uniform/non-uniform traffic matrix under the hose model.
Scalable load-balance measurement for SPMD codes
Gamblin, T; de Supinski, B R; Schulz, M; Fowler, R; Reed, D
2008-08-05
Good load balance is crucial on very large parallel systems, but the most sophisticated algorithms introduce dynamic imbalances through adaptation in domain decomposition or use of adaptive solvers. To observe and diagnose imbalance, developers need system-wide, temporally-ordered measurements from full-scale runs. This potentially requires data collection from multiple code regions on all processors over the entire execution. Doing this instrumentation naively can, in combination with the application itself, exceed available I/O bandwidth and storage capacity, and can induce severe behavioral perturbations. We present and evaluate a novel technique for scalable, low-error load balance measurement. This uses a parallel wavelet transform and other parallel encoding methods. We show that our technique collects and reconstructs system-wide measurements with low error. Compression time scales sublinearly with system size and data volume is several orders of magnitude smaller than the raw data. The overhead is low enough for online use in a production environment.
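One building block of the wavelet approach is easy to show: a single level of the Haar transform turns a load trace into pairwise averages (the smooth signal) plus detail coefficients that are near zero wherever the load is balanced, and small coefficients compress well. This is a serial sketch of one transform step; the paper's transform runs in parallel across processes.

```python
def haar_step(x):
    """One level of the Haar transform: pairwise averages and details."""
    avgs = [(a + b) / 2.0 for a, b in zip(x[0::2], x[1::2])]
    dets = [(a - b) / 2.0 for a, b in zip(x[0::2], x[1::2])]
    return avgs, dets

# Balanced regions give zero details; the imbalance shows up in one coefficient.
avgs, dets = haar_step([4.0, 4.0, 6.0, 2.0])  # -> ([4.0, 4.0], [0.0, 2.0])
```

Recursing on the averages yields the full multi-level transform, after which thresholding the small detail coefficients gives the low-error compression the abstract reports.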
Parallel tetrahedral mesh adaptation with dynamic load balancing
Oliker, Leonid; Biswas, Rupak; Gabow, Harold N.
2000-06-28
The ability to dynamically adapt an unstructured grid is a powerful tool for efficiently solving computational problems with evolving physical features. In this paper, we report on our experience parallelizing an edge-based adaptation scheme, called 3D-TAG, using message passing. Results show excellent speedup when a realistic helicopter rotor mesh is randomly refined. However, performance deteriorates when the mesh is refined using a solution-based error indicator since mesh adaptation for practical problems occurs in a localized region, creating a severe load imbalance. To address this problem, we have developed PLUM, a global dynamic load balancing framework for adaptive numerical computations. Even though PLUM primarily balances processor workloads for the solution phase, it reduces the load imbalance problem within mesh adaptation by repartitioning the mesh after targeting edges for refinement but before the actual subdivision. This dramatically improves the performance of parallel 3D-TAG since refinement occurs in a more load balanced fashion. We also present optimal and heuristic algorithms that, when applied to the default mapping of a parallel repartitioner, significantly reduce the data redistribution overhead. Finally, portability is examined by comparing performance on three state-of-the-art parallel machines.
Parallel Tetrahedral Mesh Adaptation with Dynamic Load Balancing
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Biswas, Rupak; Gabow, Harold N.
1999-01-01
The ability to dynamically adapt an unstructured grid is a powerful tool for efficiently solving computational problems with evolving physical features. In this paper, we report on our experience parallelizing an edge-based adaptation scheme, called 3D_TAG, using message passing. Results show excellent speedup when a realistic helicopter rotor mesh is randomly refined. However, performance deteriorates when the mesh is refined using a solution-based error indicator since mesh adaptation for practical problems occurs in a localized region, creating a severe load imbalance. To address this problem, we have developed PLUM, a global dynamic load balancing framework for adaptive numerical computations. Even though PLUM primarily balances processor workloads for the solution phase, it reduces the load imbalance problem within mesh adaptation by repartitioning the mesh after targeting edges for refinement but before the actual subdivision. This dramatically improves the performance of parallel 3D_TAG since refinement occurs in a more load balanced fashion. We also present optimal and heuristic algorithms that, when applied to the default mapping of a parallel repartitioner, significantly reduce the data redistribution overhead. Finally, portability is examined by comparing performance on three state-of-the-art parallel machines.
Dynamic load balancing for petascale quantum Monte Carlo applications: The Alias method
Sudheer, C. D.; Krishnan, S.; Srinivasan, A.; Kent, P. R. C.
2013-02-01
Diffusion Monte Carlo is the most accurate widely used Quantum Monte Carlo method for the electronic structure of materials, but it requires frequent load balancing or population redistribution steps to maintain efficiency and avoid accumulation of systematic errors on parallel machines. The load balancing step can be a significant factor affecting performance, and will become more important as the number of processing elements increases. We propose a new dynamic load balancing algorithm, the Alias Method, and evaluate it theoretically and empirically. An important feature of the new algorithm is that the load can be perfectly balanced with each process receiving at most one message. It is also optimal in the maximum size of messages received by any process. We also optimize its implementation to reduce network contention, a process facilitated by the low messaging requirement of the algorithm. Empirical results on the petaflop Cray XT Jaguar supercomputer at ORNL show up to 30% improvement in performance on 120,000 cores. The load balancing algorithm may be straightforwardly implemented in existing codes. The algorithm may also be employed by any method with many near-identical computational tasks that require load balancing.
Load balancing fictions, falsehoods and fallacies
HENDRICKSON,BRUCE A.
2000-05-30
Effective use of a parallel computer requires that a calculation be carefully divided among the processors. This load balancing problem appears in many guises and has been a fervent area of research for the past decade or more. Although great progress has been made, and useful software tools developed, a number of challenges remain. It is the conviction of the author that these challenges will be easier to address if programmers first come to terms with some significant shortcomings in their current perspectives. This paper tries to identify several areas in which the prevailing point of view is either mistaken or insufficient. The goal is to motivate new ideas and directions for this important field.
Performance Analysis and Portability of the PLUM Load Balancing System
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Biswas, Rupak; Gabow, Harold N.
1998-01-01
The ability to dynamically adapt an unstructured mesh is a powerful tool for solving computational problems with evolving physical features; however, an efficient parallel implementation is rather difficult. To address this problem, we have developed PLUM, an automatic portable framework for performing adaptive numerical computations in a message-passing environment. PLUM requires that all data be globally redistributed after each mesh adaption to achieve load balance. We present an algorithm for minimizing this remapping overhead by guaranteeing an optimal processor reassignment. We also show that the data redistribution cost can be significantly reduced by applying our heuristic processor reassignment algorithm to the default mapping of the parallel partitioner. Portability is examined by comparing performance on a SP2, an Origin2000, and a T3E. Results show that PLUM can be successfully ported to different platforms without any code modifications.
Energy-balanced algorithm for RFID estimation
NASA Astrophysics Data System (ADS)
Zhao, Jumin; Wang, Fangyuan; Li, Dengao; Yan, Lijuan
2016-10-01
RFID has been widely used in various commercial applications, ranging from inventory control and supply chain management to object tracking. It is often necessary to estimate the number of RFID tags deployed in a large area periodically and automatically. Most prior works use passive tags and focus on designing time-efficient algorithms that can estimate tens of thousands of tags in seconds. But for an RFID reader to access tags in a large area, active tags are likely to be used due to their longer operational ranges. These tags, however, rely on their own batteries for energy. Hence, conserving energy for active tags becomes critical. Some prior works have studied how to reduce the energy expenditure of an RFID reader when it reads tag IDs. In this paper, we study how to reduce the amount of energy consumed by active tags during the process of estimating the number of tags in a system, while keeping the energy consumed by each tag approximately balanced. We design an energy-balanced estimation algorithm that achieves this goal.
Dynamic load balancing for petascale quantum Monte Carlo applications: The Alias method
NASA Astrophysics Data System (ADS)
Sudheer, C. D.; Krishnan, S.; Srinivasan, A.; Kent, P. R. C.
2013-02-01
Diffusion Monte Carlo is a highly accurate Quantum Monte Carlo method for electronic structure calculations of materials, but it requires frequent load balancing or population redistribution steps to maintain efficiency on parallel machines. This step can be a significant factor affecting performance, and will become more important as the number of processing elements increases. We propose a new dynamic load balancing algorithm, the Alias Method, and evaluate it theoretically and empirically. An important feature of the new algorithm is that the load can be perfectly balanced with each process receiving at most one message. It is also optimal in the maximum size of messages received by any process. We also optimize its implementation to reduce network contention, a process facilitated by the low messaging requirement of the algorithm: a simple renumbering of the MPI ranks based on proximity and a space filling curve significantly improves the MPI Allgather performance. Empirical results on the petaflop Cray XT Jaguar supercomputer at ORNL show up to 30% improvement in performance on 120,000 cores. The load balancing algorithm may be straightforwardly implemented in existing codes. The algorithm may also be employed by any method with many near identical computational tasks that require load balancing.
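The classical alias-table construction the algorithm is named after pairs each under-full slot with exactly one over-full donor, which mirrors the "at most one message per receiver" property described above. This sketch shows Walker's table build for a small weight vector; it is not the paper's MPI population-redistribution code.

```python
def build_alias_table(weights):
    """Walker's alias method: O(n) table build for O(1) weighted sampling."""
    n = len(weights)
    total = sum(weights)
    scaled = [w * n / total for w in weights]
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    prob, alias = [0.0] * n, [0] * n
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l   # slot s tops up from donor l
        scaled[l] -= 1.0 - scaled[s]       # donor gives away the deficit
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:                # leftovers are exactly full
        prob[i], alias[i] = 1.0, i
    return prob, alias

prob, alias = build_alias_table([1.0, 2.0, 1.0])
```

Each slot receives probability mass from at most one donor, the analogue of each process receiving at most one message during walker redistribution.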
Parallel Processing of Adaptive Meshes with Load Balancing
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Biegel, Bryan (Technical Monitor)
2001-01-01
Many scientific applications involve grids that lack a uniform underlying structure. These applications are often also dynamic in nature in that the grid structure significantly changes between successive phases of execution. In parallel computing environments, mesh adaptation of unstructured grids through selective refinement/coarsening has proven to be an effective approach. However, achieving load balance while minimizing interprocessor communication and redistribution costs is a difficult problem. Traditional dynamic load balancers are mostly inadequate because they lack a global view of system loads across processors. In this paper, we propose a novel and general-purpose load balancer that utilizes symmetric broadcast networks (SBN) as the underlying communication topology, and compare its performance with a successful global load balancing environment, called PLUM, specifically created to handle adaptive unstructured applications. Our experimental results on an IBM SP2 demonstrate that the SBN-based load balancer achieves lower redistribution costs than that under PLUM by overlapping processing and data migration.
Exploiting Flexibly Assignable Work to Improve Load Balance
Pinar, Ali; Hendrickson, Bruce
2002-12-09
In many applications of parallel computing, distribution of the data unambiguously implies distribution of work among processors. But there are exceptions where some tasks can be assigned to one of several processors without altering the total volume of communication. In this paper, we study the problem of exploiting this flexibility in assignment of tasks to improve load balance. We first model the problem in terms of network flow and use combinatorial techniques for its solution. Our parametric search algorithms use maximum flow algorithms for probing on a candidate optimal solution value. We describe two algorithms to solve the assignment problem with log W{sub T} and |P| probe calls, where W{sub T} and |P|, respectively, denote the total workload and number of processors. We also define augmenting paths and cuts for this problem, and show that any algorithm based on augmenting paths can be used to find an optimal solution for the task assignment problem. We then consider a continuous version of the problem, and formulate it as a linearly constrained optimization problem, i.e., min ||Ax||{sub {infinity}}, s.t. Bx = d. To avoid solving an intractable {infinity}-norm optimization problem, we show that in this case minimizing the 2-norm is sufficient to minimize the {infinity}-norm, which reduces the problem to the well-studied linearly-constrained least squares problem. The continuous version of the problem has the advantage of being easily amenable to parallelization.
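The reduction at the end of the abstract can be illustrated for the simplest case A = I: the minimum-2-norm point of {x : Bx = d} is given by the pseudoinverse, and for a single constraint it spreads the load evenly, which also minimizes the infinity-norm. This is a sketch of the idea only, not the paper's parallel solver; the example constraint is ours.

```python
import numpy as np

# One constraint: x1 + x2 + x3 = 6. The minimum-2-norm solution assigns
# equal work to all three, which is also the minimum-infinity-norm solution.
B = np.array([[1.0, 1.0, 1.0]])
d = np.array([6.0])
x = np.linalg.pinv(B) @ d   # minimum-norm solution, approximately [2, 2, 2]
```

Because linearly constrained least squares is well studied and parallelizes easily, this 2-norm route sidesteps the intractable infinity-norm optimization.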
PLUM: Parallel Load Balancing for Unstructured Adaptive Meshes
NASA Technical Reports Server (NTRS)
Oliker, Leonid
1998-01-01
Dynamic mesh adaption on unstructured grids is a powerful tool for computing large-scale problems that require grid modifications to efficiently resolve solution features. Unfortunately, an efficient parallel implementation is difficult to achieve, primarily due to the load imbalance created by the dynamically-changing nonuniform grid. To address this problem, we have developed PLUM, an automatic portable framework for performing adaptive large-scale numerical computations in a message-passing environment. First, we present an efficient parallel implementation of a tetrahedral mesh adaption scheme. Extremely promising parallel performance is achieved for various refinement and coarsening strategies on a realistically sized domain. Next, we describe PLUM, a novel method for dynamically balancing the processor workloads in adaptive grid computations. This research includes interfacing the parallel mesh adaption procedure based on actual flow solutions to a data remapping module, and incorporating an efficient parallel mesh repartitioner. A significant runtime improvement is achieved by observing that data movement for a refinement step should be performed after the edge-marking phase but before the actual subdivision. We also present optimal and heuristic remapping cost metrics that can accurately predict the total overhead for data redistribution. Several experiments are performed to verify the effectiveness of PLUM on sequences of dynamically adapted unstructured grids. Portability is demonstrated by presenting results on the two vastly different architectures of the SP2 and the Origin2000. Additionally, we evaluate the performance of five state-of-the-art partitioning algorithms that can be used within PLUM. It is shown that for certain classes of unsteady adaption, globally repartitioning the computational mesh produces higher quality results than diffusive repartitioning schemes. We also demonstrate that a coarse starting mesh produces high quality load balancing, at
Dynamic Load-Balancing for Distributed Heterogeneous Computing of Parallel CFD Problems
NASA Technical Reports Server (NTRS)
Ecer, A.; Chien, Y. P.; Boenisch, T.; Akay, H. U.
2000-01-01
The developed methodology is aimed at improving the efficiency of executing block-structured algorithms on parallel, distributed, heterogeneous computers. The basic approach of these algorithms is to divide the flow domain into many sub-domains called blocks, and solve the governing equations over these blocks. The dynamic load-balancing problem is defined as the efficient distribution of the blocks among the available processors over a period of several hours of computations. In environments with computers of different architectures, operating systems, CPU speeds, memory sizes, loads, and network speeds, balancing the loads and managing the communication between processors becomes crucial. Load-balancing software tools for mutually dependent parallel processes have been created to efficiently utilize an advanced computation environment and algorithms. These tools are dynamic in nature because of the changes in the computer environment during execution time. More recently, these tools were extended to a second operating system: NT. In this paper, the problems associated with this application are discussed. Also, the developed algorithms were combined with the load-sharing capability of LSF to efficiently utilize workstation clusters for parallel computing. Finally, results are presented on running a NASA-based code, ADPAC, to demonstrate the developed tools for dynamic load balancing.
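The core idea of distributing blocks across processors of unequal speed can be sketched with a simple greedy heuristic. This is a toy illustration under assumed inputs (per-block cost estimates and relative processor speeds), not the authors' tool, and it ignores the communication and network-speed factors the paper accounts for:

```python
def assign_blocks(block_costs, proc_speeds):
    # Greedy LPT-style heuristic: place the costliest blocks first, each
    # on the processor that would finish it earliest given its speed.
    finish = {p: 0.0 for p in proc_speeds}   # projected finish time per proc
    assignment = {}
    for block, cost in sorted(block_costs.items(), key=lambda kv: -kv[1]):
        p = min(finish, key=lambda q: finish[q] + cost / proc_speeds[q])
        finish[p] += cost / proc_speeds[p]
        assignment[block] = p
    return assignment
```

A heterogeneity-aware mapping naturally sends small blocks to slow machines: with two equal-cost large blocks and one small one, a processor twice as fast absorbs the large work while the slow one takes the small block.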
Dynamic Load Balancing for Adaptive Meshes using Symmetric Broadcast Networks
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Saini, Subhash (Technical Monitor)
1998-01-01
Many scientific applications involve grids that lack a uniform underlying structure. These applications are often dynamic in the sense that the grid structure significantly changes between successive phases of execution. In parallel computing environments, mesh adaptation of grids through selective refinement/coarsening has proven to be an effective approach. However, achieving load balance while minimizing inter-processor communication and redistribution costs is a difficult problem. Traditional dynamic load balancers are mostly inadequate because they lack a global view across processors. In this paper, we compare a novel load balancer that utilizes symmetric broadcast networks (SBN) to a successful global load balancing environment (PLUM) created to handle adaptive unstructured applications. Our experimental results on the IBM SP2 demonstrate that performance of the proposed SBN load balancer is comparable to results achieved under PLUM.
Neural Network Algorithm for Particle Loading
J. L. V. Lewandowski
2003-04-25
An artificial neural network algorithm for continuous minimization is developed and applied to the case of numerical particle loading. It is shown that higher-order moments of the probability distribution function can be efficiently renormalized using this technique. A general neural network for the renormalization of an arbitrary number of moments is given.
A High Performance Load Balance Strategy for Real-Time Multicore Systems
Cho, Keng-Mao; Tsai, Chun-Wei; Chiu, Yi-Shiuan; Yang, Chu-Sing
2014-01-01
Finding ways to distribute workloads to each processor core and efficiently reduce power consumption is of vital importance, especially for real-time systems. In this paper, a novel scheduling algorithm is proposed for real-time multicore systems to balance the computation loads and save power. The developed algorithm, called power and deadline-aware multicore scheduling (PDAMS), simultaneously considers multiple criteria, including a novel factor and task deadlines. Experiment results show that the proposed algorithm can reduce energy consumption by up to 54.2% and reduce the number of missed deadlines, as compared to the other scheduling algorithms outlined in this paper. PMID:24955382
Implementation of GAMMON - An efficient load balancing strategy for a local computer system
NASA Technical Reports Server (NTRS)
Baumgartner, Katherine M.; Kling, Ralph M.; Wah, Benjamin W.
1989-01-01
GAMMON (Global Allocation from Maximum to Minimum in cONstant time), an efficient load-balancing algorithm, is described. GAMMON uses the available broadcast capability of multiaccess networks to implement an efficient search technique for finding hosts with maximal and minimal loads. The search technique has an average overhead which is independent of the number of participating stations. The transition from the theoretical concept to a practical, reliable, and efficient implementation is described.
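The max-to-min transfer at the heart of GAMMON can be sketched as below. This is a deliberate simplification: GAMMON's contribution is locating the extremes in constant average overhead via multiaccess-network broadcasts, which a centralized dictionary scan does not capture, and the unit-task transfer here is only illustrative.

```python
def balance_step(loads, threshold=2):
    # loads: host -> current queue length. Move one task from the most
    # loaded host to the least loaded one if the gap justifies it.
    hi = max(loads, key=loads.get)
    lo = min(loads, key=loads.get)
    if loads[hi] - loads[lo] > threshold:
        loads[hi] -= 1
        loads[lo] += 1
        return (hi, lo)   # (sender, receiver)
    return None           # system considered balanced
```

Repeated steps drive the maximum-minimum gap down to the threshold, after which no further migrations are triggered.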
Global Load Balancing with Parallel Mesh Adaption on Distributed-Memory Systems
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Oliker, Leonid; Sohn, Andrew
1996-01-01
Dynamic mesh adaption on unstructured grids is a powerful tool for efficiently computing unsteady problems to resolve solution features of interest. Unfortunately, this causes load imbalance among processors on a parallel machine. This paper describes the parallel implementation of a tetrahedral mesh adaption scheme and a new global load balancing method. A heuristic remapping algorithm is presented that assigns partitions to processors such that the redistribution cost is minimized. Results indicate that the parallel performance of the mesh adaption code depends on the nature of the adaption region and show a 35.5X speedup on 64 processors of an SP2 when 35% of the mesh is randomly adapted. For large-scale scientific computations, our load balancing strategy gives almost a sixfold reduction in solver execution times over non-balanced loads. Furthermore, our heuristic remapper yields processor assignments that are less than 3% off the optimal solutions but requires only 1% of the computational time.
PLUM: Parallel Load Balancing for Adaptive Unstructured Meshes
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Biswas, Rupak; Saini, Subhash (Technical Monitor)
1998-01-01
Mesh adaption is a powerful tool for efficient unstructured-grid computations but causes load imbalance among processors on a parallel machine. We present a novel method called PLUM to dynamically balance the processor workloads with a global view. This paper presents the implementation and integration of all major components within our dynamic load balancing strategy for adaptive grid calculations. Mesh adaption, repartitioning, processor assignment, and remapping are critical components of the framework that must be accomplished rapidly and efficiently so as not to cause a significant overhead to the numerical simulation. A data redistribution model is also presented that predicts the remapping cost on the SP2. This model is required to determine whether the gain from a balanced workload distribution offsets the cost of data movement. Results presented in this paper demonstrate that PLUM is an effective dynamic load balancing strategy which remains viable on a large number of processors.
Migration impact on load balancing - an experience on Amoeba
Zhu, W.; Socko, P.
1996-12-31
Load balancing has been studied extensively by simulation, and positive results were reported in most of this research. With the increasing availability of distributed systems, a few experiments have been carried out on real systems. These experimental studies rely either on task initiation alone or on task initiation plus task migration. In this paper, we present the results of a study of load balancing using a centralized policy to manage the load on a set of processors, carried out on an Amoeba system consisting of a set of 386-based machines linked by 10 Mbps Ethernet. On one hand, the results indicate the necessity of a load-balancing facility for a distributed system. On the other hand, the results question the benefit of using process migration to increase system performance under the configuration used in our experiments.
Dynamic Load Balancing for Adaptive Computations on Distributed-Memory Machines
NASA Technical Reports Server (NTRS)
1999-01-01
Dynamic load balancing is central to adaptive mesh-based computations on large-scale parallel computers. The principal investigator has investigated various issues of the dynamic load balancing problem under NASA JOVE and JAG grants. The major accomplishments of the project are two graph partitioning algorithms and a load balancing framework. The S-HARP dynamic graph partitioner is the fastest among known dynamic graph partitioners to date. It can partition a graph of over 100,000 vertices in 0.25 seconds on a 64-processor Cray T3E distributed-memory multiprocessor while maintaining scalability of over 16-fold speedup. Other known and widely used dynamic graph partitioners take a second or two while giving low scalability of only a few-fold speedup on 64 processors. These results have been published in journals and peer-reviewed flagship conferences.
Load Balancing Unstructured Adaptive Grids for CFD Problems
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Oliker, Leonid
1996-01-01
Mesh adaption is a powerful tool for efficient unstructured-grid computations but causes load imbalance among processors on a parallel machine. A dynamic load balancing method is presented that balances the workload across all processors with a global view. After each parallel tetrahedral mesh adaption, the method first determines if the new mesh is sufficiently unbalanced to warrant a repartitioning. If so, the adapted mesh is repartitioned, with new partitions assigned to processors so that the redistribution cost is minimized. The new partitions are accepted only if the remapping cost is compensated by the improved load balance. Results indicate that this strategy is effective for large-scale scientific computations on distributed-memory multiprocessors.
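The accept-only-if-worthwhile test described in this abstract can be sketched as a simple cost/gain comparison. All quantities and the linear cost model below are hypothetical; the actual work uses measured remapping cost metrics rather than this toy:

```python
def should_remap(old_loads, new_loads, remap_cost, steps, rate=1.0):
    # Accept the new partitioning only if the projected saving in solver
    # time (reduction of the bottleneck load over the coming `steps`
    # iterations, scaled by `rate`) outweighs the one-time cost of
    # moving the data. Units are arbitrary but must be consistent.
    gain = (max(old_loads) - max(new_loads)) * steps * rate
    return gain > remap_cost
```

For instance, reducing the bottleneck from 10 to 6 work units over 3 solver steps buys 12 units of time, which justifies a redistribution costing 5; a repartitioning that leaves the bottleneck unchanged is rejected regardless of how it rearranges the other partitions.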
BALANCING THE LOAD: A VORONOI BASED SCHEME FOR PARALLEL COMPUTATIONS
Steinberg, Elad; Yalinewich, Almog; Sari, Re'em; Duffell, Paul
2015-01-01
One of the key issues when running a simulation on multiple CPUs is maintaining a proper load balance throughout the run and minimizing communication between CPUs. We propose a novel method of utilizing a Voronoi diagram to achieve a nearly perfect load balance without the need for any global redistribution of data. As a showcase, we implement our method in RICH, a two-dimensional moving-mesh hydrodynamical code, but it can be extended trivially to other codes in two or three dimensions. Our tests show that this method is indeed efficient and can be used in a large variety of existing hydrodynamical codes.
A location selection policy of live virtual machine migration for power saving and load balancing.
Zhao, Jia; Ding, Yan; Xu, Gaochao; Hu, Liang; Dong, Yushuang; Fu, Xiaodong
2013-01-01
Green cloud data centers have become a research hotspot of virtualized cloud computing architecture, and load balancing has been one of the most important goals in cloud data centers. Since live virtual machine (VM) migration technology is widely used and studied in cloud computing, we have focused on location selection (the migration policy) of live VM migration for power saving and load balancing. We propose a novel approach, MOGA-LS, a heuristic and self-adaptive multiobjective optimization algorithm based on an improved genetic algorithm (GA). This paper presents the specific design and implementation of MOGA-LS, such as the design of the genetic operators, fitness values, and elitism. We introduce the Pareto dominance theory and the simulated annealing (SA) idea into MOGA-LS and present the specific process to obtain the final solution; thus, the whole approach achieves a long-term efficient optimization for power saving and load balancing. The experimental results demonstrate that MOGA-LS evidently reduces the total incremental power consumption, better protects the performance of VM migration, and achieves balanced system load compared with existing research. It makes the result of live VM migration more effective and meaningful.
A Location Selection Policy of Live Virtual Machine Migration for Power Saving and Load Balancing
Xu, Gaochao; Hu, Liang; Dong, Yushuang; Fu, Xiaodong
2013-01-01
Green cloud data centers have become a research hotspot of virtualized cloud computing architecture, and load balancing has been one of the most important goals in cloud data centers. Since live virtual machine (VM) migration technology is widely used and studied in cloud computing, we have focused on location selection (the migration policy) of live VM migration for power saving and load balancing. We propose a novel approach, MOGA-LS, a heuristic and self-adaptive multiobjective optimization algorithm based on an improved genetic algorithm (GA). This paper presents the specific design and implementation of MOGA-LS, such as the design of the genetic operators, fitness values, and elitism. We introduce the Pareto dominance theory and the simulated annealing (SA) idea into MOGA-LS and present the specific process to obtain the final solution; thus, the whole approach achieves a long-term efficient optimization for power saving and load balancing. The experimental results demonstrate that MOGA-LS evidently reduces the total incremental power consumption, better protects the performance of VM migration, and achieves balanced system load compared with existing research. It makes the result of live VM migration more effective and meaningful. PMID:24348165
Fuzzy Pool Balance: An algorithm to achieve a two dimensional balance in distribute storage systems
NASA Astrophysics Data System (ADS)
Wu, Wenjing; Chen, Gang
2014-06-01
The limitation of scheduling modules and the gradual addition of disk pools in distributed storage systems often result in imbalances among their disk pools in terms of both disk usage and file count. This can cause various problems for the storage system such as single points of failure, low system throughput, and imbalanced resource utilization and system loads. An algorithm named Fuzzy Pool Balance (FPB) is proposed here to solve this problem. The input of FPB is the current file distribution among disk pools and the output is a file migration plan indicating which files are to be migrated to which pools. FPB uses an array to classify the files by their sizes. The file classification array is dynamically calculated with a threshold named Tmax that defines the allowed pool disk usage deviations. File classification is the basis of file migration. FPB also defines the Immigration Pool (IP) and Emigration Pool (EP) according to the pool disk usage and the File Quantity Ratio (FQR), which indicates the percentage of each category of files in each disk pool; files with a higher FQR in an EP are migrated to IP(s) with a lower FQR for that file category. To verify this algorithm, we implemented FPB on an ATLAS Tier2 dCache production system. The results show that FPB can achieve a very good balance in both free space and file counts, and that adjusting the threshold value Tmax and the correction factor to the average FQR can achieve a tradeoff between free space and file count.
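The emigration/immigration classification step can be sketched as below. This is a minimal sketch assuming only per-pool usage fractions and the Tmax deviation threshold; the real FPB additionally classifies files by size and balances file counts via the FQR:

```python
def classify_pools(pool_usage, tmax=0.05):
    # pool_usage: pool name -> disk usage fraction in [0, 1].
    # Pools more than `tmax` above the mean emigrate files (EPs);
    # pools more than `tmax` below the mean receive them (IPs).
    avg = sum(pool_usage.values()) / len(pool_usage)
    eps = [p for p, u in pool_usage.items() if u > avg + tmax]
    ips = [p for p, u in pool_usage.items() if u < avg - tmax]
    return eps, ips
```

A migration plan then pairs EPs with IPs; pools within the tmax band around the mean are left alone, which is what makes the threshold a tunable tradeoff knob.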
Dynamic Load Balancing Strategies for Parallel Reacting Flow Simulations
NASA Astrophysics Data System (ADS)
Pisciuneri, Patrick; Meneses, Esteban; Givi, Peyman
2014-11-01
Load balancing in parallel computing aims at distributing the work as evenly as possible among the processors. This is a critical issue for the performance of parallel, time-accurate flow simulators. The constraint of time accuracy requires that all processes finish their calculation for a given time step before any process can begin calculating the next time step. Thus, an irregularly balanced compute load results in idle time for many processes at each iteration, and hence increased walltimes. Two existing dynamic load balancing approaches are applied to the simplified case of a partially stirred reactor for methane combustion. The first is Zoltan, a parallel partitioning, load balancing, and data management library developed at Sandia National Laboratories. The second is Charm++, a machine-independent parallel programming system developed at the University of Illinois at Urbana-Champaign. The performance of these two approaches is compared, and the prospects for their application to full 3D reacting-flow solvers are assessed.
Evaluation of delay performance in valiant load-balancing network
NASA Astrophysics Data System (ADS)
Yu, Yingdi; Jin, Yaohui; Cheng, Hong; Gao, Yu; Sun, Weiqiang; Guo, Wei; Hu, Weisheng
2007-11-01
Network traffic grows in an unpredictable way, which forces network operators to over-provision their backbone networks in order to meet increasing demands. Allowing for new users, applications, and unexpected failures, utilization is typically below 30% [1]. There are two methods aimed at solving this problem. The first is to adjust link capacity with the variation of traffic; however, in an optical network this requires a rapid signaling scheme and large buffers. The second is to use the statistical multiplexing of IP routers connected point-to-point by optical links to counteract the effects of traffic variation [2], but the routing mechanism becomes much more complex and introduces more overhead into the backbone network. To exploit the potential of the network and reduce its overhead, Valiant load balancing has been proposed for backbone networks in order to enhance utilization and simplify routing. Raising network utilization and improving throughput inevitably influence the end-to-end delay; however, the delay behavior of load balancing has received little study. In the work presented in this paper, we study the delay performance in a Valiant load-balancing network, and isolate the queuing delay for modeling and detailed analysis. We design the architecture of a switch with load-balancing capability for our simulation and experiment, and analyze the relationship between switch architecture and delay performance.
Work Stealing and Persistence-based Load Balancers for Iterative Overdecomposed Applications
Lifflander, Jonathan; Krishnamoorthy, Sriram; Kale, Laxmikant
2012-06-18
Applications often involve iterative execution of identical or slowly evolving calculations. Such applications require good initial load balance coupled with efficient periodic rebalancing. In this paper, we consider the design and evaluation of two distinct approaches to addressing this challenge: persistence-based load balancing and work stealing. The work to be performed is overdecomposed into tasks, enabling automatic rebalancing by the middleware. We present a hierarchical persistence-based rebalancing algorithm that performs localized incremental rebalancing. We also present an active-message-based retentive work stealing algorithm optimized for iterative applications on distributed memory machines. These are shown to incur low overheads and achieve over 90% efficiency on 76,800 cores.
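The owner-front/thief-back deque discipline underlying work stealing can be sketched as below. This is a sequential toy under assumed inputs: it ignores the active-message transport, atomicity, and retention (locality) features of the distributed-memory algorithm the abstract describes.

```python
from collections import deque

def next_task(queues, w):
    # queues: one deque of tasks per worker; w: the worker asking for work.
    # The owner pops from the FRONT of its own deque; when it runs dry,
    # it steals from the BACK of the longest other deque, so recently
    # spawned (cache-warm) tasks stay with their owner.
    if queues[w]:
        return queues[w].popleft()
    others = [i for i in range(len(queues)) if i != w]
    victim = max(others, key=lambda i: len(queues[i]))
    if queues[victim]:
        return queues[victim].pop()
    return None   # global termination: nothing left to steal
```

Overdecomposition matters here: with many more tasks than workers, a thief can almost always find a non-empty victim, which is what keeps idle time low.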
Towards a Load Balancing Middleware for Automotive Infotainment Systems
NASA Astrophysics Data System (ADS)
Khaluf, Yara; Rettberg, Achim
In this paper a middleware for distributed automotive systems is developed. The goal of this middleware is to support load balancing and service optimization in automotive infotainment and entertainment systems. These systems provide navigation, telecommunication, Internet, audio/video, and many other services; the developed middleware applies dynamic load balancing mechanisms together with service-quality optimization mechanisms in order to improve system performance and, where possible, the quality of services.
Monitoring dynamic loads on wind tunnel force balances
NASA Technical Reports Server (NTRS)
Ferris, Alice T.; White, William C.
1989-01-01
Two devices have been developed at NASA Langley to monitor the dynamic loads incurred during wind-tunnel testing. The Balance Dynamic Display Unit (BDDU) displays and monitors the combined static and dynamic forces and moments in the orthogonal axes. The Balance Critical Point Analyzer scales and sums each normalized signal from the BDDU to obtain combined dynamic and static signals that represent the dynamic loads at predefined high-stress points. The display of each instrument multiplexes six analog signals so that each channel is displayed sequentially as one-sixth of the horizontal axis on a single oscilloscope trace. This display format thus permits the operator to quickly and easily monitor the combined static and dynamic levels of up to six channels at the same time.
Load Balancing Strategies for Multi-Block Overset Grid Applications
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Biswas, Rupak; Lopez-Benitez, Noe; Biegel, Bryan (Technical Monitor)
2002-01-01
The multi-block overset grid method is a powerful technique for high-fidelity computational fluid dynamics (CFD) simulations about complex aerospace configurations. The solution process uses a grid system that discretizes the problem domain by using separately generated but overlapping structured grids that periodically update and exchange boundary information through interpolation. For efficient high performance computations of large-scale realistic applications using this methodology, the individual grids must be properly partitioned among the parallel processors. Overall performance, therefore, largely depends on the quality of load balancing. In this paper, we present three different load balancing strategies for overset grids and analyze their effects on the parallel efficiency of a Navier-Stokes CFD application running on an SGI Origin2000 machine.
Data Partitioning and Load Balancing in Parallel Disk Systems
NASA Technical Reports Server (NTRS)
Scheuermann, Peter; Weikum, Gerhard; Zabback, Peter
1997-01-01
Parallel disk systems provide opportunities for exploiting I/O parallelism in two possible ways, namely via inter-request and intra-request parallelism. In this paper we discuss the main issues in performance tuning of such systems, namely striping and load balancing, and show their relationship to response time and throughput. We outline the main components of an intelligent, self-reliant file system that aims to optimize striping by taking into account the requirements of the applications, and performs load balancing by judicious file allocation and dynamic redistribution of the data when access patterns change. Our system uses simple but effective heuristics that incur only a small overhead. We present performance experiments based on synthetic workloads and real-life traces.
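The judicious-file-allocation idea can be sketched with a "heat"-based placement rule. This is a minimal sketch under assumed inputs, not the paper's file system: each disk's heat (aggregate access rate of the files it holds) is tracked, and a new file goes to the coolest disk.

```python
def place_file(disk_heat, file_heat):
    # disk_heat: disk name -> current aggregate access rate.
    # Allocate the new file to the coolest disk and update its heat.
    d = min(disk_heat, key=disk_heat.get)
    disk_heat[d] += file_heat
    return d
```

Dynamic redistribution follows the same logic in reverse: when access patterns change and one disk's heat drifts well above the mean, its hottest files are candidates to move to the coolest disks.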
Efficient Load Balancing and Data Remapping for Adaptive Grid Calculations
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Biswas, Rupak
1997-01-01
Mesh adaption is a powerful tool for efficient unstructured-grid computations but causes load imbalance among processors on a parallel machine. We present a novel method to dynamically balance the processor workloads with a global view. This paper presents, for the first time, the implementation and integration of all major components within our dynamic load balancing strategy for adaptive grid calculations. Mesh adaption, repartitioning, processor assignment, and remapping are critical components of the framework that must be accomplished rapidly and efficiently so as not to cause a significant overhead to the numerical simulation. Previous results indicated that mesh repartitioning and data remapping are potential bottlenecks for performing large-scale scientific calculations. We resolve these issues and demonstrate that our framework remains viable on a large number of processors.
Dual strain gage balance system for measuring light loads
NASA Technical Reports Server (NTRS)
Roberts, Paul W. (Inventor)
1991-01-01
A dual strain gage balance system for measuring normal and axial forces and pitching moment of a metric airfoil model imparted by aerodynamic loads applied to the airfoil model during wind tunnel testing includes a pair of non-metric panels being rigidly connected to and extending towards each other from opposite sides of the wind tunnel, and a pair of strain gage balances, each connected to one of the non-metric panels and to one of the opposite ends of the metric airfoil model for mounting the metric airfoil model between the pair of non-metric panels. Each strain gage balance has a first measuring section for mounting a first strain gage bridge for measuring normal force and pitching moment and a second measuring section for mounting a second strain gage bridge for measuring axial force.
Selective randomized load balancing and mesh networks with changing demands
NASA Astrophysics Data System (ADS)
Shepherd, F. B.; Winzer, P. J.
2006-05-01
We consider the problem of building cost-effective networks that are robust to dynamic changes in demand patterns. We compare several architectures using demand-oblivious routing strategies. Traditional approaches include single-hop architectures based on a (static or dynamic) circuit-switched core infrastructure and multihop (packet-switched) architectures based on point-to-point circuits in the core. To address demand uncertainty, we seek minimum cost networks that can carry the class of hose demand matrices. Apart from shortest-path routing, Valiant's randomized load balancing (RLB), and virtual private network (VPN) tree routing, we propose a third, highly attractive approach: selective randomized load balancing (SRLB). This is a blend of dual-hop hub routing and randomized load balancing that combines the advantages of both architectures in terms of network cost, delay, and delay jitter. In particular, we give empirical analyses for the cost (in terms of transport and switching equipment) for the discussed architectures, based on three representative carrier networks. Of these three networks, SRLB maintains the resilience properties of RLB while achieving significant cost reduction over all other architectures, including RLB and multihop Internet protocol/multiprotocol label switching (IP/MPLS) networks using VPN-tree routing.
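The two-hop routing at the core of RLB can be sketched in a few lines; restricting the intermediate to a hub subset gives the flavor of the selective variant. This is an illustrative sketch, not the authors' network model; node names and the hub parameter are assumptions.

```python
import random

def rlb_route(src, dst, nodes, rng, hubs=None):
    # Valiant RLB: forward via a uniformly random intermediate node,
    # which spreads any admissible traffic matrix evenly over the mesh.
    # Passing `hubs` restricts intermediates to a chosen subset, in the
    # spirit of selective RLB (SRLB).
    mid = rng.choice(hubs if hubs else nodes)
    if mid in (src, dst):
        return [src, dst]          # degenerate case: direct hop
    return [src, mid, dst]
```

Because the intermediate is chosen independently of the demand matrix, the expected load on every link is fixed by the node count alone, which is what makes the design demand-oblivious.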
Load Balancing at Emergency Departments using ‘Crowdinforming’
Friesen, Marcia R; Strome, Trevor; Mukhi, Shamir; McLoed, Robert
2011-01-01
Background: Emergency Department (ED) overcrowding is an important healthcare issue facing increasing public and regulatory scrutiny in Canada and around the world. Many approaches to alleviate excessive waiting times and lengths of stay have been studied. In theory, optimal ED patient flow may be assisted via balancing patient loads between EDs (in essence spreading patients more evenly throughout this system). This investigation utilizes simulation to explore “Crowdinforming” as a basis for a process control strategy aimed to balance patient loads between six EDs within a mid-sized Canadian city. Methods: Anonymous patient visit data comprising 120,000 ED patient visits over six months to six ED facilities were obtained from the region’s Emergency Department Information System (EDIS) to (1) determine trends in ED visits and interactions between parameters; (2) to develop a process control strategy integrating crowdinforming; and, (3) apply and evaluate the model in a simulated environment to explore the potential impact on patient self-redirection and load balancing between EDs. Results: As in reality, the data available and subsequent model demonstrated that there are many factors that impact ED patient flow. Initial results suggest that for this particular data set used, ED arrival rates were the most useful metric for ED ‘busyness’ in a process control strategy, and that Emergency Department performance may benefit from load balancing efforts. Conclusions: The simulation supports the use of crowdinforming as a potential tool when used in a process control strategy to balance the patient loads between EDs. The work also revealed that the value of several parameters intuitively expected to be meaningful metrics of ED ‘busyness’ was not evident, highlighting the importance of finding parameters meaningful within one’s particular data set. The information provided in the crowdinforming model is already available in a local context at some ED sites
An efficient algorithm using matrix methods to solve wind tunnel force-balance equations
NASA Technical Reports Server (NTRS)
Smith, D. L.
1972-01-01
An iterative procedure applying matrix methods to accomplish an efficient algorithm for automatic computer reduction of wind-tunnel force-balance data has been developed. Balance equations are expressed in a matrix form that is convenient for storing balance sensitivities and interaction coefficient values for online or offline batch data reduction. The convergence of the iterative values to a unique solution of this system of equations is investigated, and it is shown that for balances which satisfy the criteria discussed, this type of solution does occur. Methods for making sensitivity adjustments and initial load effect considerations in wind-tunnel applications are also discussed, and the logic for determining the convergence accuracy limits for the iterative solution is given. This more efficient data reduction program is compared with the technique presently in use at the NASA Langley Research Center, and computational times on the order of one-third or less are demonstrated by use of this new program.
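The iterative scheme described above can be sketched as a fixed-point loop. The 2x2 system, the quadratic interaction model, and all numbers below are hypothetical stand-ins (real force balances involve six components and full interaction coefficient matrices), but the structure (invert the linear sensitivities once, then iterate on the interaction correction) matches the abstract:

```python
def mat2_inv(S):
    # Inverse of a 2x2 sensitivity matrix.
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    return [[ S[1][1] / det, -S[0][1] / det],
            [-S[1][0] / det,  S[0][0] / det]]

def solve_balance(R, S, c, iters=30):
    # Solve R = S F + c F^2 (componentwise interaction term) for the
    # loads F by fixed-point iteration: F <- S^-1 (R - c F^2).
    Sinv = mat2_inv(S)
    F = [0.0, 0.0]
    for _ in range(iters):
        rhs = [R[i] - c[i] * F[i] ** 2 for i in range(2)]
        F = [Sinv[i][0] * rhs[0] + Sinv[i][1] * rhs[1] for i in range(2)]
    return F
```

Because the interaction coefficients are small relative to the primary sensitivities, the iteration is a contraction and converges to machine precision in a handful of passes, which is consistent with the convergence behavior the abstract discusses.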
Population-based learning of load balancing policies for a distributed computer system
NASA Technical Reports Server (NTRS)
Mehra, Pankaj; Wah, Benjamin W.
1993-01-01
Effective load-balancing policies use dynamic resource information to schedule tasks in a distributed computer system. We present a novel method for automatically learning such policies. At each site in our system, we use a comparator neural network to predict the relative speedup of an incoming task using only the resource-utilization patterns obtained prior to the task's arrival. Outputs of these comparator networks are broadcast periodically over the distributed system, and the resource schedulers at each site use these values to determine the best site for executing an incoming task. The delays incurred in propagating workload information and tasks from one site to another, as well as the dynamic and unpredictable nature of workloads in multiprogrammed multiprocessors, may cause the workload pattern at the time of execution to differ from patterns prevailing at the times of load-index computation and decision making. Our load-balancing policy accommodates this uncertainty by using certain tunable parameters. We present a population-based machine-learning algorithm that adjusts these parameters in order to achieve high average speedups with respect to local execution. Our results show that our load-balancing policy, when combined with the comparator neural network for workload characterization, is effective in exploiting idle resources in a distributed computer system.
Priority-rotating DBA with adaptive load balance for reconfigurable WDM/TDM PON
NASA Astrophysics Data System (ADS)
Xia, Weidong; Gan, Chaoqin; Xie, Weilun; Ni, Cuiping
2015-12-01
For the wavelength-division multiplexing/time-division multiplexing passive optical network (WDM/TDM PON) architecture that implements wavelength sharing and traffic redirection, a priority-rotating dynamic bandwidth allocation (DBA) algorithm is proposed in this paper. The priority of each optical network unit (ONU) is set and rotated to meet bandwidth demands and guarantee fairness among ONUs. The bandwidth allocation for priority queues is employed to avoid bandwidth monopolization and over-allocation. The bandwidth allocation for high-load situations and for redirected traffic is discussed to achieve adaptive load balance over wavelengths and among ONUs. The simulation results show good performance of the proposed algorithm in throughput rate and average packet delay.
NASA Technical Reports Server (NTRS)
Richardson, J.; Labbe, M.; Belala, Y.; Leduc, Vincent
1994-01-01
The requirement for improving aircraft utilization and responsiveness in airlift operations has been recognized for quite some time by the Canadian Forces. To date, the utilization of scarce airlift resources has been planned mainly through the employment of manpower-intensive manual methods in combination with the expertise of highly qualified personnel. In this paper, we address the problem of facilitating the load planning process for military cargo aircraft through the development of a computer-based system. We introduce TALBAS (Transport Aircraft Loading and BAlancing System), a knowledge-based system designed to assist personnel involved in preparing valid load plans for the C130 Hercules aircraft. The main features of this system, which are accessible through a user-friendly graphical user interface, consist of the automatic generation of valid cargo arrangements given a list of items to be transported, the user definition of load plans, and the automatic validation of such load plans.
Biologically inspired load balancing mechanism in neocortical competitive learning.
Tal, Amir; Peled, Noam; Siegelmann, Hava T
2014-01-01
A unique delayed self-inhibitory pathway mediated by layer 5 Martinotti cells was studied in a biologically inspired neural network simulation. Inclusion of this pathway, along with layer 5 basket cell lateral inhibition, caused balanced competitive learning, which led to the formation of neuronal clusters such as have indeed been reported in the same region. The Martinotti pathway proves to act as a learning "conscience," causing overly successful regions in the network to restrict themselves and let others fire. It thus spreads connectivity more evenly throughout the net and solves the "dead unit" problem of clustering algorithms in a local and biologically plausible manner.
Estimating nutrient loadings using chemical mass balance approach.
Jain, C K; Singhal, D C; Sharma, M K
2007-11-01
The river Hindon is one of the important tributaries of the river Yamuna in western Uttar Pradesh (India) and carries pollution loads from various municipal and industrial units and surrounding agricultural areas. The main sources of pollution in the river include municipal wastes from the Saharanpur, Muzaffarnagar and Ghaziabad urban areas and industrial effluents of sugar, pulp and paper, distilleries and other miscellaneous industries, through tributaries as well as direct inputs. In this paper, a chemical mass balance approach has been used to assess the contribution from non-point sources of pollution to the river. The river system has been divided into three stretches depending on the land use pattern. The contributions of point sources in the upper and lower stretches are 95% and 81%, respectively, of the total flow of the river, while there is no point-source input in the middle stretch. Mass balance calculations indicate that the contribution of nitrate and phosphate from non-point sources amounts to 15.5% and 6.9% in the upper stretch and 13.1% and 16.6% in the lower stretch, respectively. Observed differences in the load along the river may be attributed to uncharacterized sources of pollution due to agricultural activities, remobilization from or entrainment of contaminated bottom sediments, ground water contribution, or a combination of these sources. PMID:17616829
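The mass-balance bookkeeping for a single river stretch can be sketched as follows; the units, numbers, and function names are illustrative, not the authors':

```python
def load_g_per_s(flow_m3_s, conc_mg_L):
    """Constituent load = discharge x concentration (1 mg/L = 1 g/m^3)."""
    return flow_m3_s * conc_mg_L

def nonpoint_contribution(upstream, point_sources, downstream):
    """Chemical mass balance over a stretch: the downstream load not
    explained by the upstream load plus known point-source loads is
    attributed to non-point sources. Each argument is (flow, concentration)."""
    return (load_g_per_s(*downstream) - load_g_per_s(*upstream)
            - sum(load_g_per_s(f, c) for f, c in point_sources))

# Synthetic stretch: upstream 10 m^3/s at 2 mg/L nitrate, one point source
# of 2 m^3/s at 15 mg/L, downstream 12.5 m^3/s at 4.4 mg/L.
np_load = nonpoint_contribution((10.0, 2.0), [(2.0, 15.0)], (12.5, 4.4))
```

Here the downstream load is 55 g/s against 50 g/s of accounted inputs, so 5 g/s is attributed to non-point sources.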
Strain gage selection in loads equations using a genetic algorithm
NASA Technical Reports Server (NTRS)
1994-01-01
Traditionally, structural loads are measured using strain gages. A loads calibration test must be done before loads can be accurately measured. In one measurement method, a series of point loads is applied to the structure, and loads equations are derived via the least squares curve fitting algorithm using the strain gage responses to the applied point loads. However, many research structures are highly instrumented with strain gages, and the number and selection of gages used in a loads equation can be problematic. This paper presents an improved technique using a genetic algorithm to choose the strain gages used in the loads equations. Also presented is a comparison of the genetic algorithm's performance with the current T-value technique and with a variant known as the Best Step-down technique. Examples are shown using aerospace vehicle wings of high and low aspect ratio. In addition, a significant limitation in the current methods is revealed. The genetic algorithm arrived at a comparable or superior set of gages with significantly less human effort, and could be applied in instances when the current methods could not.
A Hybrid Ant Colony Algorithm for Loading Pattern Optimization
NASA Astrophysics Data System (ADS)
Hoareau, F.
2014-06-01
Electricité de France (EDF) operates 58 nuclear power plants (NPPs) of the Pressurized Water Reactor (PWR) type. The loading pattern (LP) optimization of these NPPs is currently done by EDF expert engineers. Within this framework, EDF R&D has developed automatic optimization tools that assist the experts. The latter can resort, for instance, to loading pattern optimization software based on an ant colony algorithm. This paper presents an analysis of the search space of a few realistic loading pattern optimization problems. This analysis leads us to introduce a hybrid algorithm based on ant colony optimization and a local search method. We then show that this new algorithm is able to generate loading patterns of good quality.
A network flow model for load balancing in circuit-switched multicomputers
NASA Technical Reports Server (NTRS)
Bokhari, Shahid H.
1990-01-01
In multicomputers that utilize circuit switching or wormhole routing, communication overhead depends largely on link contention - the variation due to distance between nodes is negligible. This has a major impact on the load balancing problem. In this case, there are some nodes with excess load (sources) and others with deficit load (sinks) and it is required to find a matching of sources to sinks that avoids contention. The problem is made complex by the hardwired routing on currently available machines: the user can control only which nodes communicate but not how the messages are routed. Network flow models of message flow in the mesh and the hypercube were developed to solve this problem. The crucial property of these models is the correspondence between minimum cost flows and correctly routed messages. To solve a given load balancing problem, a minimum cost flow algorithm is applied to the network. This permits one to determine efficiently a maximum contention free matching of sources to sinks which, in turn, tells one how much of the given imbalance can be eliminated without contention.
A Baseline Load Schedule for the Manual Calibration of a Force Balance
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Gisler, R.
2013-01-01
A baseline load schedule for the manual calibration of a force balance was developed that takes current capabilities at the NASA Ames Balance Calibration Laboratory into account. The load schedule consists of 18 load series with a total of 194 data points. It was designed to satisfy six requirements: (i) positive and negative loadings should be applied for each load component; (ii) at least three loadings should be applied between 0 % and 100 % load capacity; (iii) normal and side force loadings should be applied at the forward gage location, the aft gage location, and the balance moment center; (iv) the balance should be used in UP and DOWN orientation to get axial force loadings; (v) the constant normal and side force approaches should be used to get the rolling moment loadings; (vi) rolling moment loadings should be obtained for 0, 90, 180, and 270 degrees balance orientation. Three different approaches are also reviewed that may be used to independently estimate the natural zeros of the balance. These three approaches provide gage output differences that may be used to estimate the weight of both the metric and non-metric part of the balance. Manual calibration data of NASA's MK29A balance and machine calibration data of NASA's MC60D balance are used to illustrate and evaluate different aspects of the proposed baseline load schedule design.
Load-balanced parallel streamline generation on large scale vector fields.
Nouanesengsy, Boonthanome; Lee, Teng-Yok; Shen, Han-Wei
2011-12-01
Because of the ever-increasing size of output data from scientific simulations, supercomputers are increasingly relied upon to generate visualizations. One use of supercomputers is to generate field lines from large scale flow fields. When generating field lines in parallel, the vector field is generally decomposed into blocks, which are then assigned to processors. Since various regions of the vector field can have different flow complexity, processors will require varying amounts of computation time to trace their particles, causing load imbalance, and thus limiting the performance speedup. To achieve load-balanced streamline generation, we propose a workload-aware partitioning algorithm to decompose the vector field into partitions with near equal workloads. Since actual workloads are unknown beforehand, we propose a workload estimation algorithm to predict the workload in the local vector field. A graph-based representation of the vector field is employed to generate these estimates. Once the workloads have been estimated, our partitioning algorithm is hierarchically applied to distribute the workload to all partitions. We examine the performance of our workload estimation and workload-aware partitioning algorithm in several timing studies, which demonstrate that by employing these methods, better scalability can be achieved with little overhead. PMID:22034295
A Baseline Load Schedule for the Manual Calibration of a Force Balance
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Gisler, R.
2013-01-01
A baseline load schedule for the manual calibration of a force balance is defined that takes current capabilities at the NASA Ames Balance Calibration Laboratory into account. The chosen load schedule consists of 18 load series with a total of 194 data points. It was designed to satisfy six requirements: (i) positive and negative loadings should be applied for each load component; (ii) at least three loadings should be applied between 0 % and 100 % load capacity; (iii) normal and side force loadings should be applied at the forward gage location, aft gage location, and the balance moment center; (iv) the balance should be used in "up" and "down" orientation to get positive and negative axial force loadings; (v) the constant normal and side force approaches should be used to get the rolling moment loadings; (vi) rolling moment loadings should be obtained for 0, 90, 180, and 270 degrees balance orientation. In addition, three different approaches are discussed in the paper that may be used to independently estimate the natural zeros, i.e., the gage outputs of the absolute load datum of the balance. These three approaches provide gage output differences that can be used to estimate the weight of both the metric and non-metric part of the balance. Data from the calibration of a six-component force balance will be used in the final manuscript of the paper to illustrate characteristics of the proposed baseline load schedule.
Dynamic load balancing in a concurrent plasma PIC code on the JPL/Caltech Mark III hypercube
Liewer, P.C.; Leaver, E.W.; Decyk, V.K.; Dawson, J.M.
1990-12-31
Dynamic load balancing has been implemented in a concurrent one-dimensional electromagnetic plasma particle-in-cell (PIC) simulation code using a method which adds very little overhead to the parallel code. In PIC codes, the orbits of many interacting plasma electrons and ions are followed as an initial value problem as the particles move in electromagnetic fields calculated self-consistently from the particle motions. The code was implemented using the GCPIC algorithm in which the particles are divided among processors by partitioning the spatial domain of the simulation. The problem is load-balanced by partitioning the spatial domain so that each partition has approximately the same number of particles. During the simulation, the partitions are dynamically recreated as the spatial distribution of the particles changes in order to maintain processor load balance.
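The equal-particle partitioning idea in the GCPIC description above can be sketched in one dimension: sort particles by position and place slab boundaries at equal-count quantiles. The function name and the midpoint rule for boundaries are illustrative assumptions, not the actual code's interface:

```python
import random

def equal_particle_slabs(positions, num_procs):
    """Place num_procs - 1 slab boundaries so each processor's spatial slab
    holds (almost) the same number of particles (1D GCPIC-style sketch)."""
    xs = sorted(positions)
    n = len(xs)
    cuts = []
    for k in range(1, num_procs):
        i = k * n // num_procs                  # particle index at the k-th split
        cuts.append(0.5 * (xs[i - 1] + xs[i]))  # boundary midway between neighbours
    return cuts

# Strongly clustered distribution: most particles bunched near x = 0.5.
random.seed(0)
particles = [random.gauss(0.5, 0.1) for _ in range(1000)]
cuts = equal_particle_slabs(particles, 4)

# Count particles per slab to verify the balance.
edges = [float("-inf")] + cuts + [float("inf")]
counts = [sum(1 for x in particles if lo <= x < hi)
          for lo, hi in zip(edges, edges[1:])]
```

Repeating this step as particles move is the dynamic rebalancing the abstract describes: the slab widths change, but the per-processor particle counts stay nearly equal.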
Carmichael, H.
1953-01-01
A torsional-type analytical balance designed to arrive at its equilibrium point more quickly than previous balances is described. In order to prevent external heat sources creating air currents inside the balance casing that would retard the attainment of equilibrium conditions, a relatively thick casing shaped as an inverted U is placed over the load support arms and the balance beam. This casing is of a metal of good thermal conductivity characteristics, such as copper or aluminum, in order that heat applied to one portion of the balance is quickly conducted to all other sensitive areas, thus effectively preventing the formation of air currents caused by unequal heating of the balance.
GRACOS: Scalable and Load Balanced P3M Cosmological N-body Code
NASA Astrophysics Data System (ADS)
Shirokov, Alexander; Bertschinger, Edmund
2010-10-01
The GRACOS (GRAvitational COSmology) code, a parallel implementation of the particle-particle/particle-mesh (P3M) algorithm for distributed memory clusters, uses a hybrid method for both computation and domain decomposition. Long-range forces are computed using a Fourier transform gravity solver on a regular mesh; the mesh is distributed across parallel processes using a static one-dimensional slab domain decomposition. Short-range forces are computed by direct summation of close pairs; particles are distributed using a dynamic domain decomposition based on a space-filling Hilbert curve. A nearly optimal method was devised to dynamically repartition the particle distribution so as to maintain load balance even for extremely inhomogeneous mass distributions. Tests using 800^3 simulations on a 40-processor Beowulf cluster showed good load balance and scalability up to 80 processes. There are limits on scalability imposed by communication and extreme clustering which may be removed by extending the algorithm to include adaptive mesh refinement.
Electricity load forecasting using support vector regression with memetic algorithms.
Hu, Zhongyi; Bao, Yukun; Xiong, Tao
2013-01-01
Electricity load forecasting is an important issue that is widely explored and examined in the power systems operation literature as well as in the literature on commercial transactions in electricity markets. Among the existing forecasting models, support vector regression (SVR) has gained much attention. Because the performance of SVR depends heavily on its parameters, this study proposes a firefly algorithm (FA) based memetic algorithm (FA-MA) to appropriately determine the parameters of the SVR forecasting model. In the proposed FA-MA algorithm, the FA is applied to explore the solution space, and pattern search is used to conduct individual learning and thus enhance the exploitation of the FA. Experimental results confirm that the proposed FA-MA based SVR model not only yields more accurate forecasting results than four other evolutionary-algorithm-based SVR models and three well-known forecasting models but also outperforms the hybrid algorithms in the related existing literature.
Combined Load Diagram for a Wind Tunnel Strain-Gage Balance
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2010-01-01
Combined Load Diagrams for Direct-Read, Force, and Moment Balances are discussed in great detail in the paper. The diagrams, if compared with a corresponding combined load plot of a balance calibration data set, may be used to visualize and interpret basic relationships between the applied balance calibration loads and the load components at the forward and aft gage of a strain-gage balance. Lines of constant total force and moment are identified in the diagrams. In addition, the lines of pure force and pure moment are highlighted. Finally, lines of constant moment arm are depicted. It is also demonstrated that each quadrant of a Combined Load Diagram has specific regions where the applied total calibration force is at, between, or outside of the balance gage locations. Data from the manual calibration of a Force Balance are used to illustrate the application of a Combined Load Diagram to a realistic data set.
The Plasma Simulation Code: A modern particle-in-cell code with patch-based load-balancing
NASA Astrophysics Data System (ADS)
Germaschewski, Kai; Fox, William; Abbott, Stephen; Ahmadi, Narges; Maynard, Kristofor; Wang, Liang; Ruhl, Hartmut; Bhattacharjee, Amitava
2016-08-01
This work describes the Plasma Simulation Code (PSC), an explicit, electromagnetic particle-in-cell code with support for different order particle shape functions. We review the basic components of the particle-in-cell method as well as the computational architecture of the PSC code that allows support for modular algorithms and data structures in the code. We then describe and analyze in detail a distinguishing feature of PSC: patch-based load balancing using space-filling curves, which is shown to lead to major efficiency gains over unbalanced methods and over a previously used, simpler balancing method.
Mori, Yoshiharu; Okumura, Hisashi
2015-12-01
Simulated tempering (ST) is a useful method to enhance the sampling of molecular simulations. When ST is used, the Metropolis algorithm, which satisfies the detailed balance condition, is usually applied to calculate the transition probability. Recently, an alternative method that satisfies the global balance condition instead of the detailed balance condition was proposed by Suwa and Todo. In this study, an ST method using the Suwa-Todo algorithm is proposed. Molecular dynamics simulations with ST are performed with three algorithms (the Metropolis, heat bath, and Suwa-Todo algorithms) to calculate the transition probability. Among the three algorithms, the Suwa-Todo algorithm yields the highest acceptance ratio and the shortest autocorrelation time, suggesting that sampling by an ST simulation with the Suwa-Todo algorithm is the most efficient. In addition, because the acceptance ratio of the Suwa-Todo algorithm is higher than that of the Metropolis algorithm, the number of temperature states can be reduced by 25% for the Suwa-Todo algorithm when compared with the Metropolis algorithm.
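For reference, the conventional Metropolis temperature move that the paper compares against can be sketched as follows. The weight factors f_m and the neighbour-proposal scheme are standard simulated-tempering ingredients assumed here, not details taken from the paper; the Suwa-Todo rule itself is more involved and not reproduced:

```python
import math
import random

def metropolis_temperature_move(state, energy, betas, weights, rng):
    """One simulated-tempering temperature update with the Metropolis rule:
    propose a neighbouring inverse temperature and accept with probability
    min(1, exp(-(beta_new - beta_old)*E + f_new - f_old))."""
    proposal = state + rng.choice([-1, 1])
    if proposal < 0 or proposal >= len(betas):
        return state  # proposal falls off the temperature ladder: reject
    log_ratio = (-(betas[proposal] - betas[state]) * energy
                 + weights[proposal] - weights[state])
    if log_ratio >= 0 or rng.random() < math.exp(log_ratio):
        return proposal
    return state

# With weights tuned as f_m = beta_m * E for a fixed energy E, every in-range
# proposal is accepted and the walk explores the whole ladder freely.
betas, E = [1.0, 2.0, 3.0], 1.0
weights = [b * E for b in betas]
rng = random.Random(1)
state, visited = 0, set()
for _ in range(1000):
    state = metropolis_temperature_move(state, E, betas, weights, rng)
    visited.add(state)
```

In a real ST run the energy changes between temperature updates and the weights must be estimated, which is exactly where the choice of acceptance rule (Metropolis, heat bath, or Suwa-Todo) affects sampling efficiency.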
Dynamic load balancing of matrix-vector multiplications on roadrunner compute nodes
Sancho Pitarch, Jose Carlos
2009-01-01
Hybrid architectures that combine general purpose processors with accelerators are being adopted in several large-scale systems such as the petaflop Roadrunner supercomputer at Los Alamos. In this system, dual-core Opteron host processors are tightly coupled with PowerXCell 8i processors within each compute node. In this kind of hybrid architecture, an accelerated mode of operation is typically used to offload performance hotspots in the computation to the accelerators. In this paper we explore the suitability of a variant of this acceleration mode in which the performance hotspots are actually shared between the host and the accelerators. To achieve this we have designed a new load balancing algorithm, which is optimized for the Roadrunner compute nodes, to dynamically distribute computation and associated data between the host and the accelerators at runtime. Results are presented using this approach for sparse and dense matrix-vector multiplications that show load-balancing can improve performance by up to 24% over solely using the accelerators.
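The rebalancing step described above can be sketched as a simple rate-proportional row split between host and accelerator. The function name and single-iteration timing model are assumptions for illustration, not the Roadrunner implementation:

```python
def rebalance_rows(host_rows, accel_rows, host_time, accel_time):
    """Redistribute matrix rows so host and accelerator finish together:
    give each side a share proportional to its measured processing rate
    (rows per second) from the previous iteration."""
    host_rate = host_rows / host_time
    accel_rate = accel_rows / accel_time
    total = host_rows + accel_rows
    new_host = round(total * host_rate / (host_rate + accel_rate))
    return new_host, total - new_host

# The accelerator proved 4x faster on an even split, so after rebalancing
# it receives 4x the rows.
split = rebalance_rows(500, 500, host_time=2.0, accel_time=0.5)
```

Applying this update each iteration adapts the split at runtime as rates drift, which is the dynamic behaviour the abstract credits for the reported speedup.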
Heyland, Mark; Trepczynski, Adam; Duda, Georg N; Zehn, Manfred; Schaser, Klaus-Dieter; Märdian, Sven
2015-12-01
Selection of boundary constraints may influence the amount and distribution of loads. The purpose of this study is to analyze the potential of inertia relief and follower load to maintain the effects of musculoskeletal loads even under large deflections in patient-specific finite element models of intact or fractured bone, compared to empirical boundary constraints which have been shown to lead to physiological displacements and surface strains. The goal is to elucidate the use of boundary conditions in strain analyses of bones. Finite element models of the intact femur and a model of clinically relevant fracture stabilization by locking-plate fixation were analyzed with normal walking loading conditions for different boundary conditions, specifically re-balanced loading, inertia relief, and follower load. Peak principal cortex surface strains for the different boundary conditions are consistent (maximum deviation 13.7%) except for inertia relief without force balancing (maximum deviation 108.4%). The influence of follower load on displacements increases with higher deflection in the fracture model (from 3% to 7% for the force-balanced model). For load-balanced models, follower load had only minor influence, though the effect increases strongly with higher deflection. Conventional constraints of fixed nodes in space should be carefully reconsidered because their type and position are challenging to justify and because of their potential to introduce relevant non-physiological reaction forces. Inertia relief provides an alternative method which yields physiological strain results.
Tseng, Chinyang Henry
2016-01-01
In wireless networks, low-power Zigbee is an excellent network solution for wireless medical monitoring systems. Medical monitoring generally involves transmission of a large amount of data and easily causes bottleneck problems. Although Zigbee's AODV mesh routing provides extensible multi-hop data transmission to extend network coverage, it lacks the load-balancing mechanism needed to avoid bottlenecks. To guarantee more reliable multi-hop data transmission for life-critical medical applications, we have developed a multipath solution, called Load-Balanced Multipath Routing (LBMR), to replace Zigbee's routing mechanism. LBMR consists of three main parts: Layer Routing Construction (LRC), a Load Estimation Algorithm (LEA), and a Route Maintenance (RM) mechanism. LRC assigns nodes into different layers based on the node's distance to the medical data gateway. Nodes can have multiple next-hops delivering medical data toward the gateway. All neighboring layer-nodes exchange flow information containing current load, which is then used by the LEA to estimate the future load of next-hops to the gateway. With LBMR, nodes can choose the neighbors with the least load as the next-hops and thus can achieve load balancing and avoid bottlenecks. Furthermore, RM can detect route failures in real-time and perform route redirection to ensure routing robustness. Since LRC and LEA prevent bottlenecks while RM ensures routing fault tolerance, LBMR provides a highly reliable routing service for medical monitoring. To evaluate these accomplishments, we compare LBMR with Zigbee's AODV and another multipath protocol, AOMDV. The simulation results demonstrate LBMR achieves better load balancing, fewer unreachable nodes, and a better packet delivery ratio than either AODV or AOMDV. PMID:27258297
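The least-load next-hop rule at the heart of the LEA can be sketched as follows; the data structures and function names are illustrative, not Zigbee or LBMR APIs:

```python
def choose_next_hop(neighbor_loads):
    """Pick the upper-layer neighbour (one layer closer to the gateway)
    with the smallest estimated load; None means no route exists and
    route maintenance should take over."""
    if not neighbor_loads:
        return None
    return min(neighbor_loads, key=neighbor_loads.get)

def forward_packets(neighbor_loads, num_packets):
    """Send packets one at a time via the least-loaded neighbour, updating
    the local load estimate after each send."""
    route = []
    for _ in range(num_packets):
        hop = choose_next_hop(neighbor_loads)
        neighbor_loads[hop] += 1
        route.append(hop)
    return route

# Three candidate next-hops with uneven load; five packets even them out.
loads = {"A": 3, "B": 0, "C": 1}
route = forward_packets(loads, 5)
```

Because every packet goes to the currently least-loaded neighbour, traffic spreads across the candidate next-hops rather than piling onto a single bottleneck node.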
Algorithms for Determining Physical Responses of Structures Under Load
NASA Technical Reports Server (NTRS)
Richards, W. Lance; Ko, William L.
2012-01-01
Ultra-efficient real-time structural monitoring algorithms have been developed to provide extensive information about the physical response of structures under load. These algorithms are driven by actual strain data to measure accurately local strains at multiple locations on the surface of a structure. Through a single point load calibration test, these structural strains are then used to calculate key physical properties of the structure at each measurement location. Such properties include the structure's flexural rigidity (the product of the structure's modulus of elasticity and its moment of inertia) and the section modulus (the moment of inertia divided by the structure's half-depth). The resulting structural properties at each location can be used to determine the structure's bending moment, shear, and structural loads in real time while the structure is in service. The amount of structural information can be maximized through the use of highly multiplexed fiber Bragg grating technology using optical time domain reflectometry and optical frequency domain reflectometry, which can provide a local strain measurement every 10 mm on a single hair-sized optical fiber. Since local strain is used as input to the algorithms, this system serves multiple purposes of measuring strains and displacements, as well as determining structural bending moment, shear, and loads for assessing real-time structural health. The first step is to install a series of strain sensors on the structure's surface in such a way as to measure bending strains at desired locations. The next step is to perform a simple ground test calibration. For a beam of length l (see example), discretized into n sections and subjected to a tip load of P that places the beam in bending, the flexural rigidity of the beam can be experimentally determined at each measurement location x. The bending moment at each station can then be determined for any general set of loads applied during operation.
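The calibration step in the beam example above can be sketched with ordinary beam theory. The symbols (P, l, c) are the usual textbook ones and the numbers are synthetic, not taken from the article:

```python
def calibrate_flexural_rigidity(strains, stations, length, tip_load, half_depth):
    """Single-point-load calibration: under a tip load P, the bending moment
    at station x is M(x) = P*(l - x), and surface strain is eps = M*c/(E*I),
    so the local flexural rigidity is E*I = P*(l - x)*c/eps."""
    return [tip_load * (length - x) * half_depth / eps
            for x, eps in zip(stations, strains)]

def bending_moment(strain, EI, half_depth):
    """In service, a measured strain at a calibrated station gives the local
    bending moment directly: M = E*I*eps/c."""
    return EI * strain / half_depth

# Synthetic uniform beam: EI = 2.0e6 N*m^2, l = 2 m, half-depth c = 0.05 m,
# calibrated with a 100 N tip load at four stations.
EI_true, length, c, P = 2.0e6, 2.0, 0.05, 100.0
stations = [0.0, 0.5, 1.0, 1.5]
strains = [P * (length - x) * c / EI_true for x in stations]  # ideal gage readings
EI_est = calibrate_flexural_rigidity(strains, stations, length, P, c)
root_moment = bending_moment(strains[0], EI_est[0], c)
```

Once E*I is known at each station, any in-service strain reading converts to a bending moment without knowing the applied loads in advance, which is the real-time monitoring capability the abstract describes.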
Experience with automatic, dynamic load balancing and adaptive finite element computation
Wheat, S.R.; Devine, K.D.; Maccabe, A.B.
1993-10-01
Distributed memory, Massively Parallel (MP), MIMD technology has enabled the development of applications requiring computational resources previously unobtainable. Structural mechanics and fluid dynamics applications, for example, are often solved by finite element methods (FEMs) requiring millions of degrees of freedom to accurately simulate physical phenomena. Adaptive methods, which automatically refine or coarsen meshes and vary the order of accuracy of the numerical solution, offer greater robustness and computational efficiency than traditional FEMs by reducing the amount of computation required away from physical structures such as shock waves and boundary layers. On MP computers, FEMs frequently result in distributed processor load imbalances. To overcome load imbalance, many MP FEMs use static load balancing as a preprocessor to the finite element calculation. Adaptive methods complicate the load imbalance problem since the work per element is not uniform across the solution domain and changes as the computation proceeds. Therefore, dynamic load balancing is required to maintain global load balance. We describe a dynamic, fine-grained, element-based data migration system that maintains global load balance and is effective in the presence of changing work loads. Global load balance is achieved by overlapping neighborhoods of processors, where each neighborhood performs local load balancing. The method utilizes an automatic element management system library to which a programmer integrates the application's computational description. The library's flexibility supports a large class of finite element and finite difference based applications.
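The overlapping-neighborhood idea can be illustrated with a minimal diffusion-style sketch, assuming a simple synchronous exchange rule rather than the library's actual element-migration mechanism: each processor repeatedly trades a fraction of its load difference with its neighbors, and because neighborhoods overlap, purely local moves drive the global distribution toward balance.

```python
def neighborhood_balance(loads, neighbors, alpha=0.25, steps=200):
    # each sweep, processor i exchanges a fraction alpha of its load
    # difference with every neighbor; overlapping neighborhoods let
    # local corrections propagate into a global balance
    loads = list(loads)
    for _ in range(steps):
        new = list(loads)
        for i, nbrs in enumerate(neighbors):
            for j in nbrs:
                new[i] += alpha * (loads[j] - loads[i])
        loads = new
    return loads
```

The symmetric exchange conserves total load exactly; alpha must be small enough for the sweep to be stable (here 0.25 for a degree-2 topology).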
Genetic Algorithm based Decentralized PI Type Controller: Load Frequency Control
NASA Astrophysics Data System (ADS)
Dwivedi, Atul; Ray, Goshaidas; Sharma, Arun Kumar
2016-12-01
This work presents the design of a decentralized PI-type Linear Quadratic (LQ) controller based on a genetic algorithm (GA). The proposed design technique allows considerable flexibility in defining the control objectives; it requires no knowledge of the system matrices and avoids solving the algebraic Riccati equation. To illustrate the results of this work, a load-frequency control problem is considered. Simulation results reveal that the proposed GA-based scheme is an attractive alternative approach to the load-frequency control problem from both performance and design points of view.
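A toy version of such a GA-based tuning loop might look like the following; the one-area model constants, the elitist population scheme, and the integral-squared-error cost are all assumptions for illustration, not the paper's formulation.

```python
import random

def simulate(kp, ki, dt=0.01, horizon=10.0):
    # one-area model: M*df/dt = -D*f + u - dP, PI control u = -(kp*f + ki*z)
    M, D, dP = 10.0, 1.0, 1.0
    f = z = 0.0
    cost = 0.0
    for _ in range(int(horizon / dt)):
        u = -(kp * f + ki * z)
        f += dt * (-D * f + u - dP) / M
        z += dt * f
        cost += dt * f * f   # integral of the squared frequency deviation
    return cost

def ga_tune(pop=20, gens=30, seed=1):
    # elitist GA: keep the best half, refill with Gaussian mutations of it
    rng = random.Random(seed)
    genomes = [(rng.uniform(0, 20), rng.uniform(0, 20)) for _ in range(pop)]
    for _ in range(gens):
        genomes.sort(key=lambda g: simulate(*g))
        elite = genomes[:pop // 2]
        genomes = elite + [(max(0.0, kp + rng.gauss(0, 1)),
                            max(0.0, ki + rng.gauss(0, 1)))
                           for kp, ki in elite]
    return min(genomes, key=lambda g: simulate(*g))
```

The point of the sketch is the one highlighted in the abstract: the GA only ever calls the simulator, so no system matrices or Riccati solution are needed.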
Indirect addressing and load balancing for faster solution to Mandelbrot Set on SIMD architectures
NASA Technical Reports Server (NTRS)
Tomboulian, Sherryl
1989-01-01
SIMD computers with local indirect addressing allow programs to have queues and buffers, making certain kinds of problems much more efficient. Examined here is a class of problems characterized by computations on data points where the computation is identical but the convergence rate is data dependent. Normally, in this situation, the algorithm time is governed by the maximum number of iterations required by any point. Using indirect addressing allows a processor to proceed to the next data point when it finishes, so that the overall iteration count approaches the mean, rather than the maximum, convergence rate when a sufficiently large problem set is solved. Load balancing techniques can be applied for additional performance improvement. Simulations of this technique applied to solving Mandelbrot sets indicate significant performance gains.
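The queue-driven iteration scheme can be simulated in a few lines; `mandel_steps` is a hypothetical name, and the lockstep loop stands in for the SIMD hardware.

```python
from collections import deque

def mandel_steps(points, lanes, max_iter=256):
    # each lane iterates z <- z*z + c in lockstep; a lane whose point has
    # escaped (or hit max_iter) immediately dequeues the next point,
    # which is the role played by indirect addressing on the SIMD machine
    queue = deque(points)
    active = []                       # [z, c, iterations] per busy lane
    steps = 0
    while queue or active:
        while len(active) < lanes and queue:
            active.append([0j, queue.popleft(), 0])
        steps += 1                    # one lockstep SIMD iteration
        kept = []
        for z, c, it in active:
            z = z * z + c
            it += 1
            if abs(z) <= 2.0 and it < max_iter:
                kept.append([z, c, it])
        active = kept
    return steps
```

With one slow point and several fast-diverging points, the total step count collapses to the slow point's iteration count instead of one full max_iter pass per batch of lanes.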
Comparison of Iterative and Non-Iterative Strain-Gage Balance Load Calculation Methods
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2010-01-01
The accuracy of iterative and non-iterative strain-gage balance load calculation methods was compared using data from the calibration of a force balance. Two iterative and one non-iterative method were investigated. In addition, transformations were applied to balance loads in order to process the calibration data in both direct read and force balance format. NASA's regression model optimization tool BALFIT was used to generate optimized regression models of the calibration data for each of the three load calculation methods. This approach made sure that the selected regression models met strict statistical quality requirements. The comparison of the standard deviation of the load residuals showed that the first iterative method may be applied to data in both the direct read and force balance format. The second iterative method, on the other hand, implicitly assumes that the primary gage sensitivities of all balance gages exist. Therefore, the second iterative method only works if the given balance data is processed in force balance format. The calibration data set was also processed using the non-iterative method. Standard deviations of the load residuals for the three load calculation methods were compared. Overall, the standard deviations show very good agreement. The load prediction accuracies of the three methods appear to be compatible as long as regression models used to analyze the calibration data meet strict statistical quality requirements. Recent improvements of the regression model optimization tool BALFIT are also discussed in the paper.
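The general shape of an iterative load calculation can be sketched as a fixed-point iteration that inverts the linear (primary-sensitivity) part of the regression model and moves the small nonlinear terms to the right-hand side. The 2x2 system and its coefficients below are invented for illustration and are not BALFIT's models.

```python
def solve2(B, r):
    # direct 2x2 solve, standing in for inverting the sensitivity matrix
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    return [(B[1][1] * r[0] - B[0][1] * r[1]) / det,
            (B[0][0] * r[1] - B[1][0] * r[0]) / det]

def iterate_loads(r, B, nonlin, iters=50):
    # F_{k+1} = B^{-1} (r - N(F_k)): the linear part is inverted exactly,
    # the small nonlinear regression terms are lagged one iteration behind
    F = solve2(B, r)
    for _ in range(iters):
        n = nonlin(F)
        F = solve2(B, [r[0] - n[0], r[1] - n[1]])
    return F
```

Because the nonlinear terms are small relative to the primary sensitivities, the iteration is a contraction and converges in a handful of steps.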
Influence of Primary Gage Sensitivities on the Convergence of Balance Load Iterations
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert Manfred
2012-01-01
The connection between the convergence of wind tunnel balance load iterations and the existence of the primary gage sensitivities of a balance is discussed. First, basic elements of two load iteration equations that the iterative method uses in combination with results of a calibration data analysis for the prediction of balance loads are reviewed. Then, the connection between the primary gage sensitivities, the load format, the gage output format, and the convergence characteristics of the load iteration equation choices is investigated. A new criterion is also introduced that may be used to objectively determine if the primary gage sensitivity of a balance gage exists. Then, it is shown that both load iteration equations will converge as long as a suitable regression model is used for the analysis of the balance calibration data, the combined influence of nonlinear terms of the regression model is very small, and the primary gage sensitivities of all balance gages exist. The last requirement is fulfilled, e.g., if force balance calibration data is analyzed in force balance format. Finally, it is demonstrated that only one of the two load iteration equation choices, i.e., the iteration equation used by the primary load iteration method, converges if one or more primary gage sensitivities are missing. This situation may occur, e.g., if force balance calibration data is analyzed in direct read format using the original gage outputs. Data from the calibration of a six-component force balance is used to illustrate the connection between the convergence of the load iteration equation choices and the existence of the primary gage sensitivities.
Assessment of the Uniqueness of Wind Tunnel Strain-Gage Balance Load Predictions
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2016-01-01
A new test was developed to assess the uniqueness of wind tunnel strain-gage balance load predictions that are obtained from regression models of calibration data. The test helps balance users to gain confidence in load predictions of non-traditional balance designs. It also makes it possible to better evaluate load predictions of traditional balances that are not used as originally intended. The test works for both the Iterative and Non-Iterative Methods that are used in the aerospace testing community for the prediction of balance loads. It is based on the hypothesis that the total number of independently applied balance load components must always match the total number of independently measured bridge outputs or bridge output combinations. This hypothesis is supported by a control volume analysis of the inputs and outputs of a strain-gage balance. It is concluded from the control volume analysis that the loads and bridge outputs of a balance calibration data set must separately be tested for linear independence because it cannot always be guaranteed that a linearly independent load component set will result in linearly independent bridge output measurements. Simple linear math models for the loads and bridge outputs in combination with the variance inflation factor are used to test for linear independence. A highly unique and reversible mapping between the applied load component set and the measured bridge output set is guaranteed to exist if the maximum variance inflation factor of both sets is less than the literature recommended threshold of five. Data from the calibration of a six-component force balance is used to illustrate the application of the new test to real-world data.
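The variance-inflation-factor test can be sketched in pure Python; this is a generic VIF computation over the columns of a data set, not BALFIT's implementation.

```python
def _lstsq(X, y):
    # solve the normal equations (X^T X) b = X^T y by Gauss-Jordan elimination
    n = len(X[0])
    A = [[sum(row[i] * row[j] for row in X) for j in range(n)]
         + [sum(row[i] * yi for row, yi in zip(X, y))] for i in range(n)]
    for i in range(n):
        p = max(range(i, n), key=lambda k: abs(A[k][i]))
        A[i], A[p] = A[p], A[i]
        for k in range(n):
            if k != i and A[i][i] != 0.0:
                f = A[k][i] / A[i][i]
                A[k] = [a - f * b for a, b in zip(A[k], A[i])]
    return [A[i][n] / A[i][i] for i in range(n)]

def max_vif(cols):
    # VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing
    # column j on all the other columns plus an intercept
    m = len(cols[0])
    worst = 0.0
    for j, y in enumerate(cols):
        X = [[cols[k][r] for k in range(len(cols)) if k != j] + [1.0]
             for r in range(m)]
        b = _lstsq(X, y)
        resid = [yi - sum(bi * xi for bi, xi in zip(b, row))
                 for row, yi in zip(X, y)]
        ybar = sum(y) / m
        ss_tot = sum((yi - ybar) ** 2 for yi in y)
        r2 = 1.0 - sum(e * e for e in resid) / ss_tot
        worst = max(worst, 1.0 / max(1.0 - r2, 1e-12))
    return worst
```

Applying `max_vif` separately to the load columns and to the bridge-output columns, and comparing each result against the threshold of five, mirrors the independence test described above.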
Assessment of New Load Schedules for the Machine Calibration of a Force Balance
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Gisler, R.; Kew, R.
2015-01-01
New load schedules for the machine calibration of a six-component force balance are currently being developed and evaluated at the NASA Ames Balance Calibration Laboratory. One of the proposed load schedules is discussed in the paper. It has a total of 2082 points that are distributed across 16 load series. Several criteria were applied to define the load schedule. It was decided, for example, to specify the calibration load set in force balance format as this approach greatly simplifies the definition of the lower and upper bounds of the load schedule. In addition, all loads are assumed to be applied in a calibration machine by using the one-factor-at-a-time approach. At first, all single-component loads are applied in six load series. Then, three two-component load series are applied. They consist of the load pairs (N1, N2), (S1, S2), and (RM, AF). Afterwards, four three-component load series are applied. They consist of the combinations (N1, N2, AF), (S1, S2, AF), (N1, N2, RM), and (S1, S2, RM). In the next step, one four-component load series is applied. It is the load combination (N1, N2, S1, S2). Finally, two five-component load series are applied. They are the load combinations (N1, N2, S1, S2, AF) and (N1, N2, S1, S2, RM). The maximum difference between loads of two subsequent data points of the load schedule is limited to 33% of capacity. This constraint helps avoid unwanted load "jumps" in the load schedule that can have a negative impact on the performance of a calibration machine. Only the single- and two-component load series are loaded to 100% of capacity. This approach was selected because it keeps the total number of calibration points to a reasonable limit while still allowing for the application of some of the more complex load combinations. Data from two of NASA's force balances is used to illustrate important characteristics of the proposed 2082-point calibration load schedule.
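A candidate schedule can be checked against the 33%-of-capacity step constraint with a short helper; the function name and the list-of-points data layout are assumptions for illustration.

```python
def max_step_fraction(points, capacities):
    # largest change of any load component between consecutive schedule
    # points, expressed as a fraction of that component's capacity
    worst = 0.0
    for prev, cur in zip(points, points[1:]):
        for a, b, cap in zip(prev, cur, capacities):
            worst = max(worst, abs(b - a) / cap)
    return worst
```

A schedule passes the constraint when the returned fraction does not exceed 0.33.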
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Simon, Horst D.; Sohn, Andrew
1996-01-01
The computational requirements for an adaptive solution of unsteady problems change as the simulation progresses. This causes workload imbalance among processors on a parallel machine which, in turn, requires significant data movement at runtime. We present a new dynamic load-balancing framework, called JOVE, that balances the workload across all processors with a global view. Whenever the computational mesh is adapted, JOVE is activated to eliminate the load imbalance. JOVE has been implemented on an IBM SP2 distributed-memory machine in MPI for portability. Experimental results for two model meshes demonstrate that mesh adaption with load balancing gives more than a sixfold improvement over one without load balancing. We also show that JOVE gives a 24-fold speedup on 64 processors compared to sequential execution.
Fast computing global structural balance in signed networks based on memetic algorithm
NASA Astrophysics Data System (ADS)
Sun, Yixiang; Du, Haifeng; Gong, Maoguo; Ma, Lijia; Wang, Shanfeng
2014-12-01
Structural balance is a large area of study in signed networks, and it is intrinsically a global property of the whole network. Computing global structural balance in signed networks, which has attracted some attention in recent years, means measuring how unbalanced a signed network is, and it is a nondeterministic polynomial-time hard problem. Many approaches have been developed to compute global balance, but the results they obtain are partial and unsatisfactory. In this study, the computation of global structural balance is solved as an optimization problem by using the Memetic Algorithm. The optimization algorithm, named Meme-SB, is proposed to optimize an evaluation function, the energy function, which is used to compute a distance to exact balance. Our proposed algorithm combines a Genetic Algorithm and a greedy strategy as the local search procedure. Experiments on social and biological networks show the excellent effectiveness and efficiency of the proposed method.
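The energy function and a greedy local-search step (the local component of a memetic scheme, minus the genetic part) can be sketched as follows, counting unbalanced edges under a two-cluster node assignment; the representation of edges as (i, j, sign) triples is an assumption for illustration.

```python
def energy(edges, part):
    # an edge (i, j, sign) is unbalanced when sign * s_i * s_j = -1,
    # i.e. a negative edge inside a cluster or a positive edge across
    return sum(1 for i, j, s in edges if s * part[i] * part[j] < 0)

def greedy_balance(edges, n):
    # local search: flip one node's cluster whenever that lowers the energy
    part = [1] * n
    best = energy(edges, part)
    improved = True
    while improved:
        improved = False
        for v in range(n):
            part[v] = -part[v]
            e = energy(edges, part)
            if e < best:
                best, improved = e, True
            else:
                part[v] = -part[v]
    return best, part
```

A balanced network reaches energy 0; a frustrated triangle (two positive edges, one negative) is stuck at energy 1, its true distance to exact balance.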
Balancing the Load: How to Engage Counselors in School Improvement
ERIC Educational Resources Information Center
Mallory, Barbara J.; Jackson, Mary H.
2007-01-01
Principals cannot lead the school improvement process alone. They must enlist the help of others in the school community. School counselors, whose role is often viewed as peripheral and isolated from teaching and learning, can help principals, teachers, students, and parents balance the duties and responsibilities involved in continuous student…
Dynamic Load Balancing Data Centric Storage for Wireless Sensor Networks
Song, Seokil; Bok, Kyoungsoo; Kwak, Yun Sik; Goo, Bongeun; Kwak, Youngsik; Ko, Daesik
2010-01-01
In this paper, a new data centric storage scheme that dynamically adapts to workload changes is proposed. The proposed data centric storage distributes the load of hot spot areas to neighboring sensor nodes by using a multilevel grid technique. The proposed method is also able to use existing routing protocols such as GPSR (Greedy Perimeter Stateless Routing) with small changes. In simulation, the proposed method extends the lifetime of sensor networks relative to one of the state-of-the-art data centric storage schemes. We implement the proposed method on top of an operating system for sensor networks and evaluate its performance using a simulation tool. PMID:22163472
Using Multi-threading for the Automatic Load Balancing of 2D Adaptive Finite Element Meshes
NASA Technical Reports Server (NTRS)
Heber, Gerd; Biswas, Rupak; Thulasiraman, Parimala; Gao, Guang R.; Saini, Subhash (Technical Monitor)
1998-01-01
In this paper, we present a multi-threaded approach for the automatic load balancing of adaptive finite element (FE) meshes. The platform of our choice is the EARTH multi-threaded system, which offers sufficient capabilities to tackle this problem. We implement the adaption phase of FE applications on triangular meshes and exploit the EARTH token mechanism to automatically balance the resulting irregular and highly nonuniform workload. We discuss the results of our experiments on EARTH-SP2, an implementation of EARTH on the IBM SP2, with different load balancing strategies that are built into the runtime system.
Optimal scheduling and balancing of multiprogrammed loads over heterogeneous processors
Haddad, E.
1995-12-01
The serial/parallel allocation of multiprogrammed load modules among heterogeneous virtual memory processors is formulated as an instance of the separable resource allocation problem with nonconvex functions. Mild conditions on the functions lead to an efficient solution in constant time.
NASA Astrophysics Data System (ADS)
Furuichi, M.; Nishiura, D.
2015-12-01
Fully Lagrangian methods such as Smoothed Particle Hydrodynamics (SPH) and the Discrete Element Method (DEM) have been widely used to solve continuum and particle motions in computational geodynamics. These mesh-free methods are well suited to problems with complex geometries and boundaries. In addition, their Lagrangian nature allows non-diffusive advection, useful for tracking history-dependent properties (e.g. rheology) of the material. These potential advantages over mesh-based methods offer effective numerical applications to geophysical flows and tectonic processes, for example, tsunami with free surfaces and floating bodies, magma intrusion with fracturing of rock, and shear-zone pattern generation in granular deformation. In order to investigate such geodynamical problems with particle-based methods, millions to billions of particles are required for a realistic simulation. Parallel computing is therefore important for handling such a huge computational cost. An efficient parallel implementation of the SPH and DEM methods is, however, known to be difficult, especially on distributed-memory architectures. Lagrangian methods inherently exhibit a workload imbalance problem when parallelized with domains fixed in space, because particles move around and workloads change during the simulation. Dynamic load balance is therefore a key technique for performing large-scale SPH and DEM simulations. In this work, we present a parallel implementation technique for the SPH and DEM methods utilizing dynamic load balancing algorithms toward high-resolution simulation over a large domain on a massively parallel supercomputer system. Our method treats the imbalance in the measured execution time of each MPI process as the nonlinear term of the parallel domain decomposition and minimizes it with a Newton-like iteration method. In order to perform flexible domain decomposition in space, the slice-grid algorithm is used. Numerical tests show that our
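The slice-boundary adjustment can be illustrated with a one-dimensional sketch that moves cuts inversely to each slice's measured cost density; this is a simplification of the Newton-like iteration described above, and the function name is illustrative.

```python
def rebalance(length, bounds, times):
    # current slice widths and their measured per-unit cost density
    edges = [0.0] + bounds + [length]
    widths = [b - a for a, b in zip(edges, edges[1:])]
    density = [t / w for t, w in zip(times, widths)]
    # new widths inversely proportional to local cost density, so a slice
    # that ran slowly (high density) shrinks and a fast one grows
    raw = [1.0 / d for d in density]
    scale = length / sum(raw)
    new_bounds, x = [], 0.0
    for r in raw[:-1]:
        x += r * scale
        new_bounds.append(x)
    return new_bounds
```

Repeating this after each measurement interval tracks particles as they migrate, which is the dynamic part of the balancing.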
Thulasidasan, Sunil; Kasiviswanathan, Shiva; Eidenbenz, Stephan; Romero, Philip
2010-01-01
We re-examine the problem of load balancing in conservatively synchronized parallel, discrete-event simulations executed on high-performance computing clusters, focusing on simulations where computational and messaging load tend to be spatially clustered. Such domains are frequently characterized by the presence of geographic 'hot-spots' - regions that generate significantly more simulation events than others. Examples of such domains include simulation of urban regions, transportation networks and networks where interaction between entities is often constrained by physical proximity. Noting that in conservatively synchronized parallel simulations, the speed of execution of the simulation is determined by the slowest (i.e., most heavily loaded) simulation process, we study different partitioning strategies in achieving equitable processor-load distribution in domains with spatially clustered load. In particular, we study the effectiveness of partitioning via spatial scattering to achieve optimal load balance. In this partitioning technique, nearby entities are explicitly assigned to different processors, thereby scattering the load across the cluster. This is motivated by two observations, namely, (i) since load is spatially clustered, spatial scattering should, intuitively, spread the load across the compute cluster, and (ii) in parallel simulations, equitable distribution of CPU load is a greater determinant of execution speed than message passing overhead. Through large-scale simulation experiments - both of abstracted and real simulation models - we observe that scatter partitioning, even with its greatly increased messaging overhead, significantly outperforms more conventional spatial partitioning techniques that seek to reduce messaging overhead. Further, even if hot-spots change over the course of the simulation, if the underlying feature of spatial clustering is retained, load continues to be balanced with spatial scattering leading us to the observation that
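Scatter partitioning itself is simple to express: sort entities spatially, then deal them out round-robin, so that neighboring (and thus jointly hot) entities land on different processors. The point weights and geometry below are illustrative, not the paper's models.

```python
def scatter_owner(points, nproc):
    # deal spatially sorted entities out round-robin: adjacent entities
    # land on different processors, spreading any geographic hot-spot
    order = sorted(range(len(points)), key=lambda i: points[i])
    owner = [0] * len(points)
    for pos, idx in enumerate(order):
        owner[idx] = pos % nproc
    return owner

def proc_loads(owner, weights, nproc):
    loads = [0] * nproc
    for o, w in zip(owner, weights):
        loads[o] += w
    return loads
```

With a hot-spot on one side of the domain, a contiguous block partition piles all of the heavy entities onto one processor, while scattering equalizes the per-processor load at the cost of more cross-processor messages.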
Sivakumar, B.; Bhalaji, N.; Sivakumar, D.
2014-01-01
In mobile ad hoc networks, connectivity is always an issue of concern. Due to the dynamism in the behavior of mobile nodes, efficiency can be achieved only under the assumption of good network infrastructure. The presence of critical links results in deterioration, which should be detected in advance to retain the prevailing communication setup. This paper presents a short survey of the specialized algorithms and protocols in the recent literature related to energy-efficient load balancing for critical link detection. This paper also suggests a machine learning based hybrid power-aware approach for handling critical nodes via load balancing. PMID:24790546
Coupling Algorithms for Calculating Sensitivities of Population Balances
Man, P. L. W.; Kraft, M.; Norris, J. R.
2008-09-01
We introduce a new class of stochastic algorithms for calculating parametric derivatives of the solution of the space-homogeneous Smoluchowski coagulation equation. Currently, it is very difficult to produce low-variance estimates of these derivatives in reasonable amounts of computational time through the use of stochastic methods. These new algorithms consider a central difference estimator of the parametric derivative which is calculated by evaluating the coagulation equation at two different parameter values simultaneously, achieving variance reduction by maximising the covariance between these evaluations. The two coupling strategies ('Single' and 'Double') have been compared to the case with no coupling ('Independent'). Both coupling algorithms converge, and the Double coupling is the most efficient algorithm. For the numerical example chosen, we obtain a factor of about 100 in efficiency in the best case (small system evolution time and small parameter perturbation).
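The variance-reduction effect of coupling can be demonstrated with a toy surrogate for the stochastic solver; here X(theta, u) = theta * u stands in for one simulation run driven by random input u, and only a common-random-numbers coupling in the spirit of the 'Single' strategy is sketched.

```python
import random

def deriv_samples(theta, h, n, coupled, seed=7):
    # per-sample central difference (X(theta+h) - X(theta-h)) / (2h);
    # 'coupled' reuses the same random input at both parameter values
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        u_plus = rng.random()
        u_minus = u_plus if coupled else rng.random()
        out.append(((theta + h) * u_plus - (theta - h) * u_minus) / (2 * h))
    return out

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
```

Because the coupled estimator differences nearly identical trajectories, its variance stays O(1) as h shrinks, while the independent estimator's variance blows up like 1/h^2, which is exactly the regime (small parameter perturbation) where the paper reports the largest gains.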
Accelerating Multiple Compound Comparison Using LINGO-Based Load-Balancing Strategies on Multi-GPUs
Lin, Chun-Yuan; Wang, Chung-Hung; Hung, Che-Lun; Lin, Yu-Shiang
2015-01-01
Compound comparison is an important task in computational chemistry. From the comparison results, potential inhibitors can be found and then used in pharmacy experiments. The time complexity of a pairwise compound comparison is O(n²), where n is the maximal length of compounds. In general, the length of compounds is tens to hundreds, and the computation time is small. However, more and more compounds have been synthesized and extracted, now numbering in the tens of millions, so comparison against a large collection of compounds (the multiple compound comparison problem, abbreviated MCC) remains time-consuming. The intrinsic time complexity of the MCC problem is O(k²n²) with k compounds of maximal length n. In this paper, we propose a GPU-based algorithm for the MCC problem, called CUDA-MCC, on single- and multi-GPUs. Four LINGO-based load-balancing strategies are considered in CUDA-MCC in order to accelerate the computation among thread blocks on GPUs. CUDA-MCC was implemented in C+OpenMP+CUDA. In our experiments, CUDA-MCC achieved 45 times and 391 times speedups over its CPU version on a single NVIDIA Tesla K20m GPU card and a dual-NVIDIA Tesla K20m GPU card, respectively. PMID:26491652
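A LINGO similarity and a simple load-balancing strategy might be sketched as follows; the q-gram length of 4 follows the usual LINGO definition, while `balance_pairs` is an illustrative longest-processing-time stand-in for the paper's four GPU strategies.

```python
from collections import Counter

def lingos(smiles, q=4):
    # multiset of overlapping q-character substrings of a SMILES string
    return Counter(smiles[i:i + q] for i in range(len(smiles) - q + 1))

def lingo_tanimoto(a, b):
    # Tanimoto over LINGO multisets: |A & B| / |A | B|
    A, B = lingos(a), lingos(b)
    union = sum((A | B).values())
    return sum((A & B).values()) / union if union else 1.0

def balance_pairs(lengths, blocks):
    # LPT-style strategy: hand the costliest remaining pair (cost ~ n_i*n_j)
    # to the currently least-loaded thread block
    pairs = [(i, j) for i in range(len(lengths))
             for j in range(i + 1, len(lengths))]
    pairs.sort(key=lambda p: lengths[p[0]] * lengths[p[1]], reverse=True)
    loads = [0] * blocks
    for i, j in pairs:
        k = min(range(blocks), key=loads.__getitem__)
        loads[k] += lengths[i] * lengths[j]
    return loads
```

Sorting by estimated pair cost before assignment keeps per-block work even when compound lengths vary widely.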
Load-balancing techniques for a parallel electromagnetic particle-in-cell code
Plimpton, Steven J.; Seidel, David B.; Pasik, Michael F.; Coats, Rebecca S.
2000-01-01
QUICKSILVER is a 3-d electromagnetic particle-in-cell simulation code developed and used at Sandia to model relativistic charged particle transport. It models the time response of electromagnetic fields and low-density plasmas in a self-consistent manner: the fields push the plasma particles and the plasma current modifies the fields. Through an LDRD project a new parallel version of QUICKSILVER was created to enable large-scale plasma simulations to be run on massively-parallel distributed-memory supercomputers with thousands of processors, such as the Intel Tflops and DEC CPlant machines at Sandia. The new parallel code implements nearly all the features of the original serial QUICKSILVER and can be run on any platform which supports the message-passing interface (MPI) standard as well as on single-processor workstations. This report describes basic strategies useful for parallelizing and load-balancing particle-in-cell codes, outlines the parallel algorithms used in this implementation, and provides a summary of the modifications made to QUICKSILVER. It also highlights a series of benchmark simulations which have been run with the new code that illustrate its performance and parallel efficiency. These calculations have up to a billion grid cells and particles and were run on thousands of processors. This report also serves as a user manual for people wishing to run parallel QUICKSILVER.
NASA Technical Reports Server (NTRS)
Krasteva, Denitza T.
1998-01-01
Multidisciplinary design optimization (MDO) for large-scale engineering problems poses many challenges (e.g., the design of an efficient concurrent paradigm for global optimization based on disciplinary analyses, expensive computations over vast data sets, etc.). This work focuses on the application of distributed schemes for massively parallel architectures to MDO problems, as a tool for reducing computation time and solving larger problems. The specific problem considered here is configuration optimization of a high speed civil transport (HSCT), and the efficient parallelization of the embedded paradigm for reasonable design space identification. Two distributed dynamic load balancing techniques (random polling and global round robin with message combining) and two necessary termination detection schemes (global task count and token passing) were implemented and evaluated in terms of effectiveness and scalability to large problem sizes and a thousand processors. The effect of certain parameters on execution time was also inspected. Empirical results demonstrated stable performance and effectiveness for all schemes, and the parametric study showed that the selected algorithmic parameters have a negligible effect on performance.
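Random polling with a global task count can be mimicked in a serial round-based simulation; the loop below is a sketch under those assumptions, not the distributed implementation evaluated in the paper.

```python
import random

def random_polling(initial, seed=3):
    # each round a busy worker runs one local task; an idle worker polls
    # one random victim and, on success, executes one of its tasks;
    # a global task count provides the termination test
    rng = random.Random(seed)
    queues = [list(q) for q in initial]
    remaining = sum(len(q) for q in queues)
    done = [0] * len(queues)
    while remaining:                  # global task count reaches zero -> stop
        for w, q in enumerate(queues):
            if q:
                q.pop()
            else:
                victim = rng.randrange(len(queues))
                if not queues[victim]:
                    continue          # poll missed: stay idle this round
                queues[victim].pop()
            done[w] += 1
            remaining -= 1
    return done
```

In a real distributed setting the task count must itself be maintained without a shared counter, which is why the paper pairs the balancing schemes with explicit termination detection (global task count or token passing).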
A balanced decomposition algorithm for parallel solutions of very large sparse systems
Zecevic, A.I.; Siljak, D.D.
1995-12-01
In this paper we present an algorithm for balanced bordered block diagonal (BBD) decompositions of very large symmetric positive definite or diagonally dominant sparse matrices. The algorithm is a generalization of a previously described method and is primarily aimed at parallel solutions of very large sparse systems (> 20,000 equations). A variety of experimental results are provided to illustrate the performance of the algorithm and demonstrate its potential for computing on massively parallel architectures.
A model for resource-aware load balancing on heterogeneous clusters.
Devine, Karen Dragon; Flaherty, Joseph E.; Teresco, James Douglas; Gervasio, Luis G.; Faik, Jamal
2005-05-01
We address the problem of partitioning and dynamic load balancing on clusters with heterogeneous hardware resources. We propose DRUM, a model that encapsulates hardware resources and their interconnection topology. DRUM provides monitoring facilities for dynamic evaluation of communication, memory, and processing capabilities. Heterogeneity is quantified by merging the information from the monitors to produce a scalar number called 'power.' This power allows DRUM to be used easily by existing load-balancing procedures such as those in the Zoltan Toolkit while placing minimal burden on application programmers. We demonstrate the use of DRUM to guide load balancing in the adaptive solution of a Laplace equation on a heterogeneous cluster. We observed a significant reduction in execution time compared to traditional methods.
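The merge-to-a-scalar idea can be sketched as a weighted combination of monitor readings; the weights and the linear combination below are illustrative, not DRUM's actual rule.

```python
def power(cpu, mem, comm, weights=(0.6, 0.2, 0.2)):
    # merge the monitors' readings into one scalar 'power' value;
    # the weights here are illustrative assumptions
    return weights[0] * cpu + weights[1] * mem + weights[2] * comm

def work_shares(nodes):
    # fraction of the work each node should own, proportional to its power,
    # which is the single number a load balancer like Zoltan can consume
    p = [power(*n) for n in nodes]
    total = sum(p)
    return [x / total for x in p]
```

The appeal of the scalar is exactly what the abstract emphasizes: an existing homogeneous partitioner only needs per-part target sizes, so heterogeneity is hidden behind one number per node.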
The Balanced Cross-Layer Design Routing Algorithm in Wireless Sensor Networks Using Fuzzy Logic.
Li, Ning; Martínez, José-Fernán; Hernández Díaz, Vicente
2015-08-10
Recently, the cross-layer design for the wireless sensor network communication protocol has become more and more important and popular. Considering the disadvantages of the traditional cross-layer routing algorithms, in this paper we propose a new fuzzy logic-based routing algorithm, named the Balanced Cross-layer Fuzzy Logic (BCFL) routing algorithm. In BCFL, we use the cross-layer parameters' dispersion as the fuzzy logic inference system inputs. Moreover, we give each cross-layer parameter a dynamic weight according to the value of the dispersion. To obtain a balanced solution, a parameter whose dispersion is large is given a small weight, and vice versa. In order to compare it with the traditional cross-layer routing algorithms, BCFL is evaluated through extensive simulations. The simulation results show that the new routing algorithm can handle the multiple constraints without increasing the complexity of the algorithm and can achieve the most balanced performance in selecting the next-hop relay node. Moreover, the Balanced Cross-layer Fuzzy Logic routing algorithm can adapt to dynamic changes in the network conditions and topology effectively. PMID:26266412
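The dispersion-based weighting can be sketched directly; the inverse-dispersion rule below is a simplification of the fuzzy inference described in the abstract, and the function name is illustrative.

```python
def dispersion_weights(metrics):
    # metrics[k] = one cross-layer parameter's values over candidate relays;
    # a widely spread (high-dispersion) parameter gets a small weight,
    # a stable one a large weight, as in BCFL's balancing rule
    def dispersion(xs):
        mean = sum(xs) / len(xs)
        return (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5
    inv = [1.0 / (dispersion(xs) + 1e-9) for xs in metrics]
    total = sum(inv)
    return [x / total for x in inv]
```

The weights are normalized to sum to one, so they can be applied directly when scoring candidate next-hop relays.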
A Framework for Load Balancing of Tensor Contraction Expressions via Dynamic Task Partitioning
Lai, Pai-Wei; Stock, Kevin; Rajbhandari, Samyam; Krishnamoorthy, Sriram; Sadayappan, Ponnuswamy
2013-11-17
In this paper, we introduce the Dynamic Load-balanced Tensor Contractions (DLTC), a domain-specific library for efficient task-parallel execution of tensor contraction expressions, a class of computation encountered in quantum chemistry and physics. Our framework decomposes each contraction into smaller units of tasks, represented by an abstraction referred to as iterators. We exploit an extra level of parallelism by having tasks across independent contractions executed concurrently through a dynamic load balancing runtime. We demonstrate the improved performance, scalability, and flexibility for the computation of tensor contraction expressions on parallel computers using examples from coupled cluster methods.
Effect of armor and carrying load on body balance and leg muscle function.
Park, Huiju; Branson, Donna; Kim, Seonyoung; Warren, Aric; Jacobson, Bert; Petrova, Adriana; Peksoz, Semra; Kamenidis, Panagiotis
2014-01-01
This study investigated the impact of weight and weight distribution of body armor and load carriage on static body balance and leg muscle function. A series of human performance tests were conducted with seven male, healthy, right-handed military students in seven garment conditions with varying weight and weight distributions. Static body balance was assessed by analyzing the trajectory of center of plantar pressure and symmetry of weight bearing in the feet. Leg muscle functions were assessed by analyzing the peak electromyography amplitude of four selected leg muscles during walking. Results of this study showed that uneven weight distribution of the garment and loads beyond an additional 9 kg impaired static body balance, as evidenced by increased sway of center of plantar pressure and asymmetry of weight bearing in the feet. Added weight on the non-dominant side of the body created a greater impediment to static balance. Increased garment weight also elevated peak EMG amplitude in the rectus femoris to maintain body balance and in the medial gastrocnemius to increase propulsive force. Negative impacts on balance and leg muscle function with increased carrying loads, particularly with an uneven weight distribution, should be stressed to soldiers, designers, and sports enthusiasts.
Load Balancing and Scalability of a Subgrid Orography Scheme in a Global Climate Model
Ghan, Steven J.; Shippert, Timothy R.
2005-09-01
A subgrid orography scheme has been applied to the National Center for Atmospheric Research Community Atmosphere Model. The scheme applies all of the model column physics to each of up to eleven elevation classes within each grid cell. The distribution of the number of elevation classes in each grid cell is highly inhomogeneous. This could produce a serious load imbalance if the domain decomposition distributes grid cells evenly across processors. But since the distribution of classes is static, static load balancing can be used to distribute the elevation classes uniformly across processors. The load balancing is accomplished first by distributing the number of classes evenly within each node. Chunks are then distributed uniformly across nodes, and the dynamics-physics transpose cost is minimized by assigning each chunk to the node holding the most dynamics grid cells from that chunk. Parallel efficiency with the subgrid scheme and load balancing exceeds parallel efficiency without the subgrid scheme for up to 128 processors. The load balancing across nodes decreases runtime by 10-30% depending on configuration.
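The static balancing idea described here can be sketched with a greedy heuristic: since class counts are known in advance, cells can be assigned to the currently least-loaded processor. This is an assumed longest-processing-time (LPT) sketch, not the actual CAM implementation, which also handles chunks and the dynamics-physics transpose.

```python
import heapq

def balance_classes(classes_per_cell, nprocs):
    """Greedy static load balancing (sketch): assign each grid cell's
    elevation classes to the least-loaded processor so total class
    counts end up nearly uniform across processors."""
    heap = [(0, p) for p in range(nprocs)]  # (current load, processor id)
    heapq.heapify(heap)
    assignment = {}
    # place heavier cells first for a tighter balance (LPT heuristic)
    for cell, nclasses in sorted(classes_per_cell.items(), key=lambda kv: -kv[1]):
        load, p = heapq.heappop(heap)
        assignment[cell] = p
        heapq.heappush(heap, (load + nclasses, p))
    loads = {}
    for cell, p in assignment.items():
        loads[p] = loads.get(p, 0) + classes_per_cell[cell]
    return assignment, loads
```

Because the class distribution is static, this assignment is computed once at startup rather than at every timestep.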
Prediction Interval Development for Wind-Tunnel Balance Check-Loading
NASA Technical Reports Server (NTRS)
Landman, Drew; Toro, Kenneth G.; Commo, Sean A.; Lynn, Keith C.
2014-01-01
Results from the Facility Analysis Verification and Operational Reliability project revealed a critical gap in capability in ground-based aeronautics research applications. Without a standardized process for check-loading the wind-tunnel balance or the model system, the quality of the aerodynamic force data collected varied significantly between facilities. A prediction interval is required in order to confirm a check-loading. The prediction interval provides an expected upper and lower bound on balance load prediction at a given confidence level. A method has been developed which accounts for sources of variability due to calibration and check-load application. The prediction interval method of calculation and a case study demonstrating its use are provided. Validation of the methods is demonstrated for the case study based on the probability of capture of confirmation points.
The effects of backpack load carrying on dynamic balance as measured by limits of stability.
Palumbo, Nicole; George, Brad; Johnson, Amanda; Cade, Dennis
2001-01-01
PURPOSE: The purpose of this research was to determine if standing dynamic balance was affected by carrying a backpack. SUBJECTS: Data was obtained from 50 healthy college students. MATERIALS AND METHODS: Limits of stability was assessed using the Smart Equitest Balance Master System(R). Reaction time, movement velocity, end point excursion, maximum excursion, and directional control were measured to evaluate movement, with and without a loaded backpack. DATA ANALYSIS: Reliability was established using an Intra-Class Correlation Coefficient (2,1). MANOVA was utilized to analyze the effect of the backpack. SUMMARY DATA: Movement velocity significantly decreased during backpack loaded trials (p=0.004). Directional control was significantly different with respect to direction (p=0.006). No significant difference in reaction time, maximum excursion, or end point excursion was observed with backpack loading (p=0.10-0.93). CONCLUSION: This study concludes that backpack load carrying has an effect on movement velocity and directional control. PMID:12441465
NASA Technical Reports Server (NTRS)
Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.
1989-01-01
Computer vision systems employ a sequence of vision algorithms in which the output of an algorithm is the input of the next algorithm in the sequence. Algorithms that constitute such systems exhibit vastly different computational characteristics, and therefore, require different data decomposition techniques and efficient load balancing techniques for parallel implementation. However, since the input data for a task is produced as the output data of the previous task, this information can be exploited to perform knowledge based data decomposition and load balancing. Presented here are algorithms for a motion estimation system. The motion estimation is based on the point correspondence between the involved images which are a sequence of stereo image pairs. Researchers propose algorithms to obtain point correspondences by matching feature points among stereo image pairs at any two consecutive time instants. Furthermore, the proposed algorithms employ non-iterative procedures, which results in saving considerable amounts of computation time. The system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from consecutive time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters.
Madenjian, Charles P.; David, Solomon R.; Pothoven, Steven A.
2012-01-01
We evaluated the performance of the Wisconsin bioenergetics model for lake trout Salvelinus namaycush that were fed ad libitum in laboratory tanks under regimes of low activity and high activity. In addition, we compared model performance under two different model algorithms: (1) balancing the lake trout energy budget on day t based on lake trout energy density on day t and (2) balancing the lake trout energy budget on day t based on lake trout energy density on day t + 1. Results indicated that the model significantly underestimated consumption for both inactive and active lake trout when algorithm 1 was used and that the degree of underestimation was similar for the two activity levels. In contrast, model performance substantially improved when using algorithm 2, as no detectable bias was found in model predictions of consumption for inactive fish and only a slight degree of overestimation was detected for active fish. The energy budget was accurately balanced by using algorithm 2 but not by using algorithm 1. Based on the results of this study, we recommend the use of algorithm 2 to estimate food consumption by fish in the field. Our study results highlight the importance of accurately accounting for changes in fish energy density when balancing the energy budget; furthermore, these results have implications for the science of evaluating fish bioenergetics model performance and for more accurate estimation of food consumption by fish in the field when fish energy density undergoes relatively rapid changes.
Design and implementation of web server soft load balancing in small and medium-sized enterprise
NASA Astrophysics Data System (ADS)
Yan, Liu
2011-12-01
With the expansion of business scale, small and medium-sized enterprises have begun to use information platforms to improve their management and competitiveness, and the server has become the core factor restricting an enterprise's informatization. This paper puts forward a suitable web server soft load balancing design scheme for small and medium-sized enterprises, and proves it effective through experiment.
Portable Parallel Programming for the Dynamic Load Balancing of Unstructured Grid Applications
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Das, Sajal K.; Harvey, Daniel; Oliker, Leonid
1999-01-01
The ability to dynamically adapt an unstructured grid (or mesh) is a powerful tool for solving computational problems with evolving physical features; however, an efficient parallel implementation is rather difficult, particularly from the viewpoint of portability on various multiprocessor platforms. We address this problem by developing PLUM, an automatic and architecture-independent framework for adaptive numerical computations in a message-passing environment. Portability is demonstrated by comparing performance on an SP2, an Origin2000, and a T3E, without any code modifications. We also present a general-purpose load balancer that utilizes symmetric broadcast networks (SBN) as the underlying communication pattern, with the goal of providing a global view of system loads across processors. Experiments on an SP2 and an Origin2000 demonstrate the portability of our approach, which achieves superb load balance at the cost of minimal extra overhead.
Gammon - A load balancing strategy for local computer systems with multiaccess networks
NASA Technical Reports Server (NTRS)
Baumgartner, Katherine M.; Wah, Benjamin W.
1989-01-01
Consideration is given to an efficient load-balancing strategy, Gammon (global allocation from maximum to minimum in constant time), for distributed computing systems connected by multiaccess local area networks. The broadcast capability of these networks is utilized to implement an identification procedure at the applications level for the maximally and the minimally loaded processors. The search technique has an average overhead which is independent of the number of participating stations. An implementation of Gammon on a network of Sun workstations is described. Its performance is found to be better than that of other known methods.
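The core max-to-min transfer step of a Gammon-style strategy can be sketched as follows. This is an illustrative reduction: in the actual strategy, the maximally and minimally loaded processors are identified via the network's broadcast capability rather than a shared dictionary, and migration costs are taken into account.

```python
def gammon_step(loads, threshold=1):
    """One balancing step (sketch): find the maximally and minimally
    loaded stations and migrate one unit of load between them when the
    imbalance exceeds a threshold; otherwise do nothing."""
    src = max(loads, key=loads.get)
    dst = min(loads, key=loads.get)
    if loads[src] - loads[dst] > threshold:
        loads[src] -= 1
        loads[dst] += 1
        return src, dst  # (sender, receiver) of the migrated unit
    return None
```

Repeating this step drives the load spread toward the threshold, which is the "global allocation from maximum to minimum" idea in miniature.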
NASA Astrophysics Data System (ADS)
Engelder, Terry; Fischer, Mark P.
1996-05-01
Using the Griffith energy-balance concept to model joint propagation in the brittle crust, two laboratory loading configurations serve as appropriate analogs for in situ conditions: the dead-weight load and the fixed-grips load. The distinction between these loading configurations is based largely on whether or not a loaded boundary moves as a joint grows. During displacement of a loaded boundary, the energy necessary for joint propagation comes from work by the dead weight (i.e., a remote stress). When the loaded boundary remains stationary, as if held by rigid grips, the energy for joint propagation develops upon release of elastic strain energy within the rock mass. These two generic loading configurations serve as models for four common natural loading configurations: a joint-normal load; a thermoelastic load; a fluid load; and an axial load. Each loading configuration triggers a different joint-driving mechanism, each of which is the release of energy through elastic strain and/or work. The four mechanisms for energy release are joint-normal stretching, elastic contraction, poroelastic contraction under either a constant fluid drive or fluid decompression, and axial shortening, respectively. Geological circumstances favoring each of the joint-driving mechanisms are as follows. The release of work under joint-normal stretching occurs whenever layer-parallel extension keeps pace with slow or subcritical joint propagation. Under fixed grips, a substantial crack-normal tensile stress can accumulate by thermoelastic contraction until joint propagation is driven by the release of elastic strain energy. Within the Earth the rate of joint propagation dictates which of these two driving mechanisms operates, with faster propagation driven by release of strain energy. Like a dead-weight load acting to separate the joint walls, pore fluid exerts a traction on the interior of some joints. Joint propagation under fluid loading may be driven by a release of elastic strain
Method of up-front load balancing for local memory parallel processors
NASA Technical Reports Server (NTRS)
Baffes, Paul Thomas (Inventor)
1990-01-01
In a parallel processing computer system with multiple processing units and shared memory, a method is disclosed for uniformly balancing the aggregate computational load in, and utilizing minimal memory by, a network having identical computations to be executed at each connection therein. Read-only and read-write memory are subdivided into a plurality of process sets, which function like artificial processing units. Said plurality of process sets is iteratively merged and reduced to the number of processing units without exceeding the balance load. Said merger is based upon the value of a partition threshold, which is a measure of the memory utilization. The turnaround time and memory savings of the instant method are functions of the number of processing units available and the number of partitions into which the memory is subdivided. Typical results of the preferred embodiment yielded memory savings of from sixty to seventy five percent.
Strain-balanced MQW pin solar cells grown using a robot-loading showerhead reactor
NASA Astrophysics Data System (ADS)
Roberts, J. S.; Airey, R.; Hill, G.; Calder, C.; Barnham, K. W. J.; Lynch, M.; Tibbits, T.; Johnson, D.; Pakes, A.; Grantham, T.
2007-01-01
A touch-screen controlled, robot-loading system for the Thomas Swan 7×2 flip-top showerhead reactor has been developed. The reactor has been configured for the growth of GaAs and InP materials and has been used to prepare strain-balanced MQW (SBMQW) pin solar cell material on GaAs substrates. Both material characterisation and solar cell performance for SBMQW pin cells are described.
Valiant Load-Balancing: Building Networks That Can Support All Traffic Matrices
NASA Astrophysics Data System (ADS)
Zhang-Shen, Rui
This paper is a brief survey on how Valiant load-balancing (VLB) can be used to build networks that can efficiently and reliably support all traffic matrices. We discuss how to extend VLB to networks with heterogeneous capacities, how to protect against failures in a VLB network, and how to interconnect two VLB networks. For the readers' reference, included also is a list of work that uses VLB in various aspects of networking.
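In its simplest form, VLB's two-phase routing can be sketched in a few lines: every packet is first forwarded to a uniformly random intermediate node and then on to its destination. The function below is an assumed minimal illustration of that idea, ignoring capacities, failures, and heterogeneity, which are the survey's actual subjects.

```python
import random

def vlb_route(src, dst, nodes, rng=random):
    """Two-phase Valiant load-balanced route (sketch): bounce the
    packet off a uniformly random intermediate node. Spreading traffic
    this way is what lets the network support all traffic matrices."""
    mid = rng.choice(nodes)
    if mid in (src, dst):
        return [src, dst]  # degenerate choice: route directly
    return [src, mid, dst]
```

The randomized detour roughly doubles the path length but makes the offered load on each link independent of the traffic matrix.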
NASA Astrophysics Data System (ADS)
Hattori, Toshihiro; Takamatsu, Rieko
We calculated nitrogen balances on farm gate and soil surface on large-scale stock farms and discussed methods for reducing environmental nitrogen loads. Four different types of public stock farms (organic beef, calf supply and dairy cows) were surveyed in Aomori Prefecture. (1) Farm gate and soil surface nitrogen inflows were both larger than the respective outflows on all types of farms. Farm gate nitrogen balance for beef farms was worse than that for dairy farms. (2) Soil surface nitrogen outflows and soil nitrogen retention were in proportion to soil surface nitrogen inflows. (3) Reductions in soil surface nitrogen retention were influenced by soil surface nitrogen inflows. (4) In order to reduce farm gate nitrogen retention, inflows of formula feed and chemical fertilizer need to be reduced. (5) In order to reduce soil surface nitrogen retention, inflows of fertilizer need to be reduced and nitrogen balance needs to be controlled.
Zemková, E; Štefániková, G; Muyor, J M
2016-08-01
This study investigates test-retest reliability and diagnostic accuracy of the load release balance test under four varied conditions. Young, early and late middle-aged physically active and sedentary subjects performed the test over 2 testing sessions spaced 1 week apart while standing on either (1) a stable or (2) an unstable surface with (3) eyes open (EO) and (4) eyes closed (EC), respectively. Results identified that test-retest reliability of parameters of the load release balance test was good to excellent, with high values of ICC (0.78-0.92) and low SEM (7.1%-10.7%). The peak and the time to peak posterior center of pressure (CoP) displacement were significantly lower in physically active as compared to sedentary young adults (21.6% and 21.0%) and early middle-aged adults (22.0% and 20.9%) while standing on a foam surface with EO, and in late middle-aged adults on both unstable (25.6% and 24.5%) and stable support surfaces with EO (20.4% and 20.0%). The area under the ROC curve >0.80 for these variables indicates good discriminatory accuracy. Thus, these variables of the load release balance test measured under unstable conditions have the ability to differentiate between groups of physically active and sedentary adults as early as from 19 years of age. PMID:27203382
Development of a two wheeled self balancing robot with speech recognition and navigation algorithm
NASA Astrophysics Data System (ADS)
Rahman, Md. Muhaimin; Ashik-E-Rasul, Haq, Nowab. Md. Aminul; Hassan, Mehedi; Hasib, Irfan Mohammad Al; Hassan, K. M. Rafidh
2016-07-01
This paper is aimed at discussing the modeling, construction and development of the navigation algorithm of a two wheeled self balancing mobile robot in an enclosure. In this paper, we have discussed the design of two of the main controller algorithms, namely PID algorithms, on the robot model. Simulation is performed in the SIMULINK environment. The controller is developed primarily for self-balancing of the robot and also its positioning. As for the navigation in an enclosure, a template matching algorithm is proposed for precise measurement of the robot position. The navigation system needs to be calibrated before the navigation process starts. Almost all of the earlier template matching algorithms that can be found in the open literature can only trace the robot. But the proposed algorithm here can also locate the position of other objects in an enclosure, like furniture, tables etc. This will enable the robot to know the exact location of every stationary object in the enclosure. Moreover, some additional features, such as Speech Recognition and Object Detection, are added. For Object Detection, the single board computer Raspberry Pi is used. The system is programmed to analyze images captured via the camera, which are then processed through background subtraction, followed by active noise reduction.
A Beacon Transmission Power Control Algorithm Based on Wireless Channel Load Forecasting in VANETs.
Mo, Yuanfu; Yu, Dexin; Song, Jun; Zheng, Kun; Guo, Yajuan
2015-01-01
In a vehicular ad hoc network (VANET), the periodic exchange of single-hop status information broadcasts (beacon frames) produces channel loading, which causes channel congestion and induces information conflict problems. To guarantee fairness in beacon transmissions from each node and maximum network connectivity, adjustment of the beacon transmission power is an effective method for reducing and preventing channel congestion. In this study, the primary factors that influence wireless channel loading are selected to construct the KF-BCLF, a channel load forecasting algorithm that is based on a recursive Kalman filter and employs a multiple regression equation. By pre-adjusting the transmission power based on the forecasted channel load, the channel load was kept within a predefined range; therefore, channel congestion was prevented. Based on this method, the CLF-BTPC, a transmission power control algorithm, is proposed. To verify the KF-BCLF algorithm, a traffic survey method that involved the collection of floating car data along a major traffic road in Changchun City is employed. By comparing this forecast with the measured channel loads, the proposed KF-BCLF algorithm was proven to be effective. In addition, the CLF-BTPC algorithm is verified by simulating a section of eight-lane highway and a signal-controlled urban intersection. The results of the two verification processes indicate that this distributed CLF-BTPC algorithm can effectively control channel load, prevent channel congestion, and enhance the stability and robustness of wireless beacon transmission in a vehicular network. PMID:26571042
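The recursive Kalman filtering at the heart of KF-BCLF can be sketched in one dimension. This is an assumed simplification: a random-walk state model with made-up noise variances `q` and `r`, whereas the actual algorithm combines the filter with a multiple regression equation over several channel-load factors.

```python
def kalman_forecast(measurements, q=1e-3, r=0.1):
    """Scalar recursive Kalman filter (sketch): each step predicts the
    next channel load from the current estimate, then corrects the
    estimate with the new measurement. q and r are assumed process and
    measurement noise variances."""
    x, p = measurements[0], 1.0  # initial state estimate and covariance
    forecasts = []
    for z in measurements[1:]:
        # predict (random-walk model: next load ~ current load)
        p = p + q
        forecasts.append(x)
        # update with the new measurement
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)
        p = (1 - k) * p
    return forecasts, x
```

Pre-adjusting the beacon power against `forecasts` rather than the raw, noisy measurements is what keeps the channel load inside the predefined range.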
NASA Astrophysics Data System (ADS)
Sakai, Yuji; Hukushima, Koji
2016-09-01
Recent numerical studies concerning the simulated tempering algorithm without the detailed balance condition are reviewed, and an irreversible simulated tempering algorithm based on the skew detailed balance condition is described. A method to estimate weight factors in simulated tempering by sequentially implementing the irreversible simulated tempering algorithm is studied in comparison with the conventional simulated tempering algorithm satisfying the detailed balance condition. It is found that the total number of Monte Carlo steps needed for estimating the weight factors is successfully reduced by applying the proposed method to a two-dimensional ferromagnetic Ising model.
PLUM: Parallel Load Balancing for Unstructured Adaptive Meshes. Degree awarded by Colorado Univ.
NASA Technical Reports Server (NTRS)
Oliker, Leonid
1998-01-01
Dynamic mesh adaption on unstructured grids is a powerful tool for computing large-scale problems that require grid modifications to efficiently resolve solution features. By locally refining and coarsening the mesh to capture physical phenomena of interest, such procedures make standard computational methods more cost effective. Unfortunately, an efficient parallel implementation of these adaptive methods is rather difficult to achieve, primarily due to the load imbalance created by the dynamically-changing nonuniform grid. This requires significant communication at runtime, leading to idle processors and adversely affecting the total execution time. Nonetheless, it is generally thought that unstructured adaptive-grid techniques will constitute a significant fraction of future high-performance supercomputing. Various dynamic load balancing methods have been reported to date; however, most of them either lack a global view of loads across processors or do not apply their techniques to realistic large-scale applications.
An Evolutionary Algorithm for Improved Diversity in DSL Spectrum Balancing Solutions
NASA Astrophysics Data System (ADS)
Bezerra, Johelden; Klautau, Aldebaro; Monteiro, Marcio; Pelaes, Evaldo; Medeiros, Eduardo; Dortschy, Boris
2010-12-01
There are many spectrum balancing algorithms to combat the deleterious impact of crosstalk interference in digital subscriber lines (DSL) networks. These algorithms aim to find a unique operating point by optimizing the power spectral densities (PSDs) of the modems. Typically, the figure of merit of this optimization is the bit rate, power consumption or margin. This work poses and solves a different problem: instead of providing the solution for one specific operation point, it finds a set of operating points, each one corresponding to a distinct matrix with PSDs. This solution is useful for planning DSL deployment, for example, helping operators to conveniently evaluate their network capabilities and better plan their usage. The proposed method is based on a multiobjective formulation and implemented as an evolutionary genetic algorithm. Simulation results show that this algorithm achieves a better diversity among the operating points with lower computational cost when compared to an alternative approach.
Solar Load Voltage Tracking for Water Pumping: An Algorithm
NASA Astrophysics Data System (ADS)
Kappali, M.; Udayakumar, R. Y.
2014-07-01
Maximum power is to be harnessed from a solar photovoltaic (PV) panel to minimize the effective cost of solar energy. This is accomplished by maximum power point tracking (MPPT). There are different methods to realise MPPT. This paper proposes a simple algorithm to implement the load-voltage MPPT (MPPTlv) method in a closed-loop environment for a centrifugal pump driven by a brushed PMDC motor. Simulation testing of the algorithm is done and the results are found to be encouraging and supportive of the proposed MPPTlv method.
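A load-voltage tracking step of this general kind can be sketched as a perturb-and-observe loop on the converter duty cycle. This is an assumed illustration of hill-climbing on the measured load voltage; the abstract does not give the paper's exact MPPTlv control law, and the function and parameter names here are hypothetical.

```python
def mppt_lv_step(duty, v_load, prev_v, prev_duty, step=0.01):
    """One perturb-and-observe style step (sketch): keep perturbing the
    duty cycle in the direction that increased the measured load
    voltage; reverse direction when the voltage falls."""
    direction = 1 if duty >= prev_duty else -1
    if v_load < prev_v:
        direction = -direction  # last move hurt: go the other way
    return min(max(duty + direction * step, 0.0), 1.0)  # clamp to [0, 1]
```

For a centrifugal pump load, tracking the load voltage in this way is a proxy for operating near the panel's maximum power point.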
Leung, Dawn S. S.; Holmes, Andrew D.
2007-01-01
The balance function of children is known to be affected by carriage of a school backpack. Children with adolescent idiopathic scoliosis (AIS) tend to show poorer balance performance, and are typically treated by bracing, which further affects balance. The objective of this study is to examine the combined effects of school backpack carriage and bracing on girls with AIS. A force platform was used to record center of pressure (COP) motion in 20 schoolgirls undergoing thoraco-lumbar-sacral orthosis (TLSO brace) treatment for AIS. COP data were recorded with and without brace while carrying a backpack loaded at 0, 7.5, 10, 12.5 and 15% of the participant’s bodyweight (BW). Ten participants stood on a solid base and ten stood on a foam base, while all participants kept their eyes closed throughout. Sway parameters were analyzed by repeated measures ANOVA. No effect of bracing was found for the participants standing on the solid base, but wearing the brace significantly increased the sway area, displacement and medio-lateral amplitude in the participants standing on the foam base. The medio-lateral sway amplitude of participants standing on the solid base significantly increased with backpack load, whereas significant increases in antero-posterior sway amplitude, sway path length, sway area per second and short term diffusion coefficient were found in participants standing on the foam base. The poorer balance performance exhibited by participants with AIS when visual and somatosensory input is challenged appears to be exacerbated by wearing a TLSO brace, but no interactive effect between bracing and backpack loading was found. PMID:17340156
Load identification approach based on basis pursuit denoising algorithm
NASA Astrophysics Data System (ADS)
Ginsberg, D.; Ruby, M.; Fritzen, C. P.
2015-07-01
Information about external loads is of great interest in many fields of structural analysis, such as structural health monitoring (SHM) systems or assessment of damage after extreme events. However, in most cases it is not possible to measure the external forces directly, so they need to be reconstructed. Load reconstruction refers to the problem of estimating an input to a dynamic system when the system output and the impulse response functions are known. Generally, this leads to a so-called ill-posed inverse problem, which involves solving an underdetermined linear system of equations. For most practical applications it can be assumed that the applied loads are not arbitrarily distributed in time and space; at least some specific characteristics of the external excitation are known a priori. In this contribution, this knowledge is used to develop a more suitable force reconstruction method, which allows identifying the time history and the force location simultaneously while employing significantly fewer sensors than other reconstruction approaches. The properties of the external force are used to transform the ill-posed problem into a sparse recovery task. The sparse solution is acquired by solving a minimization problem known as basis pursuit denoising (BPDN). The possibility of reconstructing loads based on noisy structural measurement signals is demonstrated by considering two frequently occurring loading conditions: harmonic excitation and impact events, separately and combined. First a simulation study of a simple plate structure is carried out, and thereafter an experimental investigation of a real beam is performed.
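The BPDN problem minimizes 0.5*||Ax - y||^2 + lam*||x||_1, and a minimal solver can be sketched with the iterative shrinkage-thresholding algorithm (ISTA). This is an assumed textbook illustration, not the paper's solver; practical load-identification codes would use an optimized BPDN implementation and a measured impulse-response matrix A.

```python
def ista_bpdn(A, y, lam=0.1, step=0.1, iters=500):
    """Solve min 0.5*||Ax - y||^2 + lam*||x||_1 by ISTA (sketch):
    a gradient step on the data term followed by soft-thresholding,
    which drives most entries of x to zero (the sparse force vector)."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = A x - y
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        # gradient of the data term: A^T r
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step followed by soft-thresholding
        for j in range(n):
            v = x[j] - step * g[j]
            x[j] = max(abs(v) - step * lam, 0.0) * (1 if v > 0 else -1)
    return x
```

The soft-thresholding step is what enforces sparsity: components of the reconstructed load that the data does not support are shrunk exactly to zero.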
Energy balance in advanced audio coding encoder bit-distortion loop algorithm
NASA Astrophysics Data System (ADS)
Brzuchalski, Grzegorz; Pastuszak, Grzegorz
2013-10-01
The paper presents two techniques for balancing energy in scale-factor bands for Advanced Audio Coding (AAC). These techniques allow the AAC encoder to achieve better audio quality. The first modifies the scale factors assigned to each band after quantization, whereas the second finds and changes offsets in the quantization step, just before rounding down. The implementations of the algorithms have been tested and the results discussed; they show that these techniques significantly improve quality. Finally, hardware implementation possibilities are discussed.
Latency Hiding in Dynamic Partitioning and Load Balancing of Grid Computing Applications
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak
2001-01-01
The Information Power Grid (IPG) concept developed by NASA aims to provide a metacomputing platform for large-scale distributed computations, hiding the intricacies of a highly heterogeneous environment while maintaining adequate security. In this paper, we propose a latency-tolerant partitioning scheme that dynamically balances processor workloads on the IPG and minimizes data movement and runtime communication. By simulating an unsteady adaptive mesh application on a wide-area network, we study the performance of our load balancer under the Globus environment. The number of IPG nodes, the number of processors per node, and the interconnect speeds are parameterized to derive conditions under which the IPG would be suitable for parallel distributed processing of such applications. Experimental results demonstrate that effective solutions are achieved when the IPG nodes are connected by a high-speed asynchronous interconnection network.
A Novel Control algorithm based DSTATCOM for Load Compensation
NASA Astrophysics Data System (ADS)
R, Sreejith; Pindoriya, Naran M.; Srinivasan, Babji
2015-11-01
The Distribution Static Compensator (DSTATCOM) has been used as a custom power device for voltage regulation and load compensation in distribution systems. Controlling the switching angle has been the biggest challenge in DSTATCOM. To date, the Proportional-Integral (PI) controller is widely used in practice for load compensation due to its simplicity. However, the PI controller fails to perform satisfactorily under parameter variations and nonlinearities, making it very challenging to arrive at the best/optimal tuning values for different operating conditions. Fuzzy-logic and neural-network based controllers require extensive training and perform well only under limited perturbations. Model predictive control (MPC) is a powerful control strategy used in the petrochemical industry, and its application has spread to other fields. MPC can handle various constraints, incorporate system nonlinearities, and utilize multivariate/univariate model information to provide an optimal control strategy. Though it finds extensive application in chemical engineering, its utility in power systems has been limited by the high computational effort, which is incompatible with the high sampling frequencies in these systems. In this paper, we propose a DSTATCOM based on Finite Control Set Model Predictive Control (FCS-MPC) with Instantaneous Symmetrical Component Theory (ISCT) based reference current extraction, for load compensation and Unity Power Factor (UPF) action in current control mode. The proposed controller's performance is evaluated for a 3-phase, 3-wire, 415 V, 50 Hz distribution system in MATLAB/Simulink, which demonstrates its applicability in real-life situations.
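The core FCS-MPC idea — enumerate the finite set of inverter switch states, predict the current one step ahead for each, and apply the state with the smallest tracking cost — can be sketched as below. This is a generic two-level-inverter illustration in the alpha-beta frame, not the paper's ISCT-based controller; the R-L load model and all numeric values are assumptions.

```python
import numpy as np

# Illustrative plant parameters (assumptions, not the paper's 415 V system)
VDC, R, L, TS = 700.0, 0.5, 10e-3, 50e-6

def inverter_vectors(vdc):
    """Alpha-beta voltage vectors for the 8 switch states of a
    two-level, three-phase inverter (amplitude-invariant Clarke transform)."""
    states = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
    vecs = []
    for sa, sb, sc in states:
        va, vb, vc = (vdc * s for s in (sa, sb, sc))
        alpha = (2 * va - vb - vc) / 3
        beta = (vb - vc) / np.sqrt(3)
        vecs.append(complex(alpha, beta))
    return vecs

def fcs_mpc_step(i_meas, i_ref, v_grid):
    """Pick the switch state whose one-step current prediction
    (forward-Euler R-L model) is closest to the reference."""
    best, best_cost = None, float("inf")
    for k, v in enumerate(inverter_vectors(VDC)):
        i_pred = i_meas + TS / L * (v - v_grid - R * i_meas)
        cost = abs(i_ref - i_pred)          # simple current-tracking cost
        if cost < best_cost:
            best, best_cost = k, cost
    return best
```

Because the candidate set is finite (8 states here), the optimization reduces to a handful of predictions per sampling period, which is what makes the scheme compatible with the high sampling frequencies mentioned above.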
Dynamic Harmony Search with Polynomial Mutation Algorithm for Valve-Point Economic Load Dispatch
Karthikeyan, M.; Sree Ranga Raja, T.
2015-01-01
The economic load dispatch (ELD) problem is an important issue in the operation and control of modern power systems. The ELD problem is complex and nonlinear, with equality and inequality constraints, which makes it hard to solve efficiently. This paper presents a new modification of the harmony search (HS) algorithm, named the dynamic harmony search with polynomial mutation (DHSPM) algorithm, to solve the ELD problem. In the DHSPM algorithm the key parameters of the HS algorithm, the harmony memory considering rate (HMCR) and the pitch adjusting rate (PAR), are changed dynamically, so there is no need to predefine them. Additionally, polynomial mutation is inserted in the updating step of the HS algorithm to favor exploration and exploitation of the search space. The DHSPM algorithm is tested on three power system cases consisting of 3, 13, and 40 thermal units. The computational results show that the DHSPM algorithm is more effective in finding better solutions than other computational-intelligence-based methods. PMID:26491710
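The HS mechanics referred to above (harmony memory, HMCR, PAR, pitch adjustment) can be sketched as follows. This is a bare-bones harmony search with a linearly varied PAR, not the paper's DHSPM (no polynomial mutation), and the parameter values and test function are assumptions.

```python
import random

def harmony_search(f, dim, lo, hi, hms=10, iters=2000, seed=1):
    """Minimal harmony search: keep a memory of hms harmonies, improvise a
    new one each iteration, replace the worst if the new one is better."""
    rng = random.Random(seed)
    mem = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    cost = [f(x) for x in mem]
    hmcr, bw = 0.9, 0.05 * (hi - lo)         # memory rate, pitch bandwidth
    for t in range(iters):
        par = 0.1 + 0.8 * t / iters          # PAR varied dynamically over the run
        new = []
        for d in range(dim):
            if rng.random() < hmcr:          # memory consideration
                v = mem[rng.randrange(hms)][d]
                if rng.random() < par:       # pitch adjustment
                    v += rng.uniform(-bw, bw)
            else:                            # random selection
                v = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, v)))
        fn = f(new)
        worst = max(range(hms), key=cost.__getitem__)
        if fn < cost[worst]:                 # replace the worst harmony
            mem[worst], cost[worst] = new, fn
    best = min(range(hms), key=cost.__getitem__)
    return mem[best], cost[best]

# A convex stand-in for a fuel-cost curve (illustrative only)
sphere = lambda x: sum(v * v for v in x)
```

A real ELD objective would replace `sphere` with the generators' fuel-cost functions plus penalty terms for the power-balance and unit-limit constraints.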
Smart sensing to drive real-time loads scheduling algorithm in a domotic architecture
NASA Astrophysics Data System (ADS)
Santamaria, Amilcare Francesco; Raimondo, Pierfrancesco; De Rango, Floriano; Vaccaro, Andrea
2014-05-01
Nowadays, attention to power consumption is a very important factor in reducing costs and addressing environmental sustainability. Automatic load control based on power consumption and duty cycle is an effective way to restrain costs. The purpose of these systems is to modulate the demand for electricity, avoiding unorganized operation of the loads, by using intelligent techniques to manage them based on real-time scheduling algorithms. The goal is to coordinate a set of electrical loads to optimize energy costs and consumption according to the stipulated contract terms. The proposed algorithm uses two new main notions: priority-driven loads and smart scheduling loads. Priority-driven loads can be turned off (put on stand-by) according to a priority policy established by the user if consumption exceeds a defined threshold; smart scheduling loads, on the contrary, are scheduled so as not to interrupt their life cycle (LC), safeguarding the devices' functions and allowing the user to operate the devices freely without the risk of exceeding the power threshold. Using these two notions and taking user requirements into account, the algorithm manages load activation and deactivation, allowing loads to complete their operation cycles without exceeding the consumption threshold, in an off-peak time range according to the electricity fare. This logic is inspired by industrial lean manufacturing, whose focus is to minimize power waste by optimizing the available resources.
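The priority-driven notion described above can be sketched as a small shedding routine: when total consumption exceeds the threshold, the lowest-priority active loads are put on stand-by first. The dictionary layout and example loads are illustrative assumptions, not the paper's data model.

```python
def enforce_threshold(loads, threshold):
    """Put the lowest-priority loads on stand-by until the total power
    draw is within the threshold.

    loads: {name: (power_W, priority)}, higher priority = kept longer."""
    active = dict(loads)
    # Shed candidates in order of ascending priority
    for name, (power, prio) in sorted(loads.items(), key=lambda kv: kv[1][1]):
        if sum(p for p, _ in active.values()) <= threshold:
            break                       # already within the contract limit
        del active[name]                # stand-by for this load
    return active
```

A smart scheduling load would instead be deferred to an off-peak slot rather than interrupted, so its operation cycle runs to completion.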
NASA Astrophysics Data System (ADS)
Gautam, Amit Kr.; Gautam, Ajay Kr.; Patel, R. B.
2010-11-01
In order to provide load balancing in clustered sensor deployments, the upstream clusters (near the BS) are kept smaller in size than the downstream ones (away from the BS). Moreover, geographic awareness is also desirable in order to further enhance energy efficiency. But this must be cost-effective, since most current location-awareness strategies are either cost- and weight-inefficient (GPS) or complex, inaccurate and unreliable in operation. This paper presents the design and implementation of a Geographic LOad BALanced (GLOBAL) Clustering Protocol for wireless sensor networks. A mathematical formulation is provided for determining the number of sensor nodes in each cluster; this enables uniform energy consumption under multi-hop data transmission towards the BS. Either the sensors can be deployed manually, or the clusters can be formed so that the sensors are distributed efficiently as per the formulation; the latter strategy is elaborated in this contribution. Methods to provide static clustering and custom cluster sizes with location awareness are also given. Finally, low-mobility node applications can also implement the proposed scheme.
A load balancing bufferless deflection router for network-on-chip
NASA Astrophysics Data System (ADS)
Xiaofeng, Zhou; Zhangming, Zhu; Duan, Zhou
2016-07-01
The bufferless router has emerged as an interesting option for cost-efficient network-on-chip (NoC) design. However, the bufferless router works well only under low network load, because deflections occur more readily as the injection rate increases. In this paper, we propose a load balancing bufferless deflection router (LBBDR) for NoC that relieves the effect of deflection in bufferless NoCs. The proposed LBBDR employs a balance toggle identifier in the source router to control the initial routing direction (X or Y) for a flit in the network; based on this mechanism, the flit is then routed according to XY or YX routing. When two or more flits contend for the same desired output port, a priority policy called nearer-first is used to resolve output-port allocation contention. Simulation results show that the proposed LBBDR improves on reported bufferless routing schemes in flit deflection rate, average packet latency and throughput by up to 13%, 10% and 6%, respectively. Its layout area and power consumption are 12% and 7% lower, respectively, than the reported schemes. Project supported by the National Natural Science Foundation of China (Nos. 61474087, 61322405, 61376039).
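The balance-toggle idea — alternating each source between X-first (XY) and Y-first (YX) dimension-ordered routing so traffic spreads over both path families — can be sketched on a mesh as follows. The class layout and hop enumeration are illustrative assumptions, not the LBBDR microarchitecture.

```python
class SourceRouter:
    """Sketch of a balance toggle: successive flits from the same source
    alternate between XY and YX dimension-ordered routes on a 2-D mesh."""

    def __init__(self):
        self.toggle = 0

    def route(self, src, dst):
        (sx, sy), (dx, dy) = src, dst
        order = "XY" if self.toggle == 0 else "YX"
        self.toggle ^= 1                 # flip for the next flit
        hops, x, y = [], sx, sy
        dims = ((0, dx), (1, dy)) if order == "XY" else ((1, dy), (0, dx))
        for dim, target in dims:         # walk one dimension to completion
            while (x, y)[dim] != target:
                if dim == 0:
                    x += 1 if dx > x else -1
                else:
                    y += 1 if dy > y else -1
                hops.append((x, y))
        return order, hops
```

Both orders reach the same destination with the same hop count; what changes is which intermediate links carry the flit, which is the load-spreading effect the toggle is after.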
NASA Astrophysics Data System (ADS)
Ghani Abro, Abdul; Mohamad-Saleh, Junita
2014-10-01
The prime motive of economic load dispatch (ELD) is to optimize the production cost of electrical power generation through an appropriate division of the load demand among online generating units. Bio-inspired optimization algorithms have outperformed classical techniques for optimizing production cost. The probability-selection artificial bee colony (PS-ABC) algorithm is a recently proposed variant of the ABC optimization algorithm. PS-ABC generates optimal solutions using three different mutation equations simultaneously, and results show its improved performance over the ABC algorithm. Nevertheless, all the mutation equations of PS-ABC are excessively self-reinforced and, hence, PS-ABC is prone to premature convergence. Therefore, this research work replaces the mutation equations and improves the scout-bee stage of PS-ABC to enhance the algorithm's performance. The proposed algorithm has been compared with many ABC variants and numerous other optimization algorithms on benchmark functions and ELD test cases. The adapted ELD test cases comprise transmission losses, multiple-fuel effects, valve-point effects and toxic gas emission constraints. The results reveal that, among the compared algorithms, the proposed algorithm has the best capability to yield the optimal solution for the problem.
Senay, Gabriel B.
2008-01-01
The main objective of this study is to present an improved modeling technique called Vegetation ET (VegET) that integrates commonly used water balance algorithms with a remotely sensed Land Surface Phenology (LSP) parameter to conduct operational vegetation water balance modeling of rainfed systems at the LSP's spatial scale using readily available global data sets. Evaluation of the VegET model was conducted using flux-tower data and a two-year simulation for the conterminous US. The VegET model is capable of estimating the actual evapotranspiration (ETa) of rainfed crops and other vegetation types at the spatial resolution of the LSP on a daily basis, replacing the need to estimate crop- and region-specific crop coefficients.
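The integration described above can be illustrated with one daily step of a generic bucket water balance, where reference ET is scaled by an LSP-derived coefficient instead of a crop-specific one. The stress limiter and all parameter names here are illustrative assumptions, not the VegET formulation.

```python
def water_balance_day(soil_w, rain, et0, lsp_coeff, whc):
    """One daily step of a simple bucket water balance (VegET-like in spirit):
    actual ET = reference ET (et0) scaled by a remotely sensed LSP-based
    coefficient, limited by available soil water; storage is capped at the
    water-holding capacity (whc), excess treated as runoff/drainage."""
    stress = min(1.0, soil_w / (0.5 * whc))       # simple soil-moisture limiter
    eta = min(soil_w, lsp_coeff * et0 * stress)   # actual evapotranspiration
    soil_w = min(whc, soil_w - eta + rain)        # update storage, cap at whc
    return soil_w, eta
```

Run over a daily rainfall and ET0 time series, the accumulated `eta` gives the kind of per-pixel actual-ET estimate the model produces at the LSP's resolution.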
Berg, Jonathan Charles; Halse, Chris; Crowther, Ashley; Barlas, Thanasis; Wilson, David Gerald; Berg, Dale E.; Resor, Brian Ray
2010-06-01
Prior work on active aerodynamic load control (AALC) of wind turbine blades has demonstrated that appropriate use of this technology has the potential to yield significant reductions in blade loads, leading to a decrease in the cost of wind energy. While the general concept of AALC is usually discussed in the context of multiple sensors and active control devices (such as flaps) distributed over the length of the blade, most work to date has been limited to a single control device per blade with very basic proportional-derivative controllers, due to limitations in the aeroservoelastic codes used to perform turbine simulations. This work utilizes a new aeroservoelastic code developed at Delft University of Technology to model the NREL/UpWind 5 MW wind turbine and investigate the relative advantage of multiple-device AALC. System identification techniques are used to identify the frequencies and shapes of turbine vibration modes, and these are used with modern control techniques to develop both Single-Input Single-Output (SISO) and Multiple-Input Multiple-Output (MIMO) LQR flap controllers. Comparison of simulation results with these controllers shows that the MIMO controller does yield some improvement over the SISO controller in fatigue load reduction, but additional improvement is possible with further refinement. In addition, a preliminary investigation shows that AALC has the potential to reduce off-axis gearbox loads, leading to reduced gearbox bearing fatigue damage and improved lifetimes.
A New Load Residual Threshold Definition for the Evaluation of Wind Tunnel Strain-Gage Balance Data
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Volden, T.
2016-01-01
A new definition of a threshold for the detection of load residual outliers in wind tunnel strain-gage balance data was developed. The new threshold is defined as the product of the inverse of the absolute value of the primary gage sensitivity and an empirical limit on the electrical outputs of a strain gage. The empirical limit on the outputs is 2.5 microV/V for balance calibration or check load residuals. A reduced limit of 0.5 microV/V is recommended for the evaluation of differences between repeat load points because, by design, the calculation of these differences removes errors in the residuals that are associated with the regression analysis of the data itself. The definition of the new threshold and different methods for the determination of the primary gage sensitivity are discussed. In addition, calibration data of a six-component force balance and a five-component semi-span balance are used to illustrate the application of the proposed threshold definition to different types of strain-gage balances. During the discussion of the force balance example it is also explained how the estimated maximum expected output of a balance gage can be used to better understand the results of applying the new threshold definition.
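The threshold definition above is a one-line computation: the output limit divided by the absolute primary gage sensitivity converts an electrical limit (microV/V) into load units. A minimal sketch, with the sensitivity value in the example being an assumption:

```python
def residual_threshold(primary_gage_sensitivity, output_limit_uV_per_V):
    """Load residual threshold per the definition above: the empirical
    output limit (2.5 microV/V for calibration/check load residuals,
    0.5 microV/V for repeat-point differences) divided by the absolute
    primary gage sensitivity (microV/V per unit load), giving a
    threshold in load units."""
    return output_limit_uV_per_V / abs(primary_gage_sensitivity)

# Hypothetical gage with sensitivity 2.0 microV/V per lbf:
calib_threshold = residual_threshold(2.0, 2.5)   # for calibration residuals
repeat_threshold = residual_threshold(2.0, 0.5)  # for repeat-point differences
```

A residual larger than the returned value would be flagged as an outlier for that gage.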
PARALLEL IMPLEMENTATION OF THE TOPAZ OPACITY CODE: ISSUES IN LOAD-BALANCING
Sonnad, V; Iglesias, C A
2008-05-12
The TOPAZ opacity code explicitly includes configuration term structure in the calculation of bound-bound radiative transitions. This approach involves myriad spectral lines and requires the large computational capabilities of parallel processing computers. It is important, however, to make use of these resources efficiently. For example, an increase in the number of processors should yield a comparable reduction in computational time. This proportional 'speedup' indicates that very large problems can be addressed with massively parallel computers. Opacity codes can readily take advantage of parallel architecture since many intermediate calculations are independent. On the other hand, since the different tasks entail significantly disparate computational effort, load-balancing issues emerge so that parallel efficiency does not occur naturally. Several schemes to distribute the labor among processors are discussed.
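One common family of schemes for distributing disparate-cost tasks among processors is greedy longest-processing-time-first (LPT) assignment. The sketch below is a generic static load-balancing illustration, not TOPAZ's actual scheme; task costs here would correspond to estimated per-transition computational effort.

```python
import heapq

def lpt_assign(task_costs, nprocs):
    """Greedy LPT: hand each task, heaviest first, to the currently
    least-loaded processor. Returns {task_id: processor_id}."""
    heap = [(0.0, p) for p in range(nprocs)]       # (current load, proc id)
    heapq.heapify(heap)
    assign = {}
    for tid, cost in sorted(enumerate(task_costs), key=lambda t: -t[1]):
        load, p = heapq.heappop(heap)              # least-loaded processor
        assign[tid] = p
        heapq.heappush(heap, (load + cost, p))
    return assign
```

Because the heaviest tasks are placed first, no single processor ends up holding several of them, which is exactly the situation where parallel efficiency would otherwise degrade.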
Christen, Patrik; Schulte, Friederike A; Zwahlen, Alexander; van Rietbergen, Bert; Boutroy, Stephanie; Melton, L Joseph; Amin, Shreyasee; Khosla, Sundeep; Goldhahn, Jörg; Müller, Ralph
2016-01-01
A bone loading estimation algorithm was previously developed that provides in vivo loading conditions required for in vivo bone remodelling simulations. The algorithm derives a bone's loading history from its microstructure as assessed by high-resolution (HR) computed tomography (CT). This reverse engineering approach showed accurate and realistic results based on micro-CT and HR-peripheral quantitative CT images. However, its voxel size dependency, reproducibility and sensitivity still need to be investigated, which is the purpose of this study. Voxel size dependency was tested on cadaveric distal radii with micro-CT images scanned at 25 µm and downscaled to 50, 61, 75, 82, 100, 125 and 150 µm. Reproducibility was calculated with repeated in vitro as well as in vivo HR-pQCT measurements at 82 µm. Sensitivity was defined using HR-pQCT images from women with fracture versus non-fracture, and low versus high bone volume fraction, expecting similar and different loading histories, respectively. Our results indicate that the algorithm is voxel size independent within an average (maximum) error of 8.2% (32.9%) at 61 µm, but that the dependency increases considerably at voxel sizes bigger than 82 µm. In vitro and in vivo reproducibility are up to 4.5% and 10.2%, respectively, which is comparable to other in vitro studies and slightly higher than in other in vivo studies. Subjects with different bone volume fraction were clearly distinguished but not subjects with and without fracture. This is in agreement with bone adapting to customary loading but not to fall loads. We conclude that the in vivo bone loading estimation algorithm provides reproducible, sensitive and fairly voxel size independent results at up to 82 µm, but that smaller voxel sizes would be advantageous.
Drill string splined resilient tubular telescopic joint for balanced load drilling of deep holes
Garrett, W.R.
1981-08-04
A drill string splined resilient tubular telescopic joint for balanced load deep well drilling comprises a double acting damper having a very low spring rate upon both extension and contraction from the zero deflection condition. Preferably the spring means itself is a double acting compression spring means wherein the same spring means is compressed whether the joint is extended or contracted. The damper has a like low spring rate over a considerable range of deflection, both upon extension and contraction of the joint, but a gradually then rapidly increased spring rate upon approaching the travel limits in each direction. Stacks of spring rings are employed for the spring means, the rings being either shaped elastomer-metal sandwiches or, preferably, roller belleville springs. The spline and spring means are disposed in an annular chamber formed by mandrel and barrel members constituting the telescopic joint. The spring rings make only such line contact with one of the telescoping members as is required for guidance therefrom, and no contact with the other member. The chamber containing the spring means, and also containing the spline means, is filled with lubricant, the chamber being sealed with a pressure seal at its lower end and an inverted floating seal at its upper end. Magnetic and electrical means are provided to check for the presence and condition of the lubricant. To increase load capacity the spring means is made of a number of components acting in parallel.
NASA Astrophysics Data System (ADS)
Alfredsen, K. T.; Killingtveit, A.
2011-12-01
About 99% of the total energy production in Norway comes from hydropower, and the total production of about 120 TWh makes Norway Europe's largest hydropower producer. Most hydropower systems in Norway are high-head plants with mountain storage reservoirs and tunnels transporting water from the reservoirs to the power plants. In total, Norwegian reservoirs contribute around 50% of the total energy storage capacity in Europe. Current strategies to reduce greenhouse-gas emissions from energy production involve an increased focus on renewable energy sources, e.g. the European Union's 20-20-20 goal, under which renewable sources should supply 20% of total energy production by 2020. To meet this goal, new renewable energy installations must be developed on a large scale in the coming years, with wind power the main focus for new developments. Hydropower can contribute directly to increased renewable energy through new development or extensions to existing systems, but perhaps even more important is the potential to use hydropower systems with storage for load balancing in a system with a growing share of non-storable renewables. Even if new storage technologies are under development, hydro storage is the only technology available on a large scale and the most economically feasible alternative. In this respect the Norwegian system has high potential, both through direct use of existing reservoirs and through increased development of pumped-storage plants that use surplus wind energy to pump water and then generate during periods with low wind input. Through cables to Europe, Norwegian hydropower could also provide balancing power for the North European market. Increased peaking and more variable operation of the current hydropower system will present a number of technical and environmental challenges that need to be identified and mitigated. A more variable production will lead to fluctuating flow in receiving rivers and reservoirs, and it will also
Food composition and acid-base balance: alimentary alkali depletion and acid load in herbivores.
Kiwull-Schöne, Heidrun; Kiwull, Peter; Manz, Friedrich; Kalhoff, Hermann
2008-02-01
Alkali-enriched diets are recommended for humans to diminish the net acid load of their usual diet. In contrast, herbivores have to deal with a high dietary alkali impact on acid-base balance. Here we explore the role of nutritional alkali in experimentally induced chronic metabolic acidosis. Data were collected from healthy male adult rabbits kept in metabolism cages to obtain 24-h urine and arterial blood samples. Randomized groups consumed rabbit diets ad libitum, providing sufficient energy but variable alkali load. One subgroup (n = 10) received high-alkali food and approximately 15 mEq/kg ammonium chloride (NH4Cl) with its drinking water for 5 d. Another group (n = 14) was fed low-alkali food for 5 d and given approximately 4 mEq/kg NH4Cl daily for the last 2 d. The wide range of alimentary acid-base load was significantly reflected by renal base excretion, but normal acid-base conditions were maintained in the arterial blood. In rabbits fed a high-alkali diet, the excreted alkaline urine (pH(u) > 8.0) typically contained a large amount of precipitated carbonate, whereas in rabbits fed a low-alkali diet, both pH(u) and precipitate decreased considerably. During high-alkali feeding, application of NH4Cl likewise decreased pH(u), but arterial pH was still maintained with no indication of metabolic acidosis. During low-alkali feeding, a comparably small amount of added NH4Cl further lowered pH(u) and was accompanied by a significant systemic metabolic acidosis. We conclude that exhausted renal base-saving function by dietary alkali depletion is a prerequisite for growing susceptibility to NH4Cl-induced chronic metabolic acidosis in the herbivore rabbit.
Meta-heuristic algorithm to solve two-sided assembly line balancing problems
NASA Astrophysics Data System (ADS)
Wirawan, A. D.; Maruf, A.
2016-02-01
A two-sided assembly line is a set of sequential workstations where task operations can be performed on two sides of the line. This type of line is commonly used for the assembly of large-sized products such as cars, buses, and trucks. This paper proposes a decoding algorithm with Teaching-Learning Based Optimization (TLBO), a recently developed nature-inspired search method, to solve the two-sided assembly line balancing problem (TALBP). The algorithm aims to minimize the number of mated workstations for a given cycle time without violating the synchronization constraints. The correlation between the input parameters and the emergence point of the objective function value is tested using scenarios generated by design of experiments. A two-sided assembly line operated by a multinational manufacturing company in Indonesia is considered as the object of this paper. The result of the proposed algorithm shows a reduction in workstations and indicates a negative correlation between the emergence point of the objective function value and the size of the population used.
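The TLBO mechanics — a teacher phase that pulls learners toward the best solution and a learner phase where random pairs learn from each other — can be sketched in continuous form as below. This is a generic TLBO, not the paper's decoding scheme for TALBP; the test function and parameters are assumptions.

```python
import random

def tlbo_minimize(f, dim, lo, hi, pop=20, iters=200, seed=3):
    """Bare-bones continuous TLBO: teacher phase moves each learner toward
    the best solution and away from the class mean; learner phase moves it
    toward (or away from) a random partner. Greedy acceptance throughout."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    F = [f(x) for x in X]
    clip = lambda v: min(hi, max(lo, v))
    for _ in range(iters):
        best = X[min(range(pop), key=F.__getitem__)]
        mean = [sum(x[d] for x in X) / pop for d in range(dim)]
        for i in range(pop):
            tf = rng.choice((1, 2))                  # teaching factor
            cand = [clip(X[i][d] + rng.random() * (best[d] - tf * mean[d]))
                    for d in range(dim)]
            fc = f(cand)
            if fc < F[i]:                            # teacher phase, greedy
                X[i], F[i] = cand, fc
            j = rng.randrange(pop)                   # learner phase: partner
            if j != i:
                sgn = 1 if F[j] < F[i] else -1
                cand = [clip(X[i][d] + sgn * rng.random() * (X[j][d] - X[i][d]))
                        for d in range(dim)]
                fc = f(cand)
                if fc < F[i]:
                    X[i], F[i] = cand, fc
    b = min(range(pop), key=F.__getitem__)
    return X[b], F[b]
```

For TALBP, a decoding step would map each continuous vector to a task-to-station assignment and evaluate the number of mated workstations, but that mapping is specific to the paper.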
Gong, Li-gang; Yang, Wen-lun
2014-01-01
Unmanned combat aerial vehicles (UCAVs) have been of great interest to military organizations throughout the world due to their outstanding capabilities to operate in dangerous or hazardous environments. UCAV path planning aims to obtain an optimal flight route with the threats and constraints in the combat field well considered. In this work, a novel artificial bee colony (ABC) algorithm improved by a balance-evolution strategy (BES) is applied in this optimization scheme. In this new algorithm, convergence information during the iteration is fully utilized to manipulate the exploration/exploitation accuracy and to pursue a balance between local exploitation and global exploration capabilities. Simulation results confirm that BE-ABC algorithm is more competent for the UCAV path planning scheme than the conventional ABC algorithm and two other state-of-the-art modified ABC algorithms. PMID:24790555
NASA Astrophysics Data System (ADS)
Becciani, U.; Ansaloni, R.; Antonuccio-Delogu, V.; Erbacci, G.; Gambera, M.; Pagliaro, A.
1997-10-01
N-body algorithms for long-range unscreened interactions like gravity belong to a class of highly irregular problems whose optimal solution is a challenging task for present-day massively parallel computers. In this paper we describe a strategy for optimal memory and work distribution, which we have applied to our parallel implementation of the Barnes & Hut (1986) recursive tree scheme on a Cray T3D using the CRAFT programming environment. We have performed a series of tests to find an optimal data distribution in the T3D memory, and to identify a strategy for dynamic load balance that gives good performance when running large simulations (more than 10 million particles). The test results show that the step duration depends on two main factors: data locality and T3D network contention. By increasing data locality we are able to minimize the step duration if the closest bodies (direct interactions) tend to be located in the same PE's local memory (contiguous block subdivision, high granularity), whereas the tree properties have a fine-grain distribution. In very large simulations, network contention gives rise to an unbalanced load. To remedy this we have devised an automatic work redistribution mechanism which provides good dynamic load balance at the price of insignificant overhead.
Physics Based Model for Cryogenic Chilldown and Loading. Part I: Algorithm
NASA Technical Reports Server (NTRS)
Luchinsky, Dmitry G.; Smelyanskiy, Vadim N.; Brown, Barbara
2014-01-01
We report progress in the development of a physics-based model for cryogenic chilldown and loading. Chilldown and loading are modeled as fully separated non-equilibrium two-phase flow of cryogenic fluid thermally coupled to the pipe walls. The solution follows closely the nearly-implicit and semi-implicit algorithms developed by Idaho National Laboratory for autonomous control of thermal-hydraulic systems. Special attention is paid to the treatment of instabilities. The model is applied to the analysis of chilldown in the rapid loading system developed at NASA Kennedy Space Center. A nontrivial characteristic feature of the analyzed chilldown regime is its active control by dump valves. The numerical predictions are in reasonable agreement with the experimental time traces. The obtained results pave the way to the development of autonomous loading operations on the ground and in space.
Fawley, William M.
2002-03-25
We discuss the underlying reasoning behind and the details of the numerical algorithm used in the GINGER free-electron laser (FEL) simulation code to load the initial shot-noise microbunching on the electron beam. In particular, we point out that there are some additional subtleties which must be followed for multi-dimensional codes that are not necessary in one-dimensional formulations. Moreover, requiring that the higher harmonics of the microbunching also be properly initialized with the correct statistics leads to additional complexities. We present some numerical results, including the predicted incoherent spontaneous emission, as tests of the shot-noise algorithm's correctness.
A modified surface energy balance algorithm for land (M-SEBAL) based on a trapezoidal framework
NASA Astrophysics Data System (ADS)
Long, Di; Singh, Vijay P.
2012-02-01
The surface energy balance algorithm for land (SEBAL) has been designed and widely used (and misused) worldwide over the past 15 years to estimate evapotranspiration across varying spatial and temporal scales using satellite remote sensing. It is, however, beset by the visual identification of a hot and a cold pixel to determine the temperature difference (dT) between the surface and the lower atmosphere, which is assumed to be linearly correlated with surface radiative temperature (Trad) throughout a scene. To reduce ambiguity in flux estimation by SEBAL due to the subjectivity of extreme-pixel selection, this study first demonstrates that SEBAL rests on a rectangular framework of the contextual relationship between vegetation fraction (fc) and Trad, which can distort the spatial distribution of heat flux retrievals to varying degrees. The end members of SEBAL were replaced by a trapezoidal framework of the fc-Trad space in the modified surface energy balance algorithm for land (M-SEBAL). The warm edge of the trapezoidal framework is determined by analytically deriving the temperatures of the bare surface with the largest water stress and the fully vegetated surface with the largest water stress, implicit in both the energy balance and radiation budget equations. Areally averaged air temperature (Ta) across a study site is taken to be the cold edge of the trapezoidal framework. The coefficients of the linear relationship between dT and Trad can vary with fc but are assumed essentially invariant for the same fc, or within the same fc class, in M-SEBAL. SEBAL and M-SEBAL are applied to the Soil Moisture-Atmosphere Coupling Experiment (SMACEX) site in central Iowa, U.S. Results show that M-SEBAL is capable of reproducing latent heat flux with an overall root-mean-square difference of 41.1 W m-2 and a mean absolute percentage difference of 8.9% with reference to eddy-covariance tower-based measurements for three Landsat Thematic Mapper/Enhanced Thematic Mapper Plus imagery acquisition dates.
A balancing act of the brain: activations and deactivations driven by cognitive load
Arsalidou, Marie; Pascual-Leone, Juan; Johnson, Janice; Morris, Drew; Taylor, Margot J
2013-01-01
The majority of neuroimaging studies focus on brain activity during performance of cognitive tasks; however, some studies focus on brain areas that activate in the absence of a task. Despite the surge of research comparing these contrasted areas of brain function, their interrelation is not well understood. We systematically manipulated cognitive load in a working memory task to examine concurrently the relation between activity elicited by the task versus activity during control conditions. We presented adults with six levels of task demand, and compared those with three conditions without a task. Using whole-brain analysis, we found positive linear relations between cortical activity and task difficulty in areas including middle frontal gyrus and dorsal cingulate; negative linear relations were found in medial frontal gyrus and posterior cingulate. These findings demonstrated balancing of activation patterns between two mental processes, which were both modulated by task difficulty. Frontal areas followed a graded pattern more closely than other regions. These data also showed that working memory has limited capacity in adults: an upper bound of seven items and a lower bound of four items. Overall, working memory and default-mode processes, when studied concurrently, reveal mutually competing activation patterns. PMID:23785659
Drill string splined resilient tubular telescopic joint for balanced load drilling of deep holes
Garrett, W.R.
1984-03-06
A drill string splined resilient tubular telescopic joint for balanced load deep well drilling comprises a double-acting damper having a very low spring rate upon both extension and contraction from the zero deflection condition. Stacks of spring rings are employed for the spring means, the rings being either shaped elastomer-metal sandwiches or, preferably, roller Belleville springs. The spline and spring means are disposed in an annular chamber formed by mandrel and barrel members constituting the telescopic joint. The chamber containing the spring means, and also containing the spline means, is filled with lubricant, the chamber being sealed with a pressure seal at its lower end and an inverted floating seal at its upper end. A prototype includes a bellows seal instead of the floating seal at the upper end of the tool, and a bellows in the side of the lubricant chamber provides volume compensation. A second lubricant chamber is provided below the pressure seal, the lower end of the second chamber being closed by a bellows seal and a further bellows in the side of the second chamber providing volume compensation. Modifications provide hydraulic jars.
Optimization of laminated stacking sequence for buckling load maximization by genetic algorithm
NASA Technical Reports Server (NTRS)
Le Riche, Rodolphe; Haftka, Raphael T.
1992-01-01
The use of a genetic algorithm to optimize the stacking sequence of a composite laminate for buckling load maximization is studied. Various genetic parameters including the population size, the probability of mutation, and the probability of crossover are optimized by numerical experiments. A new genetic operator - permutation - is proposed and shown to be effective in reducing the cost of the genetic search. Results are obtained for a graphite-epoxy plate, first when only the buckling load is considered, and then when constraints on ply contiguity and strain failure are added. The influence on the genetic search of the penalty parameter enforcing the contiguity constraint is studied. The advantage of the genetic algorithm in producing several near-optimal designs is discussed.
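The genetic search described above, including the proposed permutation operator, can be sketched as follows (the fitness function is a made-up surrogate for the buckling load, not the paper's laminate analysis, and all parameter values are illustrative):

```python
import random

PLY_ANGLES = [0, 45, 90]  # candidate ply orientations (degrees)

def fitness(stacking):
    # Hypothetical surrogate for the buckling load: plies farther from the
    # mid-plane contribute more to bending stiffness, so weight by position.
    return sum((i + 1) * (1.0 if a == 45 else 0.5)
               for i, a in enumerate(stacking))

def crossover(p1, p2):
    # One-point crossover of two stacking sequences.
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:]

def mutate(seq, p_mut=0.1):
    # Classic mutation: replace a ply angle with a random one.
    return [random.choice(PLY_ANGLES) if random.random() < p_mut else a
            for a in seq]

def permute(seq):
    # Permutation operator: swap two plies, preserving the ply counts.
    i, j = random.sample(range(len(seq)), 2)
    seq = seq[:]
    seq[i], seq[j] = seq[j], seq[i]
    return seq

def ga(n_plies=8, pop_size=20, generations=50, seed=0):
    random.seed(seed)
    pop = [[random.choice(PLY_ANGLES) for _ in range(n_plies)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # keep the best half
        children = []
        while len(elite) + len(children) < pop_size:
            child = permute(mutate(crossover(*random.sample(elite, 2))))
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = ga()
print(best, fitness(best))
```

The permutation operator is cheap because it reshuffles existing plies rather than introducing new angles, which is why it can reduce search cost when ply counts matter.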
The effect of load imbalances on the performance of Monte Carlo algorithms in LWR analysis
Siegel, A.R.; Smith, K.; Romano, P.K.; Forget, B.; Felker, K.
2013-02-15
A model is developed to predict the impact of particle load imbalances on the performance of domain-decomposed Monte Carlo neutron transport algorithms. Expressions for upper bound performance “penalties” are derived in terms of simple machine characteristics, material characterizations and initial particle distributions. The hope is that these relations can be used to evaluate tradeoffs among different memory decomposition strategies in next generation Monte Carlo codes, and perhaps as a metric for triggering particle redistribution in production codes.
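The basic intuition behind such a penalty model, that a synchronized cycle runs at the pace of the most heavily loaded domain, can be captured by a one-line ratio (an illustrative model, not the paper's derived expressions):

```python
def imbalance_penalty(particles_per_domain):
    """Upper-bound slowdown of a domain-decomposed Monte Carlo cycle:
    every domain must wait for the most heavily loaded one, so the
    penalty is the ratio of the maximum to the mean particle count."""
    mean = sum(particles_per_domain) / len(particles_per_domain)
    return max(particles_per_domain) / mean

# Perfectly balanced domains incur no penalty...
print(imbalance_penalty([1000, 1000, 1000, 1000]))  # -> 1.0
# ...while a hot spot doubles the cycle time.
print(imbalance_penalty([2000, 1000, 500, 500]))  # -> 2.0
```

A metric of this kind could serve as the trigger the authors mention for particle redistribution in production codes.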
NASA Astrophysics Data System (ADS)
Bunnoon, Pituk; Chalermyanont, Kusumal; Limsakul, Chusak
2010-02-01
This paper proposes discrete wavelet transform and neural network algorithms to obtain the monthly peak load demand in mid-term load forecasting. The Daubechies-2 (db2) mother wavelet is employed to decompose the original signal into high-pass and low-pass filtered components before a feed-forward back-propagation neural network is used to determine the forecasting results. The historical data records for 1997-2007 of the Electricity Generating Authority of Thailand (EGAT) are used as reference. In this study, historical information on peak load demand (MW), mean temperature (Tmean), consumer price index (CPI), and industrial index (economic: IDI) is used as the feature inputs of the network. The experimental results show that the Mean Absolute Percentage Error (MAPE) is approximately 4.32%. These forecasting results can be used for fuel planning and unit commitment of the power system in the future.
NASA Astrophysics Data System (ADS)
Pitakaso, Rapeepan; Sethanan, Kanchana
2016-02-01
This article proposes the differential evolution algorithm (DE) and a modified differential evolution algorithm (DE-C) to solve the simple assembly line balancing problem type 1 (SALBP-1) and SALBP-1 with a maximum number of machine types per workstation (SALBP-1M). The proposed algorithms are tested and compared with existing effective heuristics on various sets of test instances found in the literature. The computational results show that the proposed heuristics are among the best methods compared with the other approaches.
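The core DE operators (mutation, binomial crossover, greedy selection) that both proposed algorithms build on can be sketched on a continuous test function; the paper's discrete encoding for SALBP-1 differs and is not reproduced here:

```python
import random

def de(f, bounds, pop_size=20, F=0.8, CR=0.9, generations=100, seed=1):
    """Classic DE/rand/1/bin minimizing f over box constraints."""
    random.seed(seed)
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = random.sample(
                [p for j, p in enumerate(pop) if j != i], 3)
            # Mutation: donor vector v = a + F * (b - c)
            v = [a[d] + F * (b[d] - c[d]) for d in range(dim)]
            # Binomial crossover (at least one donor component survives)
            jr = random.randrange(dim)
            trial = [v[d] if (random.random() < CR or d == jr) else pop[i][d]
                     for d in range(dim)]
            trial = [min(max(x, lo), hi)
                     for x, (lo, hi) in zip(trial, bounds)]
            # Greedy selection: keep the better of trial and target
            if f(trial) <= f(pop[i]):
                pop[i] = trial
    return min(pop, key=f)

sphere = lambda x: sum(v * v for v in x)
best = de(sphere, [(-5, 5)] * 3)
print(best, sphere(best))
```

For SALBP-1 the continuous vectors would additionally need a decoding step that maps them to feasible task-to-workstation assignments.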
Cain, Stephen M; McGinnis, Ryan S; Davidson, Steven P; Vitali, Rachel V; Perkins, Noel C; McLean, Scott G
2016-01-01
We utilize an array of wireless inertial measurement units (IMUs) to measure the movements of subjects (n=30) traversing an outdoor balance beam (zigzag and sloping) as quickly as possible both with and without load (20.5kg). Our objectives are: (1) to use IMU array data to calculate metrics that quantify performance (speed and stability) and (2) to investigate the effects of load on performance. We hypothesize that added load significantly decreases subject speed yet results in increased stability of subject movements. We propose and evaluate five performance metrics: (1) time to cross beam (less time=more speed), (2) percentage of total time spent in double support (more double support time=more stable), (3) stride duration (longer stride duration=more stable), (4) ratio of sacrum M-L to A-P acceleration (lower ratio=less lateral balance corrections=more stable), and (5) M-L torso range of motion (smaller range of motion=less balance corrections=more stable). We find that the total time to cross the beam increases with load (t=4.85, p<0.001). Stability metrics also change significantly with load, all indicating increased stability. In particular, double support time increases (t=6.04, p<0.001), stride duration increases (t=3.436, p=0.002), the ratio of sacrum acceleration RMS decreases (t=-5.56, p<0.001), and the M-L torso lean range of motion decreases (t=-2.82, p=0.009). Overall, the IMU array successfully measures subject movement and gait parameters that reveal the trade-off between speed and stability in this highly dynamic balance task. PMID:26669954
NASA Astrophysics Data System (ADS)
Esin, S. B.; Trifonov, N. N.; Sukhorukov, Yu. G.; Yurchenko, A. Yu.; Grigor'eva, E. B.; Snegin, I. P.; Zhivykh, D. A.; Medvedkin, A. V.; Ryabich, V. A.
2015-09-01
More than 30 power units of thermal power stations based on the nondeaerating heat balance diagram successfully operate in the former Soviet Union. Most of them are power units with a power of 300 MW, equipped with HTGZ and LMZ turbines. They operate according to a variable electric load curve characterized by deep reductions during night minimums. Additional extension of the power unit adjustment range makes it possible to follow the dispatch load curve and obtain profit for the electric power plant. The objective of this research is to carry out computational and experimental studies of the operating regimes of the regeneration system of steam-turbine plants within the extended adjustment range and under conditions when the constraints on the regeneration system and its equipment are removed. Constraints concerning the heat balance diagram that reduce power unit efficiency when extending the adjustment range have been considered. Test results are presented for the nondeaerating heat balance diagram with the HTGZ turbine. Turbine-driven and electrically driven feed pump operation was studied at a power unit load of 120-300 MW. The reliability of feed pump operation is confirmed by a stable vibratory condition and the absence of cavitation noise and vibration at the frequency that characterizes the cavitation condition, as well as by maintenance of the oil temperature after bearings within normal limits. The cavitation performance of the pumps in the studied range of their operation has been determined. Technical solutions are proposed for providing profitable and stable operation of regeneration systems when extending the range of adjustment of the power unit load. A nondeaerating scheme of high-pressure preheater (HPP) condensate discharge to the mixer has been developed and studied on an operating power unit fitted with a deaeratorless thermal circuit for removing the HPP heating steam condensate to the mixer
NASA Astrophysics Data System (ADS)
Sohrabi, Foad; Davidson, Timothy N.
2016-06-01
We consider the problem of power allocation for the single-cell multi-user (MU) multiple-input single-output (MISO) downlink with quality-of-service (QoS) constraints. The base station acquires an estimate of the channels and, for a given beamforming structure, designs the power allocation so as to minimize the total transmission power required to ensure that target signal-to-interference-and-noise ratios at the receivers are met, subject to a specified outage probability. We consider scenarios in which the errors in the base station's channel estimates can be modelled as being zero-mean and Gaussian. Such a model is particularly suitable for time division duplex (TDD) systems with quasi-static channels, in which the base station estimates the channel during the uplink phase. Under that model, we employ a precise deterministic characterization of the outage probability to transform the chance-constrained formulation to a deterministic one. Although that deterministic formulation is not convex, we develop a coordinate descent algorithm that can be shown to converge to a globally optimal solution when the starting point is feasible. Insight into the structure of the deterministic formulation yields approximations that result in coordinate update algorithms with good performance and significantly lower computational cost. The proposed algorithms provide better performance than existing robust power loading algorithms that are based on tractable conservative approximations, and can even provide better performance than robust precoding algorithms based on such approximations.
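A minimal deterministic analogue of such coordinate-wise power updates, each user's power set to exactly meet its SINR target given the others' current powers, can be sketched as follows (the gains, targets, and noise level are made up, and the paper's outage constraints are omitted):

```python
def power_allocation(G, targets, noise, iters=200):
    """Coordinate-wise power updates for SINR targets.
    G[k][j] = effective channel gain from user j's beam to receiver k.
    Each sweep sets p[k] so user k exactly meets its target SINR given
    the current interference; under feasible targets this contraction
    converges to the minimum-power solution."""
    K = len(targets)
    p = [1.0] * K
    for _ in range(iters):
        for k in range(K):
            interference = noise + sum(G[k][j] * p[j]
                                       for j in range(K) if j != k)
            p[k] = targets[k] * interference / G[k][k]
    return p

# Hypothetical 2-user example with weak cross-interference:
G = [[1.0, 0.1],
     [0.2, 1.0]]
p = power_allocation(G, targets=[1.0, 1.0], noise=0.1)
# Verify user 0 meets its SINR target with the returned powers:
sinr0 = G[0][0] * p[0] / (0.1 + G[0][1] * p[1])
print(round(sinr0, 6))  # -> 1.0
```

The chance-constrained formulation in the paper replaces the fixed gains here with a deterministic characterization of the outage probability under Gaussian channel-estimate errors.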
Simulation of Propellant Loading System Senior Design Implement in Computer Algorithm
NASA Technical Reports Server (NTRS)
Bandyopadhyay, Alak
2010-01-01
Propellant loading from the storage tank to the external tank is one of the most important and time-consuming pre-launch ground operations for the launch vehicle. The propellant loading system is a complex integrated system involving many physical components, such as the storage tank filled with cryogenic fluid at a very low temperature, the long pipeline connecting the storage tank with the external tank, the external tank along with the flare stack, and vent systems for releasing the excess fuel. Some of the parameters most useful for design purposes are the predicted pre-chill time, loading time, amount of fuel lost, and maximum pressure rise. The physics involved in the mathematical modeling is quite complex because the process is unsteady, there is a phase change as some of the fuel changes from the liquid to the gas state, and there is conjugate heat transfer in the pipe walls as well as between the solid and fluid regions. The simulation is also very tedious and time consuming. Overall, this is a complex system, and the objective of the work is the students' involvement in the parametric study and optimization of numerical modeling toward the design of such a system. The students first have to become familiar with and understand the physical process, the related mathematics, and the numerical algorithm. The work involves exploring (i) improved algorithms to make the transient simulation computationally efficient (reduced CPU time) and (ii) a parametric study to evaluate design parameters by changing the operational conditions
Soewono, C. N.; Takaki, N.
2012-07-01
In this work, a genetic algorithm was proposed to solve the fuel loading pattern optimization problem in a thorium-fueled heavy water reactor. The objectives of the optimization were to maximize the conversion ratio and minimize the power peaking factor. These objectives were simultaneously optimized using a non-dominated Pareto-based population ranking method. Members of the non-dominated population were assigned selection probabilities based on their rankings, in a manner similar to Baker's single-criterion ranking selection procedure. A selected non-dominated member was bred through simple mutation or a one-point crossover process to produce a new member. The genetic algorithm program was developed in FORTRAN 90, while the neutronic calculation and analysis were done with the COREBN code, a module for core burn-up calculation in SRAC. (authors)
NASA Astrophysics Data System (ADS)
Tang, Qiuhua; Li, Zixiang; Zhang, Liping; Floudas, C. A.; Cao, Xiaojun
2015-09-01
Due to the NP-hardness of the two-sided assembly line balancing (TALB) problem, the multiple constraints existing in real applications are less studied, especially when one task is involved with several constraints. In this paper, an effective hybrid algorithm is proposed to address the TALB problem with multiple constraints (TALB-MC). Considering the discrete attribute of TALB-MC and the continuous attribute of the standard teaching-learning-based optimization (TLBO) algorithm, the random-keys method is employed in the task permutation representation to bridge the gap between them. Subsequently, a special mechanism for handling multiple constraints is developed. In this mechanism, the direction constraint of each task is ensured by direction checks and adjustment. The zoning constraints and the synchronism constraints are satisfied by teasing out the hidden correlations among constraints. The positional constraint is allowed to be violated to some extent in decoding and is punished in the cost function. Finally, with the TLBO seeking the global optimum, a variable neighborhood search (VNS) is further hybridized to extend the local search space. The experimental results show that the proposed hybrid algorithm outperforms the late acceptance hill-climbing algorithm (LAHC) for TALB-MC in most cases, especially for large-size problems with multiple constraints, and demonstrates a good balance between exploration and exploitation. This research proposes an effective and efficient algorithm for solving the TALB-MC problem by hybridizing the TLBO and VNS.
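The random-keys representation mentioned above is simple to illustrate: a continuous position vector, as manipulated by TLBO, is decoded into a task permutation by sorting task indices by key value (a sketch of the representation only, not of the paper's constraint-handling mechanism):

```python
def decode_random_keys(keys):
    """Random-keys decoding: turn a vector of continuous values into a
    task permutation by sorting task indices by their key values. Any
    continuous-space move by the optimizer thus always decodes to a
    valid permutation."""
    return sorted(range(len(keys)), key=lambda i: keys[i])

# Continuous position vector -> discrete task order:
print(decode_random_keys([0.7, 0.1, 0.9, 0.4]))  # -> [1, 3, 0, 2]
```

This is what lets a continuous optimizer like TLBO search a discrete permutation space without repair operators.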
DESIGN NOTE: A low interaction two-axis wind tunnel force balance designed for large off-axis loads
NASA Astrophysics Data System (ADS)
Ostafichuk, Peter M.; Green, Sheldon I.
2002-10-01
A novel two-axis wind tunnel force balance using air bushings for off-axis load compensation has been developed. The design offers a compact, robust, and versatile option for precisely measuring horizontal force components irrespective of vertical and moment loads. Two independent stages of cylindrical bushings support large moments and vertical force; there is low interaction due to the minimal friction along the horizontal measurement axes. The current design measures drag and side forces up to 70 N and can safely operate in the presence of vertical loads as large as 2200 N and moment loads up to 425, 750, and 425 N m in roll, pitch, and yaw, respectively. Eleven drag axis calibration trials were conducted with a variety of applied vertical forces and pitching moments. The individual linear calibration slopes for the trials agreed to within 0.18% and the largest residual from all calibrations was 0.38% of full scale. As the residuals were found to obey a normal distribution, with 99% certainty the expected drag resolution of the device is better than 0.30% of full scale, independent of off-axis loads.
Toole, Tonya; Maitland, Charles G; Warren, Earl; Hubmann, Monica F; Panton, Lynn
2005-01-01
Our study aim was to determine whether assisted weight bearing or additional weight bearing is more beneficial for improving function and increasing stability in gait and dynamic balance in patients with Parkinsonism, compared with matched controls (treadmill alone). Twenty-three men and women (M +/- SD = 74.5 +/- 9.7 yrs; males = 19, females = 4) with Parkinsonism took part in the study. Participants were staged at 1-7 (M +/- SD = 3.96 +/- 1.07) using the Hoehn & Yahr scale. All participants were tested before the intervention, after it (within one week), and four weeks later on: 1) dynamic posturography, 2) the Berg Balance scale, 3) the Unified Parkinson's Disease Rating Scale (UPDRS), 4) biomechanical assessment of strength and range of motion, and 5) the Gaitrite force-sensitive gait mat. Group 1 (treadmill control group) received treadmill training with no loading or unloading. Group 2 (unweighted group) walked on the treadmill assisted by the Biodex Unweighing System at a 25% body weight reduction. Group 3 (weighted group) ambulated wearing a weighted scuba-diving belt, which increased their normal body weight by 5%. All subjects walked on the treadmill for 20 minutes per day, 3 days per week, for 6 weeks. Improvements in dynamic posturography, falls during balance testing, Berg Balance, UPDRS (Motor Exam), and gait for all groups led us to believe that neuromuscular regulation can be facilitated in all Parkinson's individuals no matter which treadmill intervention is employed.
Estimating sediment loads in an intra-Apennine catchment: balance between modeling and monitoring
NASA Astrophysics Data System (ADS)
Pelacani, Samanta; Cassi, Paola; Borselli, Lorenzo
2010-05-01
In this study we compare the results of a soil erosion model applied at the watershed scale with the suspended sediment measured in a stream network affected by a motorway construction. A sediment delivery model is applied at the watershed scale; the evaluation of sediment delivery is related to a connectivity flux index that describes the internal linkages between runoff and sediment sources in the upper parts of catchments and the receiving sinks. An analysis of fine suspended sediment transport and storage was conducted for a stream inlet of the Bilancino reservoir, a principal water supply of the city of Florence. The suspended sediment was collected from a section of river defined as a closed system using time-integrating suspended sediment samplers. The sediment deposited within the sampling traps was recovered after storm events and provided information on the overall contribution of the potential sediment sources. Hillslope gross erosion was assessed by a USLE-type approach. A soil survey at 1:25,000 scale and a soil database were created to calculate, for each soil unit, the erodibility coefficient K using a new algorithm (Salvador Sanchis et al. 2007). The erosivity coefficient R was obtained by applying geostatistical methods taking into account elevation and valley morphology. Furthermore, we evaluate a sediment delivery ratio (SDR) for the entire watershed. This factor is used to correct the output of the USLE-type model. The innovative approach consists in an SDR factor variable in space and time, because it is related to a flux connectivity index IC (Borselli et al. 2008) based on the distribution of land use and topographic features. The aim of this study is to understand how the model simulates the real processes at work in the watershed and subsequently to calibrate the model against the results obtained from the monitoring of suspended sediment in the streams. From first results, it appears that human activities associated with highway construction have resulted in
NASA Astrophysics Data System (ADS)
Williams, Mark R.; King, Kevin W.; Macrae, Merrin L.; Ford, William; Van Esbroeck, Chris; Brunke, Richard I.; English, Michael C.; Schiff, Sherry L.
2015-11-01
Accurate estimates of annual nutrient loads are required to evaluate trends in water quality following changes in land use or management and to calibrate and validate water quality models. While much emphasis has been placed on understanding the uncertainty of nutrient load estimates in large, naturally drained watersheds, few studies have focused on tile-drained fields and small tile-drained headwater watersheds. The objective of this study was to quantify uncertainty in annual dissolved reactive phosphorus (DRP) and nitrate-nitrogen (NO3-N) load estimates from four tile-drained fields and two small tile-drained headwater watersheds in Ohio, USA and Ontario, Canada. High temporal resolution datasets of discharge (10-30 min) and nutrient concentration (2 h to 1 d) were collected over a 1-2 year period at each site and used to calculate a reference nutrient load. Monte Carlo simulations were used to subsample the measured data to assess the effects of sample frequency, calculation algorithm, and compositing strategy on the uncertainty of load estimates. Results showed that uncertainty in annual DRP and NO3-N load estimates was influenced by both the sampling interval and the load estimation algorithm. Uncertainty in annual nutrient load estimates increased with increasing sampling interval for all of the load estimation algorithms tested. Continuous discharge measurements and linear interpolation of nutrient concentrations yielded the least amount of uncertainty, but still tended to underestimate the reference load. Compositing strategies generally improved the precision of load estimates compared to discrete grab samples; however, they often reduced the accuracy. Based on the results of this study, we recommended that nutrient concentration be measured every 13-26 h for DRP and every 2.7-17.5 d for NO3-N in tile-drained fields and small tile-drained headwater watersheds to accurately (±10%) estimate annual loads.
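The load estimation algorithm the study found least uncertain, continuous discharge combined with linear interpolation of sparse concentration samples, can be sketched as follows (all numbers are made up for illustration):

```python
def interp(t, ts, vs):
    # Linear interpolation of sparse concentration samples vs at times ts.
    for (t0, v0), (t1, v1) in zip(zip(ts, vs), zip(ts[1:], vs[1:])):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return vs[-1]

def load(times, conc_times, conc_vals, flow, dt):
    """Load estimate: continuous discharge times linearly interpolated
    concentration, summed over the record."""
    return sum(interp(t, conc_times, conc_vals) * q * dt
               for t, q in zip(times, flow))

times = list(range(24))                      # hourly discharge record
flow = [1.0] * 12 + [3.0] * 6 + [1.0] * 6    # a storm pulse
conc = [0.2 + 0.1 * (q > 1) for q in flow]   # "true" hourly concentration
reference = sum(c * q for c, q in zip(conc, flow))
# Sampling concentration only every ~6 h and interpolating:
est = load(times, [0, 6, 12, 18, 23],
           [conc[i] for i in (0, 6, 12, 18, 23)], flow, 1)
print(round(reference, 2), round(est, 2))
```

As in the study, the interpolated estimate smooths over the storm pulse and slightly underestimates the reference load; subsampling the record at different intervals (Monte Carlo style) quantifies how that error grows with sampling interval.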
NASA Astrophysics Data System (ADS)
Muraleedharan, Rajani
2011-06-01
The future of metering networks requires adapting different sensor technologies while reducing energy consumption. In this paper, a routing protocol with the ability to adapt and communicate reliably over varied IEEE standards is proposed. Due to sensors' resource constraints, such as memory, energy, and processing power, an algorithm that balances resources without compromising performance is preferred. The proposed A-PEARL protocol is tested under harsh simulated scenarios such as sensor failure and fading conditions. The inherent features of the A-PEARL protocol, such as data aggregation, fusion, and channel hopping, enable minimal resource consumption and secure communication.
Knowing Your Market: How Working Students Balance Work, Grades and Course Load.
ERIC Educational Resources Information Center
Henke, John W., Jr.; And Others
1993-01-01
A study of 128 college students found that working students use sophisticated methods for balancing their grades, courseload, and workload. It is proposed that, if colleges and universities understand how this happens, they can serve this important market segment better and benefit from more positive student perceptions of the institution. (MSE)
NASA Technical Reports Server (NTRS)
Woods, Claudia M.; Brewe, David E.
1988-01-01
A numerical solution to a theoretical model of vapor cavitation in a dynamically loaded journal bearing is developed utilizing a multigrid iteration technique. The method is compared with a noniterative approach in terms of computational time and accuracy. The computational model is based on the Elrod algorithm, a control volume approach to the Reynolds equation which mimics the Jakobsson-Floberg and Olsson cavitation theory. Besides accounting for a moving cavitation boundary and conservation of mass at the boundary, it also conserves mass within the cavitated region via a smeared mass or striated flow extending to both surfaces in the film gap. The mixed nature of the equations (parabolic in the full film zone and hyperbolic in the cavitated zone) coupled with the dynamic aspects of the problem create interesting difficulties for the present solution approach. Emphasis is placed on the methods found to eliminate solution instabilities. Excellent results are obtained for both accuracy and reduction of computational time.
NASA Astrophysics Data System (ADS)
Koloch, Grzegorz; Kaminski, Bogumil
2010-10-01
In the paper we examine a modification of the classical Vehicle Routing Problem (VRP) in which the shapes of transported cargo are accounted for. This problem, known as the three-dimensional VRP with loading constraints (3D-VRP), is appropriate when transported commodities are not perfectly divisible but have fixed and heterogeneous dimensions. Restrictions on allowable cargo positionings are also considered. These restrictions are derived from business practice and extend the baseline 3D-VRP formulation considered by Koloch and Kaminski (2010). In particular, we investigate how the additional restrictions influence the relative performance of two proposed optimization algorithms: the nested one and the joint one. The performance of both methods is compared on artificial problems and on a large-scale real-life case study.
Chen, Yousu; Huang, Zhenyu; Rice, Mark J.
2012-12-27
Contingency analysis studies are necessary to assess the impact of possible power system component failures. The results of contingency analysis are used to ensure grid reliability and, in power market operation, for the feasibility test of market solutions. Currently, these studies are performed in real time based on the current operating conditions of the grid with a pre-selected contingency list, which might result in overlooking some critical contingencies caused by variable system status. To have a complete picture of a power grid, more contingencies need to be studied to improve grid reliability. High-performance computing techniques hold the promise of being able to perform the analysis for more contingency cases within a much shorter time frame. This paper evaluates the performance of counter-based dynamic load-balancing schemes for a massive contingency analysis program on 10,000+ cores. One million N-2 contingency analysis cases with a Western Electricity Coordinating Council power grid model have been used to demonstrate the performance. Speedups of 3964 with 4096 cores and 7877 with 10,240 cores are obtained. This paper reports the performance of the load-balancing scheme with a single counter and with two counters, describes disk I/O issues, and discusses other potential techniques for further improving the performance.
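A counter-based dynamic load-balancing scheme of the kind evaluated here can be sketched with a single shared counter: each idle worker atomically grabs the next case index, so faster workers naturally process more cases (a thread-based toy; the paper's implementation runs across 10,000+ cores):

```python
import threading

def run_contingencies(n_cases, n_workers, analyze):
    """Single shared counter: a worker takes the next contingency index
    whenever it becomes idle, so work assignment adapts to per-case and
    per-worker speed variations without a precomputed schedule."""
    counter = [0]
    lock = threading.Lock()
    results = [None] * n_cases

    def worker():
        while True:
            with lock:                 # atomic fetch-and-increment
                i = counter[0]
                counter[0] += 1
            if i >= n_cases:
                return
            results[i] = analyze(i)    # the expensive per-case analysis

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# Toy "contingency analysis": flag every 10th case as a violation.
res = run_contingencies(100, 8, lambda i: i % 10 == 0)
print(sum(res))  # -> 10
```

The two-counter variant the paper studies splits the case range between two counters to reduce contention on the single shared counter.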
Zimmermann, Frauke; Schwenninger, Christoph; Nolten, Ulrich; Firmbach, Franz Peter; Elfring, Robert; Radermacher, Klaus
2012-08-01
Preservation and recovery of the mechanical leg axis as well as good rotational alignment of the prosthesis components and well-balanced ligaments are essential for the longevity of total knee arthroplasty (TKA). In the framework of the OrthoMIT project, the genALIGN system, a new navigated implantation approach based on intra-operative force-torque measurements, has been developed. With this system, optical or magnetic position tracking as well as any fixation of invasive rigid bodies are no longer necessary. For the alignment of the femoral component along the mechanical axis, a sensor-integrated instrument measures the torques resulting from the deviation between the instrument's axis and the mechanical axis under manually applied axial compression load. When both axes are coaxial, the resulting torques equal zero, and the tool axis can be fixed with respect to the bone. For ligament balancing and rotational alignment of the femoral component, the genALIGN system comprises a sensor-integrated tibial trial inlay measuring the amplitude and application points of the forces transferred between femur and tibia. In this way, the impact of ligament tensions on knee joint loads can be determined over the whole range of motion. First studies with the genALIGN system, including a comparison with an imageless navigation system, show the feasibility of the concept. PMID:22868781
NASA Astrophysics Data System (ADS)
Kumar, Vijay M.; Murthy, ANN; Chandrashekara, K.
2012-05-01
The production planning problem of a flexible manufacturing system (FMS) concerns decisions that have to be made before an FMS begins to produce parts according to a given production plan during an upcoming planning horizon. The main aspect of production planning is the machine loading problem, in which a subset of jobs to be manufactured is selected and their operations are assigned to the relevant machines. Such problems are not only combinatorial optimization problems but also NP-hard, making it difficult to obtain satisfactory solutions using traditional optimization techniques. In this paper, an attempt has been made to address the machine loading problem with the objectives of minimizing system unbalance and maximizing throughput simultaneously, while satisfying the system constraints related to available machining time and tool slots, using a meta-hybrid heuristic technique based on genetic algorithm and particle swarm optimization. The results reported in this paper demonstrate the model's efficiency and examine the performance of the system with respect to measures such as throughput and system utilization.
NASA Astrophysics Data System (ADS)
Chen, M.; Senay, G. B.; Verdin, J. P.; Rowland, J.
2014-12-01
Current regional-to-global and daily-to-annual evapotranspiration (ET) estimation mainly relies on surface energy balance (SEB) ET models or statistical empirical methods driven by remote sensing data and various meteorology databases. However, these ET models face challenging issues: large uncertainties from inevitable input errors, poorly defined parameters, and inadequate model structures. The eddy covariance measurements of water, energy, and carbon fluxes at globally available FLUXNET tower sites provide a feasible opportunity to assess ET modelling uncertainties. In this study, we focused on the uncertainty analysis of an operational simplified surface energy balance (SSEBop) algorithm for ET estimation at multiple Ameriflux tower sites with diverse land cover characteristics and climatic conditions. The input land surface temperature (LST) data of the algorithm were adopted from the 8-day composite 1-km Moderate Resolution Imaging Spectroradiometer (MODIS) land surface temperature product. The other input data were taken from the Ameriflux database. Results of the statistical analysis indicated that uncertainties or random errors from the input variables and parameters of SSEBop led to daily and seasonal ET estimates with relative errors of around 20% across multiple flux tower sites distributed across different biomes. This uncertainty of SSEBop lies in the 20-30% error range of similar SEB-based ET algorithms, such as the Surface Energy Balance System and the Surface Energy Balance Algorithm for Land. The R2 between daily and seasonal ET estimates by SSEBop and the eddy covariance measurements at multiple Ameriflux tower sites exceeds 0.7, and even reaches 0.9 for croplands, grasslands, and forests, suggesting that the systematic error or bias of SSEBop is acceptable. In summary, the uncertainty assessment verifies that SSEBop is a reliable method for wide-area ET calculation and is especially useful for detecting drought years and relative drought severity for agricultural production
Load Balanced Scalable Byzantine Agreement through Quorum Building, with Full Information
NASA Astrophysics Data System (ADS)
King, Valerie; Lonargan, Steven; Saia, Jared; Trehan, Amitabh
We address the problem of designing distributed algorithms for large scale networks that are robust to Byzantine faults. We consider a message passing, full information model: the adversary is malicious, controls a constant fraction of processors, and can view all messages in a round before sending out its own messages for that round. Furthermore, each bad processor may send an unlimited number of messages. The only constraint on the adversary is that it must choose its corrupt processors at the start, without knowledge of the processors' private random bits.
NASA Astrophysics Data System (ADS)
Tsujimoto, Takehiro; Kita, Hiroyuki; Tanaka, Eiichi; Toyama, Atsushi; Hasegawa, Jun
Recently, the basic framework of electric power systems has been changed greatly by the deregulation of the electric power industry. Independent Power Producers (IPPs) and Power Producers and Suppliers (PPSs) are increasing as new entrants in the power generation division. For power system stability, conventional electric utilities and PPSs have to maintain a balance between supply and demand. More specifically, PPSs must keep the difference between the energy supplied by their generators and the energy consumed by their demand within the fluctuation range of the 30-minute balancing rule, and general electric utilities eliminate the remaining imbalance in the whole power system. This paper investigates the effects of retail wheeling from the perspectives of both PPSs and a general electric utility. First, from the viewpoint of PPSs, it presents a control method for generators under the condition that PPSs produce electric power economically. Then, from the viewpoint of a general electric utility, it evaluates the generation capacity required for load frequency control as an effect of retail wheeling on frequency control. The validity of the proposed technique and the evaluation of its influence on frequency control are shown by simulations performed in MATLAB/Simulink.
D'Amico, Moreno; Ciarrocca, Francesca; Liscio, Grazia; Serafini, Paolo; Tommasini, Maura; Vallasciani, Massimo
2006-01-01
Following total hip joint replacement (THJR), the durability of a prosthesis is limited by wearing of the frictional surfaces and by loosening and migration of the prosthesis-cement-bone system. A review of the literature shows that biomechanical studies have focused mainly, or only, on hip functional state, while none has addressed leg length discrepancy (LLD), posture unbalancing, or spine-related problems after THJR. Yet these could be critical elements for surgical and rehabilitation success, given the possible induction of asymmetric loading patterns. This study presents results obtained with a recently proposed methodology for measuring 3D subject posture balance and spine morphology, and evaluates its usefulness in individual therapy tuning and follow-up. Each subject's 3D posture was measured by means of a 3D opto-electronic device, a force platform, and baropodography. Ninety subjects after THJR were included in this study and evaluated at two different epochs: 3 weeks after surgical intervention and after 3 months. 77/90 patients presented LLD, pelvic obliquity, and posture unbalancing. More than 90% of this group showed an overall postural re-balancing induced by the use of a simple underfoot wedge. 70/77 patients needed the wedge under the healthy side, showing that the surgical intervention produced a leg lengthening. To date, 60/90 patients (52 with LLD) have returned for follow-up; patients who wore the suggested wedge (63.4%) presented an improvement in all the considered quantitative parameters. Patients who wore a shorter-than-suggested wedge (23.1%), or who did not wear the suggested wedge (13.5%), presented a moderate or a significant worsening of their postural balance, respectively.
Effects of nutrient loading on the carbon balance of coastal wetland sediments
Morris, J.T.; Bradley, P.M.
1999-01-01
Results of a 12-yr study in an oligotrophic South Carolina salt marsh demonstrate that soil respiration increased by 795 g C m-2 yr-1 and that carbon inventories decreased in sediments fertilized with nitrogen and phosphorus. Fertilized plots became net sources of carbon to the atmosphere, and sediment respiration continues in these plots at an accelerated pace. After 12 yr of treatment, soil macroorganic matter in the top 5 cm of sediment was 475 g C m-2 lower in fertilized plots than in controls, which is equivalent to a constant loss rate of 40 g C m-2 yr-1. It is not known whether soil carbon in fertilized plots has reached a new equilibrium or continues to decline. The increase in soil respiration in the fertilized plots was far greater than the loss of sediment organic matter, which indicates that the increase in soil respiration was largely due to an increase in primary production. Sediment respiration in laboratory incubations also demonstrated positive effects of nutrients. Thus, the results indicate that increased nutrient loading of oligotrophic wetlands can lead to an increased rate of sediment carbon turnover and a net loss of carbon from sediments.
NASA Astrophysics Data System (ADS)
Brooks, D. R.; Harrison, E. F.; Minnis, P.; Suttles, J. T.; Kandel, R. S.
1986-05-01
A brief description is given of how temporal and spatial variability in the earth's radiative behavior influences the goals of satellite radiation monitoring systems and how some previous systems have addressed the existing problems. Then, results of some simulations of radiation budget monitoring missions are presented. These studies led to the design of the Earth Radiation Budget Experiment (ERBE). A description is given of the temporal and spatial averaging algorithms developed for the ERBE data analysis. These algorithms are intended primarily to produce monthly averages of the net radiant exitance on regional, zonal, and global scales and to provide insight into the regional diurnal variability of radiative parameters such as albedo and long-wave radiant exitance. The algorithms are applied to scanner and nonscanner data for up to three satellites. Modeling of daily shortwave albedo and radiant exitance with satellite sampling that is insufficient to fully account for changing meteorology is discussed in detail. Studies performed during the ERBE mission and software design are reviewed. These studies provide quantitative estimates of the effects of temporally sparse and biased sampling on inferred diurnal and regional radiative parameters. Other topics covered include long-wave diurnal modeling, extraction of a regional monthly net clear-sky radiation budget, the statistical significance of observed diurnal variability, quality control of the analysis, and proposals for validating the results of ERBE time and space averaging.
Jumpertz, Reiner; Le, Duc Son; Turnbaugh, Peter J; Trinidad, Cathy; Bogardus, Clifton; Gordon, Jeffrey I; Krakoff, Jonathan
2011-01-01
Background: Studies in mice indicate that the gut microbiome influences both sides of the energy-balance equation by contributing to nutrient absorption and regulating host genes that affect adiposity. However, it remains uncertain as to what extent gut microbiota are an important regulator of nutrient absorption in humans. Objective: With the use of a carefully monitored inpatient study cohort, we tested how gut bacterial community structure is affected by altering the nutrient load in lean and obese individuals and whether their microbiota are correlated with the efficiency of dietary energy harvest. Design: We investigated dynamic changes of gut microbiota during diets that varied in caloric content (2400 compared with 3400 kcal/d) by pyrosequencing bacterial 16S ribosomal RNA (rRNA) genes present in the feces of 12 lean and 9 obese individuals and by measuring ingested and stool calories with the use of bomb calorimetry. Results: The alteration of the nutrient load induced rapid changes in the gut microbiota. These changes were directly correlated with stool energy loss in lean individuals such that a 20% increase in Firmicutes and a corresponding decrease in Bacteroidetes were associated with an increased energy harvest of ≈150 kcal. A high degree of overfeeding in lean individuals was accompanied by a greater fractional decrease in stool energy loss. Conclusions: These results show that the nutrient load is a key variable that can influence the gut (fecal) bacterial community structure over short time scales. Furthermore, the observed associations between gut microbes and nutrient absorption indicate a possible role of the human gut microbiota in the regulation of the nutrient harvest. This trial was registered at clinicaltrials.gov as NCT00414063. PMID:21543530
NASA Astrophysics Data System (ADS)
Bhattarai, Nishan
The flow of water and energy fluxes at the Earth's surface and within the climate system is difficult to quantify. Recent advances in remote sensing technologies have provided scientists with a useful means to improve the characterization of these complex processes. However, many challenges remain that limit our ability to optimize remote sensing data in determining evapotranspiration (ET) and energy fluxes. For example, periodic cloud cover limits the operational use of remotely sensed data from passive sensors in monitoring seasonal fluxes. Additionally, there are many remote sensing-based single-source surface energy balance (SEB) models, but no clear guidance on which one to use in a particular application. Two widely used models, the surface energy balance algorithm for land (SEBAL) and mapping ET at high resolution with internalized calibration (METRIC), require substantial human intervention, which limits their applicability in broad-scale studies. This dissertation addressed some of these challenges by proposing novel ways to optimize available resources within the SEB-based ET modeling framework. A simple regression-based Landsat-Moderate Resolution Imaging Spectroradiometer (MODIS) fusion model was developed to integrate Landsat spatial and MODIS temporal characteristics in calculating ET. The fusion model produced reliable estimates of seasonal ET at moderate spatial resolution while mitigating the impact that cloud cover can have on image availability. The dissertation also evaluated five commonly used remote sensing-based single-source SEB models and found that the surface energy balance system (SEBS) may be the best overall model for use in humid subtropical climates. The study also determined that model accuracy varies with land cover type; for example, all models worked well for wet marsh conditions, but the SEBAL and simplified surface energy balance index (S-SEBI) models worked better than the alternatives for grass cover. A new automated approach based on
Francois, Marianne M; Carlson, Neil N
2010-01-01
Understanding the complex interaction of droplet dynamics with mass transfer and chemical reactions is of fundamental importance in liquid-liquid extraction. High-fidelity numerical simulation of droplet dynamics with interfacial mass transfer is particularly challenging because the position of the interface between the fluids and the interface physics must be predicted as part of the solution of the flow equations. In addition, the discontinuities in fluid density, viscosity, and species concentration at the interface present further numerical challenges. In this work, we extend our balanced-force volume-tracking algorithm for modeling the surface tension force (Francois et al., 2006) and propose a global embedded interface formulation to model the interfacial conditions of an interface in thermodynamic equilibrium. To validate our formulation, we perform simulations of pure diffusion problems in one and two dimensions. We then present two- and three-dimensional simulations of a single droplet rising under buoyancy with mass transfer.
Shabani, Hamed; Vahidi, Behrooz; Ebrahimpour, Majid
2013-01-01
A new PID controller with differential control made resistant to load disturbance is introduced for load frequency control (LFC) applications. The parameters of the controller were specified using the imperialist competitive algorithm (ICA). Load disturbance, caused by continuous and rapid changes of small loads, is always a problem for load frequency control of power systems. This paper introduces a new method to overcome this problem, based on a filtering technique that eliminates the effect of this kind of disturbance. The objective is frequency regulation in each area of the power system and reduction of power transfer between control areas, so the parameters of the proposed controller were specified over a wide range of load changes by means of the ICA to achieve the best dynamic frequency response. To evaluate the effectiveness of the proposed controller, a three-area power system was simulated in MATLAB/SIMULINK. Each area has different generation units and therefore uses controllers with different parameters. Finally, a comparison between the proposed controller and two other prevalent PI controllers, optimized by GA and neural networks, was performed, demonstrating the advantages of the proposed controller over the others.
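The filtering idea described above can be sketched in a few lines: a discrete PID whose derivative term passes through a first-order low-pass filter, the standard way to keep derivative action from amplifying rapid small-load disturbance. The gains and filter constant below are illustrative placeholders, not the ICA-tuned values from the paper.

```python
# Minimal discrete PID with a first-order low-pass filter on the derivative
# term. All numeric values are illustrative, not from the paper.

class FilteredPID:
    def __init__(self, kp, ki, kd, tau, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.alpha = dt / (tau + dt)   # smoothing factor of the derivative filter
        self.dt = dt
        self.integral = 0.0
        self.prev_error = 0.0
        self.d_filtered = 0.0

    def step(self, error):
        """Advance one sample period and return the control output."""
        self.integral += error * self.dt
        d_raw = (error - self.prev_error) / self.dt
        # The low-pass filter suppresses the high-frequency component that
        # rapid small-load disturbance injects into the derivative action.
        self.d_filtered += self.alpha * (d_raw - self.d_filtered)
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * self.d_filtered)
```

A larger filter time constant `tau` rejects more disturbance at the cost of slower derivative response.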
NASA Astrophysics Data System (ADS)
Jung, Sungmo; Song, Jae-Gu; Kim, Seoksoo
Marker-based augmented reality is normally limited to loading one object per marker. This limitation can be overcome by PPHT-based augmented reality multiple-object loading technology, which detects the same marker in the image and copies it to a desired location. However, because the distance between markers is not measured during marker detection and copying, markers can overlap, in which case the objects fail to be augmented. To solve this problem, a circle having the longest possible radius is created from the focal point of the marker to be copied, so that no object is copied within the confines of the circle. Accordingly, marker overlapping control for M2M-based augmented reality multiple-object loading has been studied using the Bresenham algorithm.
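The overlap-control idea can be sketched as follows, assuming a 2D integer image grid: Bresenham's (midpoint) circle algorithm rasterizes the exclusion boundary around each placed marker, and a candidate copy position is accepted only if it lies outside every exclusion circle. All names are illustrative; the paper's actual implementation is not reproduced here.

```python
# Sketch of marker-overlap control (illustrative names, not from the paper):
# rasterize an exclusion circle around each placed marker with Bresenham's
# circle algorithm, then reject copy positions inside any circle.

def bresenham_circle(cx, cy, r):
    """Return the integer boundary points of a circle centered at (cx, cy)."""
    points = set()
    x, y, d = 0, r, 3 - 2 * r
    while x <= y:
        # Mirror the computed octant point into all eight octants.
        for px, py in ((x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)):
            points.add((cx + px, cy + py))
        if d < 0:
            d += 4 * x + 6
        else:
            d += 4 * (x - y) + 10
            y -= 1
        x += 1
    return points

def can_place(candidate, placed, radius):
    """Allow a marker copy only if it falls outside every exclusion circle."""
    cx, cy = candidate
    return all((cx - px) ** 2 + (cy - py) ** 2 > radius ** 2
               for px, py in placed)
```

The boundary points from `bresenham_circle` could be drawn for debugging, while `can_place` is the actual gate applied before each copy.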
Analysis of Sting Balance Calibration Data Using Optimized Regression Models
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Bader, Jon B.
2010-01-01
Calibration data of a wind tunnel sting balance were processed using a candidate math model search algorithm that recommends an optimized regression model for the data analysis. During the calibration, the normal force and the moment at the balance moment center were selected as independent calibration variables. The sting balance itself had two moment gages. Therefore, after analyzing the connection between calibration loads and gage outputs, it was decided to choose the difference and the sum of the gage outputs as the two responses that best describe the behavior of the balance. The math model search algorithm was applied to these two responses, and an optimized regression model was obtained for each response. Classical strain gage balance load transformations and the equations of the deflection of a cantilever beam under load are used to show that the search algorithm's two optimized regression models are supported by a theoretical analysis of the relationship between the applied calibration loads and the measured gage outputs. The analysis of the sting balance calibration data set is a rare example of a situation in which the terms of a balance's regression model can be derived directly from first principles of physics. In addition, it is interesting to note that the search algorithm recommended the correct regression model term combinations using only a set of statistical quality metrics applied to the experimental data during the algorithm's term selection process.
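The load-to-response structure described above can be illustrated on synthetic data: with two moment gages on a cantilever-like balance, the sum of the gage outputs responds mainly to the normal force and the difference mainly to the moment, so each response admits its own simple regression model. The coefficients and noise level below are invented for the sketch and are not the NASA calibration data.

```python
# Illustrative least-squares fit (synthetic data, not the sting balance set):
# regress the sum and difference of two moment-gage outputs against the
# applied normal force N and moment M.
import numpy as np

rng = np.random.default_rng(0)
N = rng.uniform(-100, 100, 50)   # applied normal force (arbitrary units)
M = rng.uniform(-50, 50, 50)     # applied moment at the balance moment center

# Synthetic gage outputs consistent with a linear beam response plus noise.
g1 = 0.8 * N + 1.5 * M + rng.normal(0, 0.01, 50)
g2 = 0.8 * N - 1.5 * M + rng.normal(0, 0.01, 50)

X = np.column_stack([np.ones_like(N), N, M])
coef_sum, *_ = np.linalg.lstsq(X, g1 + g2, rcond=None)   # sum tracks N
coef_diff, *_ = np.linalg.lstsq(X, g1 - g2, rcond=None)  # difference tracks M
```

The fitted coefficients recover the synthetic structure: the sum response loads almost entirely on N and the difference response on M, mirroring the decoupling the abstract attributes to the chosen responses.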
NASA Astrophysics Data System (ADS)
Gao, Z. Q.; Liu, C. S.; Gao, W.; Chang, N. B.
2010-07-01
Evapotranspiration (ET) may be used as an ecological indicator to address ecosystem complexity. The accurate measurement of ET is of great significance for studying environmental sustainability, global climate change, and biodiversity. Remote sensing technologies are capable of monitoring both energy and water fluxes on the surface of the Earth. With this advancement, existing models such as SEBAL, S_SEBI, and SEBS enable us to estimate regional ET at limited temporal and spatial scales. This paper extends the existing modeling efforts with the inclusion of new components for ET estimation at varying temporal and spatial scales under complex terrain. Following a coupled remote sensing and surface energy balance approach, this study emphasizes the structure and function of the Surface Energy Balance with Topography Algorithm (SEBTA). With the aid of elevation and landscape information, such as slope and aspect parameters derived from the digital elevation model (DEM), and of vegetation cover derived from satellite images, the SEBTA can fully account for the dynamic impacts of complex terrain and changing land cover, in concert with varying kinetic parameters (i.e., roughness and zero-plane displacement) over time. In addition, the dry and wet pixels can be recognized automatically and dynamically during image processing, making the SEBTA more sensitive in deriving the sensible heat flux for ET estimation. To demonstrate its application potential, the SEBTA was applied to produce robust estimates of 24-h solar radiation over time, leading to smooth simulation of ET across seasons in northern China, where the regional climate and vegetation cover in different seasons complicate the ET calculations. The SEBTA was validated against measured data at the ground level; during validation, the consistency index reached 0.92 and the correlation coefficient was 0.87.
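The single-source surface-energy-balance bookkeeping shared by these models (SEBAL, S_SEBI, SEBS, and SEBTA alike) can be reduced to a two-line sketch: the latent heat flux is the residual of net radiation after subtracting the soil and sensible heat fluxes, and is then converted into an equivalent depth of evaporated water. The flux values below are illustrative, not outputs of SEBTA.

```python
# Minimal sketch of the surface energy balance residual approach.
# Numbers are illustrative, not from the paper.

LAMBDA = 2.45e6  # latent heat of vaporization, J/kg (approx. at 20 C)

def latent_heat_flux(rn, g, h):
    """LE = Rn - G - H, all fluxes in W/m^2."""
    return rn - g - h

def et_mm_per_day(le):
    """Convert a daily-mean latent heat flux (W/m^2) to mm/day of water."""
    return le * 86400 / LAMBDA  # J/m^2/day divided by J/kg gives kg/m^2 = mm

le = latent_heat_flux(rn=500.0, g=50.0, h=150.0)  # -> 300.0 W/m^2
```

The models differ mainly in how they estimate the sensible heat flux H (e.g., SEBTA's automatic dry/wet pixel recognition); the residual bookkeeping itself is common to all of them.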
NASA Astrophysics Data System (ADS)
Gao, Z. Q.; Liu, C. S.; Gao, W.; Chang, N.-B.
2011-01-01
Evapotranspiration (ET) may be used as an ecological indicator to address ecosystem complexity. The accurate measurement of ET is of great significance for studying environmental sustainability, global climate change, and biodiversity. Remote sensing technologies are capable of monitoring both energy and water fluxes on the surface of the Earth. With this advancement, existing models such as SEBAL, S_SEBI, and SEBS enable us to estimate regional ET with limited temporal and spatial coverage in the study areas. This paper extends the existing modeling efforts with the inclusion of new components for ET estimation at different temporal and spatial scales under heterogeneous terrain with varying elevations, slopes, and aspects. Following a coupled remote sensing and surface energy balance approach, this study emphasizes the structure and function of the Surface Energy Balance with Topography Algorithm (SEBTA). With the aid of elevation and landscape information, such as slope and aspect parameters derived from the digital elevation model (DEM), and of vegetation cover derived from satellite images, the SEBTA can account for the dynamic impacts of heterogeneous terrain and changing land cover with varying kinetic parameters (i.e., roughness and zero-plane displacement). In addition, the dry and wet pixels can be recognized automatically and dynamically during image processing, making the SEBTA more sensitive in deriving the sensible heat flux for ET estimation. To demonstrate its application potential, the SEBTA was applied to produce robust estimates of 24-h solar radiation over time, leading to smooth simulation of ET across seasons in northern China, where the regional climate and vegetation cover in different seasons complicate the ET calculations. The SEBTA was validated against measured data at the ground level; during validation, the consistency index reached 0.92 and the correlation coefficient was 0.87.
Buentig, Wolf E.; Earley, Laurence E.
1971-01-01
The mechanism of glomerulotubular balance was investigated by microperfusion of the rat proximal tubule at two different rates before and after constriction of the aorta sufficient to produce a 50% reduction in whole-kidney filtration rate and plasma flow. At a perfusion rate of 28 nl/min, the absolute rate of proximal tubular reabsorption averaged 4.80±0.28 nl/mm·min in the absence of aortic constriction. Reducing the perfusion rate by one-half resulted in only a 22% decrease in the absolute rate of reabsorption, and imbalance between load and reabsorption resulted as fractional reabsorption of the perfused volume increased from 0.56 to 0.83 at 3 mm of perfused tubule length. These observations support other studies indicating that changing the load presented to the individual proximal tubule does not change the reabsorptive rate sufficiently to result in glomerulotubular balance. Aortic constriction decreased the absolute rate of proximal tubular reabsorption by approximately 50%, resulting in imbalance between load and reabsorption at the higher perfusion rate (fractional reabsorption of the perfused volume fell to 0.23 at 3 mm). Thus, the decrease in proximal tubular reabsorption necessary for glomerulotubular balance occurs independently of a change in the load presented for reabsorption. Balance between load and reabsorption was produced artificially by combining aortic constriction with a reduction in perfusion rate proportional to the reduction in whole-kidney filtration rate. Mathematical analysis of the data suggests that the absolute rate of reabsorption along the accessible length of the proximal tubule is constant and is not proportional to the volume of fluid reaching a given site. Thus, there appears to be no contribution to glomerulotubular balance of any intra- or extratubular mechanism directly coupling load and the rate of proximal tubular reabsorption. It is concluded that glomerulotubular balance during aortic constriction is a consequence of
NASA Astrophysics Data System (ADS)
Tsuzuki, Satori; Aoki, Takayuki
2016-04-01
Numerical simulation of debris flows involving countless objects is an important topic in fluid dynamics and many engineering applications. Particle-based methods are a promising approach to simulating flows interacting with objects. In this paper, we propose an efficient method to realize large-scale simulations of fluid-structure interaction by combining the SPH (Smoothed Particle Hydrodynamics) method for the fluid with the DEM (Discrete Element Method) for the objects on a multi-GPU system. By applying space-filling curves to the decomposition of the computational domain, we are able to keep the same number of particles in each decomposed domain. In our implementation, several techniques for particle counting and data movement have been introduced. Fragmentation of the memory used for particles occurs during the time integration, and the frequency of de-fragmentation is examined by taking into account the computational load balance and the communication cost between CPU and GPU. A linked-list technique for the particle interactions is introduced to drastically reduce memory use. It is found that sorting the particle data for the neighboring-particle list using the linked-list method greatly improves memory access when performed at a certain interval. The weak and strong scalabilities of an SPH simulation using 111 million particles were measured from 4 GPUs to 512 GPUs for three types of space-filling curves. A large-scale debris flow simulation of a tsunami with 10,368 floating rubble objects using 117 million particles was successfully carried out with 256 GPUs on the TSUBAME 2.5 supercomputer at the Tokyo Institute of Technology.
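The space-filling-curve decomposition can be sketched as follows, assuming a Morton (Z-order) curve as one of the curve types: particles are sorted by their curve index, and the sorted list is cut into equal-sized chunks so each GPU holds the same number of particles. The function names and cell size are illustrative, not from the paper.

```python
# Sketch of load-balanced domain decomposition via a Z-order curve
# (one plausible curve type; names are illustrative).

def morton2d(x, y, bits=16):
    """Interleave the bits of integer cell coordinates (x, y) into a Z-order key."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)
        key |= ((y >> i) & 1) << (2 * i + 1)
    return key

def decompose(particles, n_domains, cell=1.0):
    """Assign particles (list of (x, y)) to n_domains equally sized chunks."""
    keyed = sorted(particles,
                   key=lambda p: morton2d(int(p[0] / cell), int(p[1] / cell)))
    size, rem = divmod(len(keyed), n_domains)
    out, start = [], 0
    for d in range(n_domains):
        end = start + size + (1 if d < rem else 0)  # spread any remainder
        out.append(keyed[start:end])
        start = end
    return out
```

Because the curve preserves spatial locality, each chunk is a compact region, which keeps halo-exchange communication between GPUs small while the equal chunk sizes keep the computational load balanced.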
Akbari, H.; Rainer, L.; Heinemeier, K.; Huang, J.; Franconi, E.
1993-01-01
The Southern California Edison Company (SCE) has conducted an extensive metering project in which electricity end use in 53 commercial buildings in Southern California has been measured. The building types monitored include offices, retail stores, groceries, restaurants, and warehouses. One year (June 1989 through May 1990) of the SCE measured hourly end-use data is reviewed in this report. Annual whole-building and end-use energy use intensities (EUIs) and monthly load shapes (LSs) have been calculated for the different building types based on the monitored data. This report compares the monitored buildings' EUIs and LSs to EUIs and LSs determined using whole-building load data and the End-Use Disaggregation Algorithm (EDA). Two sets of EDA-determined EUIs and LSs are compared to the monitored data values. The data sets represent (1) average buildings in the SCE service territory and (2) specific buildings that were monitored.
NASA Technical Reports Server (NTRS)
Lansing, F. L.; Strain, D. M.; Chai, V. W.; Higgins, S.
1979-01-01
The Energy Consumption Computer Program was developed to simulate building heating and cooling loads and to compute thermal and electric energy consumption and cost. This article reports on the new algorithms and modifications made in an effort to widen the areas of application. The program structure was rewritten accordingly to refine and advance the building model and to further reduce processing time and cost. The program is noted for its very low cost and ease of use compared to other available codes. The accuracy of computations is not sacrificed, however, since the results are expected to lie within ±10% of actual energy meter readings.
Cho, Jae Heon; Lee, Jong Ho
2015-11-01
Manual calibration is common in rainfall-runoff model applications. However, rainfall-runoff models include several complicated parameters; thus, significant time and effort are required to calibrate the parameters manually, individually, and repeatedly. Automatic calibration has relative merits in time efficiency and objectivity, but shortcomings in capturing indigenous processes in the basin. In this study, a watershed model calibration framework using an influence coefficient algorithm and a genetic algorithm (WMCIG) was developed to automatically calibrate distributed models. The optimization problem of minimizing the sum of squares of the normalized residuals of the observed and predicted values was solved using a genetic algorithm (GA). The final model parameters were determined from the iteration with the smallest sum of squares of the normalized residuals over all iterations. The WMCIG was applied to the Gomakwoncheon watershed, located in an area subject to a total maximum daily load (TMDL) in Korea. The proportion of urbanized area in this watershed is low, and the diffuse pollution loads of nutrients such as phosphorus are greater than the point-source pollution loads because of the concentration of rainfall during the summer. The pollution discharges from the watershed were estimated for each land-use type, and the seasonal variations of the pollution loads were analyzed. Consecutive flow measurement gauges have not been installed in this area, and it is difficult to survey the flow and water quality during the frequent heavy rainfall of the wet season. The Hydrological Simulation Program-Fortran (HSPF) model was used to calculate the runoff flow and water quality in this basin. Using the water quality results, a load duration curve was constructed for the basin, the exceedance frequency of the water quality standard was calculated for each hydrologic condition class, and the percent reduction
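The optimization step can be sketched with a minimal genetic algorithm that minimizes the sum of squared normalized residuals between observed and predicted values. The toy linear "model" and its two parameters below stand in for the HSPF parameters; none of the names or values come from the paper.

```python
# Hedged sketch of GA calibration against a sum-of-squared-normalized-residuals
# objective. The "model" a*x + b is a stand-in for a real watershed model.
import random

random.seed(1)
observed = [(x, 3.0 * x + 2.0) for x in range(10)]  # synthetic observations

def objective(params):
    a, b = params
    return sum(((a * x + b - y) / max(abs(y), 1e-9)) ** 2  # normalized residual
               for x, y in observed)

def ga(pop_size=40, gens=200):
    pop = [[random.uniform(-10, 10), random.uniform(-10, 10)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=objective)
        elite = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = random.sample(elite, 2)
            child = [random.choice(g) for g in zip(p1, p2)]  # uniform crossover
            if random.random() < 0.3:                        # Gaussian mutation
                i = random.randrange(2)
                child[i] += random.gauss(0, 0.5)
            children.append(child)
        pop = elite + children
    return min(pop, key=objective)

best = ga()  # converges toward a ~ 3, b ~ 2 on the synthetic data
```

Real calibrations add bounds and priors per parameter, but the loop structure (evaluate, select, recombine, mutate, keep the best-ever individual) is the same.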
NASA Astrophysics Data System (ADS)
Rainieri, Carlo; Fabbrocino, Giovanni
2015-08-01
In the last few decades large research efforts have been devoted to the development of methods for automated detection of damage and degradation phenomena at an early stage. Modal-based damage detection techniques are well-established methods, whose effectiveness for Level 1 (existence) and Level 2 (location) damage detection is demonstrated by several studies. The indirect estimation of tensile loads in cables and tie-rods is another attractive application of vibration measurements. It provides interesting opportunities for cheap and fast quality checks in the construction phase, as well as for safety evaluations and structural maintenance over the structure's lifespan. However, the lack of automated modal identification and tracking procedures has long been a relevant drawback to the extensive application of the above-mentioned techniques in engineering practice. An increasing number of field applications of modal-based structural health and performance assessment are appearing after the development of several automated output-only modal identification procedures in the last few years. Nevertheless, additional efforts are still needed to enhance the robustness of automated modal identification algorithms, control the computational effort, and improve the reliability of modal parameter estimates (in particular, damping). This paper deals with an original algorithm for automated output-only modal parameter estimation. Particular emphasis is given to the extensive validation of the algorithm based on simulated and real datasets in view of continuous monitoring applications. The results point out that the algorithm is fairly robust and demonstrate its ability to provide accurate and precise estimates of the modal parameters, including damping ratios. As a result, it has been used to develop systems for vibration-based estimation of tensile loads in cables and tie-rods. Promising results have been achieved for non-destructive testing as well as continuous
Bahat, Oded; Sullivan, Richard M
2010-05-01
Immediate loading of dental implants has become a widely reported practice with success rates ranging from 70.8% to 100%. Although most studies have considered implant survival to be the only measure of success, a better definition includes the long-term stability of the hard and soft tissues around the implant(s) and other adjacent structures, as well as the long-term stability of all the restorative components. The parameters identified in 1981 by Albrektsson and colleagues as influencing the establishment and maintenance of osseointegration have been reconsidered in relation to immediate loading to improve the chances of achieving such success. Two of the six parameters (status of the bone/implant site and implant loading conditions) have preoperative diagnostic implications, whereas three (implant design, surgical technique, and implant finish) may compensate for less-than-ideal site and loading conditions. Factors affecting the outcome of immediate loading are reviewed to assist clinicians attempting to assess its risks and benefits. PMID:20455906
Jablonski, Ireneusz; Mroczka, Janusz
2010-01-01
The paper offers an enhancement of the classical interrupter-technique algorithm dedicated to respiratory mechanics measurements. The idea consists in exploiting the information contained in post-occlusion transient states during indirect measurement of parameter characteristics by model identification. This requires an inverse analogue adequate to the general behavior of the real system and a reliable algorithm of parameter estimation. The latter was the subject of the reported work, which finally showed the potential of the approach for separating airway and tissue response in the case of short-term excitation by interrupter valve operation. Investigations were conducted in a regime of forward-inverse computer experiment.
ERIC Educational Resources Information Center
Ngu, Bing Hiong; Phan, Huy Phuong
2016-01-01
We examined the use of balance and inverse methods in equation solving. The main difference between the balance and inverse methods lies in the operational line (e.g. +2 on both sides vs -2 becomes +2). Differential element interactivity favours the inverse method because the interaction between elements occurs on both sides of the equation for…
Technology Transfer Automated Retrieval System (TEKTRAN)
An intercomparison of output from two models estimating spatially distributed surface energy fluxes from remotely sensed imagery is conducted. A major difference between the two models is whether the soil and vegetation components of the scene are treated separately (Two-Source Energy Balance; TSEB ...
Saito, Masatoshi
2010-08-15
Purpose: This article describes the spectral optimization of dual-energy computed tomography using balanced filters (bf-DECT) to reduce the tube loadings and dose by dedicating the acquisition to electron density information, which is essential for treatment planning in radiotherapy. Methods: For the spectral optimization of bf-DECT, the author calculated the beam-hardening error and air kerma required to achieve a desired noise level in an electron density image of a 50-cm-diameter cylindrical water phantom. The calculation enables the selection of beam parameters such as tube voltage, balanced filter material, and its thickness. Results: The optimal combination of tube voltages was 80 kV/140 kV in conjunction with Tb/Hf and Bi/Mo filter pairs; this combination agrees with that obtained in a previous study [M. Saito, ''Spectral optimization for measuring electron density by the dual-energy computed tomography coupled with balanced filter method,'' Med. Phys. 36, 3631-3642 (2009)], although the thicknesses of the filters that yielded a minimum tube output were slightly different from those obtained in the previous study. The resultant tube loading of a low-energy scan of the present bf-DECT significantly decreased from 57.5 to 4.5 times that of a high-energy scan for conventional DECT. Furthermore, the air kerma of bf-DECT could be reduced to less than that of conventional DECT, while obtaining the same figure of merit for the measurement of electron density and effective atomic number. Conclusions: The tube-loading and dose efficiencies of bf-DECT were considerably improved by sacrificing the quality of the noise level in the images of effective atomic number.
NASA Astrophysics Data System (ADS)
Gharehbaghi, Sadjad; Khatibinia, Mohsen
2015-03-01
A reliable seismic-resistant design of structures is achieved in accordance with the seismic design codes by designing structures under seven or more pairs of earthquake records. Based on the recommendations of seismic design codes, the average time-history responses (ATHR) of the structure are required. This paper focuses on the optimal seismic design of reinforced concrete (RC) structures against ten earthquake records using a hybrid of the particle swarm optimization algorithm and an intelligent regression model (IRM). In order to reduce the computational time of the optimization procedure due to the computational effort of time-history analyses, IRM is proposed to accurately predict the ATHR of structures. The proposed IRM consists of the combination of the subtractive algorithm (SA), the K-means clustering approach, and the wavelet weighted least squares support vector machine (WWLS-SVM). To predict the ATHR of structures, first, the input-output samples of structures are classified by SA and the K-means clustering approach. Then, WWLS-SVM is trained with few samples and high accuracy for each cluster. 9- and 18-storey RC frames are optimally designed to illustrate the effectiveness and practicality of the proposed IRM. The numerical results demonstrate the efficiency and computational advantages of IRM for the optimal design of structures subjected to time-history earthquake loads.
NASA Astrophysics Data System (ADS)
Biswas, Papun; Chakraborti, Debjani
2010-10-01
This paper describes how genetic algorithms (GAs) can be efficiently applied to the fuzzy goal programming (FGP) formulation of optimal power flow problems having multiple objectives. In the proposed approach, the different constraints and various relationships of optimal power flow calculations are fuzzily described. In the model formulation of the problem, the membership functions of the defined fuzzy goals are first characterized for measuring the degree of achievement of the aspiration levels of the goals specified in the decision-making context. Then, the achievement function for minimizing the regret for under-deviations from the highest membership value (unity) of the defined membership goals, to the extent possible on the basis of priorities, is constructed for optimal power flow problems. In the solution process, the GA method is employed on the FGP formulation of the problem to achieve the highest membership value (unity) of the defined membership functions to the extent possible in the decision-making environment. In the GA-based solution search process, the conventional roulette wheel selection scheme, arithmetic crossover, and random mutation are used to reach a satisfactory decision. The developed method has been tested on the IEEE 6-generator 30-bus system. Numerical results show that this method is promising for handling uncertain constraints in practical power systems.
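The two GA operators named in the abstract, roulette wheel selection and arithmetic crossover, can be sketched as follows. This is a generic illustration of the standard operators, not the authors' implementation:

```python
import random

def roulette_select(population, fitness):
    """Roulette-wheel selection: pick an individual with probability
    proportional to its (positive) fitness value."""
    total = sum(fitness)
    r = random.uniform(0.0, total)
    acc = 0.0
    for individual, f in zip(population, fitness):
        acc += f
        if acc >= r:
            return individual
    return population[-1]

def arithmetic_crossover(parent1, parent2, alpha=None):
    """Arithmetic crossover: gene-wise convex combination of two
    real-coded parents, child = alpha*p1 + (1-alpha)*p2."""
    if alpha is None:
        alpha = random.random()
    return [alpha * a + (1.0 - alpha) * b for a, b in zip(parent1, parent2)]
```

Random mutation (perturbing a random gene within its bounds) completes the operator set described in the abstract.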
Optimal Load Control via Frequency Measurement and Neighborhood Area Communication
Zhao, CH; Topcu, U; Low, SH
2013-11-01
We propose a decentralized optimal load control scheme that provides contingency reserve in the presence of a sudden generation drop. The scheme takes advantage of the flexibility of frequency-responsive loads and neighborhood area communication to solve an optimal load control problem that balances load and generation while minimizing the end-use disutility of participating in load control. Local frequency measurements enable individual loads to estimate the total mismatch between load and generation. Neighborhood area communication helps mitigate the effects of inconsistencies in the local estimates due to frequency measurement noise. Case studies show that the proposed scheme can balance load with generation and restore the frequency within seconds after a generation drop, even when the loads use a highly simplified power system model in their algorithms. We also investigate tradeoffs between the amount of communication and the performance of the proposed scheme through simulation-based experiments.
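A minimal sketch of the decentralized idea: assuming each load has a quadratic disutility (a/2)·d², its disutility-minimizing response to an estimated load-generation mismatch is a clamped proportional adjustment. The quadratic form, function name, and values below are illustrative assumptions, not the paper's exact formulation:

```python
def optimal_load_shed(mismatch_estimate, a, d_min, d_max):
    """One load's response under an assumed quadratic disutility
    (a/2) * d**2: minimizing disutility against a mismatch signal nu
    gives d = nu / a, clamped to the load's feasible range."""
    return max(d_min, min(d_max, mismatch_estimate / a))

# Two hypothetical loads sharing the same mismatch estimate of 1.0 p.u.;
# the less flexible load (larger a) sheds less.
sheds = [optimal_load_shed(1.0, a_i, 0.0, 0.4) for a_i in (2.0, 5.0)]
```

In the paper's setting the mismatch estimate comes from local frequency measurements, with neighborhood communication averaging out measurement noise across loads.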
Kim, Mijin; Hyun, Seunghun; Kwon, Jung-Hwan
2015-10-01
The accumulation of marine plastic debris is one of the main emerging environmental issues of the twenty-first century. Numerous studies in recent decades have reported the level of plastic particles on beaches and in oceans worldwide. However, it is still unclear how much plastic debris remains in the marine environment, because the sampling methods for identifying and quantifying plastics from the environment have not been standardized; moreover, the methods are not guaranteed to find all of the plastics that do remain. The level of identified marine plastic debris may account for only a small portion of the remaining plastics. To perform a quantitative estimation of the remaining plastics, a mass balance analysis was performed for high- and low-density PE within the borders of South Korea during 1995-2012. Disposal methods such as incineration, land disposal, and recycling accounted for only approximately 40 % of PE use, whereas 60 % remained unaccounted for. The total unaccounted mass of high- and low-density PE during the evaluation period was 28 million tons. The corresponding contribution to marine plastic debris would be approximately 25,000 tons and 70 g km(-2) of the world oceans, assuming that the fraction entering the marine environment is 0.001 and that the degradation half-life is 50 years in seawater. Because the observed concentrations of plastics worldwide were much lower than the range expected by extrapolation from this mass balance study, there is probably still a huge mass of unidentified plastic debris. Further research is therefore needed to fill this gap between the mass balance approximation and the identified marine plastics, including a better estimation of the mass flux to the marine environment. PMID:26153107
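The mass-balance arithmetic in the abstract can be sketched as follows. The entry fraction (0.001) and 50-year half-life are the assumptions stated above; the function names are illustrative:

```python
def mass_to_sea_tons(unaccounted_tons, fraction_to_sea=0.001):
    """Mass assumed to enter the marine environment, given the
    unaccounted-for mass and an assumed entry fraction."""
    return unaccounted_tons * fraction_to_sea

def remaining_after_decay(mass_tons, years, half_life_years=50.0):
    """First-order decay with the assumed seawater half-life."""
    return mass_tons * 0.5 ** (years / half_life_years)

# 28 million tons of PE unaccounted for over 1995-2012 (from the abstract)
entered = mass_to_sea_tons(28e6)  # 28,000 tons before degradation
```

Applying the decay term to inputs spread over the study period brings the estimate down toward the ~25,000 tons reported in the abstract.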
NASA Astrophysics Data System (ADS)
Bhadha, J. H.; Jawitz, J. W.; Min, J.
2009-12-01
Internal loading is a critical component of the phosphorus (P) budget of aquatic systems, and can control the trophic conditions. While diffusion is generally considered the dominant process controlling internal P load to the water column, advection due to water table fluctuations resulting from episodic flooding and drying cycles can be a significant component of the P budget of depressional wetlands. Within the drainage basin of Lake Okeechobee, Florida, P is exported annually to the lake from impacted isolated wetlands located on beef farming facilities via ditches and canals. The objective of this study was to evaluate the role of diffusive and advective fluxes in relation to the total P loads entering and exiting one of these isolated wetlands. Diffusive fluxes were calculated from depth-variable pore water concentrations measured using multilevel samplers and pore water equilibrators. Advective fluxes were estimated based on groundwater fluctuations calculated within a hydrologic-budget framework. Results from an eleven-month monitoring period (May 2005-March 2006) indicated that the diffusive flux of soluble reactive P (SRP) was 0.42 ± 0.24 mg m-2 d-1 and occurred for 230 days out of 335. In comparison, the advective flux occurred over a shorter duration of just 21 days, yet generated a greater flux controlled by the concentrations of shallow pore water and the velocity of the ground water moving upwards into the wetland water column. The highest advective flux of SRP was estimated at 27.4 mg m-2 d-1. Based on these fluxes the corresponding P load to the wetland via internal modes was estimated at 5.2 kg and 0.93 kg from diffusion and advection respectively, representing a significant fraction of the total P load entering the wetland water column. Plant colonization during dry periods in P enriched soils is also a significant mechanism for P release from the soil at the time of flooding, however, this component to the wetland P budget was not evaluated as
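The diffusive-flux component described above is conventionally computed from Fick's first law applied to the measured pore-water concentration gradient. A sketch with a common porosity-squared tortuosity correction; the parameter values are illustrative, not the study's data:

```python
def diffusive_flux(c_pore, c_overlying, dz_m, porosity, d_free):
    """Fick's first law across the sediment-water interface:
    J = porosity * Ds * (C_pore - C_overlying) / dz, with the common
    tortuosity correction Ds = porosity**2 * d_free (an assumption).
    Positive J means release to the water column."""
    d_s = porosity**2 * d_free
    return porosity * d_s * (c_pore - c_overlying) / dz_m

# Illustrative values: concentrations in mg m^-3, depth step in m,
# free-solution diffusivity in m^2 d^-1 -> flux in mg m^-2 d^-1
j_srp = diffusive_flux(500.0, 80.0, 0.02, 0.8, 5e-5)  # ~0.54 mg m^-2 d^-1
```

The advective flux, by contrast, is the product of the upward groundwater velocity and the shallow pore-water concentration, which is why brief water-table rises can dominate the load.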
Li, Bai; Chiong, Raymond; Lin, Mu
2015-02-01
Protein structure prediction is a fundamental issue in the field of computational molecular biology. In this paper, the AB off-lattice model is adopted to transform the original protein structure prediction scheme into a numerical optimization problem. We present a balance-evolution artificial bee colony (BE-ABC) algorithm to address the problem, with the aim of finding the structure for a given protein sequence with the minimal free-energy value. This is achieved through the use of convergence information during the optimization process to adaptively manipulate the search intensity. Besides that, an overall degradation procedure is introduced as part of the BE-ABC algorithm to prevent premature convergence. Comprehensive simulation experiments based on the well-known artificial Fibonacci sequence set and several real sequences from the database of Protein Data Bank have been carried out to compare the performance of BE-ABC against other algorithms. Our numerical results show that the BE-ABC algorithm is able to outperform many state-of-the-art approaches and can be effectively employed for protein structure optimization. PMID:25463349
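For reference, the energy function that such algorithms minimize in the 2D AB off-lattice model combines a bending term with a Lennard-Jones-like, species-dependent pair term. This follows the standard Stillinger-style formulation, not necessarily the exact variant used in the paper:

```python
import math

def ab_energy(coords, seq):
    """Energy of the 2D AB off-lattice model: (1 - cos theta)/4 over
    consecutive bond angles, plus 4*(r**-12 - C*r**-6) over nonadjacent
    monomer pairs with C = 1 (AA), 0.5 (BB), -0.5 (AB). Bond lengths
    are assumed to be 1 (a model constraint, not enforced here)."""
    n = len(seq)
    e = 0.0
    # bending energy at each interior monomer
    for i in range(1, n - 1):
        ux, uy = coords[i][0] - coords[i-1][0], coords[i][1] - coords[i-1][1]
        vx, vy = coords[i+1][0] - coords[i][0], coords[i+1][1] - coords[i][1]
        cos_t = (ux*vx + uy*vy) / (math.hypot(ux, uy) * math.hypot(vx, vy))
        e += (1.0 - cos_t) / 4.0
    # species-dependent pair term over nonadjacent monomers
    for i in range(n - 2):
        for j in range(i + 2, n):
            r = math.hypot(coords[i][0] - coords[j][0],
                           coords[i][1] - coords[j][1])
            c = 1.0 if seq[i] == seq[j] == 'A' else (0.5 if seq[i] == seq[j] else -0.5)
            e += 4.0 * (r**-12 - c * r**-6)
    return e
```

BE-ABC searches the space of conformations (bond angles) for the minimum of this function for a given A/B sequence.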
NASA Astrophysics Data System (ADS)
Rabbani, Masoud; Montazeri, Mona; Farrokhi-Asl, Hamed; Rafiei, Hamed
2016-07-01
Mixed-model assembly lines are increasingly accepted in many industrial environments to meet the growing trend of greater product variability, diversification of customer demands, and shorter life cycles. In this research, a new mathematical model is presented that considers balancing a mixed-model U-line and human-related issues simultaneously. The objective function consists of two separate components. The first part is related to the balance problem: minimizing the cycle time, minimizing the number of workstations, and maximizing line efficiency. The second part is related to human issues and consists of hiring cost, firing cost, training cost, and salary. To solve the presented model, two well-known multi-objective evolutionary algorithms, namely the non-dominated sorting genetic algorithm and multi-objective particle swarm optimization, have been used. A simple solution representation is provided in this paper to encode the solutions. Finally, the computational results are compared and analyzed.
Parallel algorithms for dynamically partitioning unstructured grids
Diniz, P.; Plimpton, S.; Hendrickson, B.; Leland, R.
1994-10-01
Grid partitioning is the method of choice for decomposing a wide variety of computational problems into naturally parallel pieces. In problems where computational load on the grid or the grid itself changes as the simulation progresses, the ability to repartition dynamically and in parallel is attractive for achieving higher performance. We describe three algorithms suitable for parallel dynamic load-balancing which attempt to partition unstructured grids so that computational load is balanced and communication is minimized. The execution time of algorithms and the quality of the partitions they generate are compared to results from serial partitioners for two large grids. The integration of the algorithms into a parallel particle simulation is also briefly discussed.
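One classical geometric approach in this family is recursive coordinate bisection (RCB), which splits the grid points in half along the longest axis until the desired number of parts is reached. A serial sketch of the idea for 2D points; the paper's algorithms run in parallel and are not necessarily RCB:

```python
def rcb(points, nparts):
    """Recursive coordinate bisection: repeatedly split the point set
    in half along its coordinate axis of greatest extent. Balances
    point counts; communication cost depends on cut quality."""
    if nparts == 1:
        return [points]
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    axis = 0 if (max(xs) - min(xs)) >= (max(ys) - min(ys)) else 1
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2
    return rcb(pts[:mid], nparts // 2) + rcb(pts[mid:], nparts - nparts // 2)
```

Dynamic repartitioning amounts to rerunning such a method (ideally incrementally and in parallel) as the computational load on the grid evolves.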
Vandamme, Elke; Wissuwa, Matthias; Rose, Terry; Dieng, Ibnou; Drame, Khady N.; Fofana, Mamadou; Senthilkumar, Kalimuthu; Venuprasad, Ramaiah; Jallow, Demba; Segda, Zacharie; Suriyagoda, Lalith; Sirisena, Dinarathna; Kato, Yoichiro; Saito, Kazuki
2016-01-01
More than 60% of phosphorus (P) taken up by rice (Oryza spp.) is accumulated in the grains at harvest and hence exported from fields, leading to a continuous removal of P. If P removed from fields is not replaced by P inputs then soil P stocks decline, with consequences for subsequent crops. Breeding rice genotypes with a low concentration of P in the grains could be a strategy to reduce maintenance fertilizer needs and slow soil P depletion in low input systems. This study aimed to assess variation in grain P concentrations among rice genotypes across diverse environments and evaluate the implications for field P balances at various grain yield levels. Multi-location screening experiments were conducted at different sites across Africa and Asia and yield components and grain P concentrations were determined at harvest. Genotypic variation in grain P concentration was evaluated while considering differences in P supply and grain yield using cluster analysis to group environments and boundary line analysis to determine minimum grain P concentrations at various yield levels. Average grain P concentrations across genotypes varied almost 3-fold among environments, from 1.4 to 3.9 mg g−1. Minimum grain P concentrations associated with grain yields of 150, 300, and 500 g m−2 varied between 1.2 and 1.7, 1.3 and 1.8, and 1.7 and 2.2 mg g−1 among genotypes respectively. Two genotypes, Santhi Sufaid and DJ123, were identified as potential donors for breeding for low grain P concentration. Improvements in P balances that could be achieved by exploiting this genotypic variation are in the range of less than 0.10 g P m−2 (1 kg P ha−1) in low yielding systems, and 0.15–0.50 g P m−2 (1.5–5.0 kg P ha−1) in higher yielding systems. Improved crop management and alternative breeding approaches may be required to achieve larger reductions in grain P concentrations in rice. PMID:27729916
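The field P-balance implication can be sketched numerically: at a given yield, lowering grain P concentration reduces grain P export one-for-one. Function name and input conventions are illustrative:

```python
def field_p_balance(p_input_g_m2, grain_yield_g_m2, grain_p_mg_g):
    """Field P balance (g P m^-2): P applied minus P exported in grain.
    Grain export = yield (g m^-2) * grain P concentration (mg g^-1)."""
    grain_export = grain_yield_g_m2 * grain_p_mg_g / 1000.0  # mg/g -> g/g
    return p_input_g_m2 - grain_export

# At a 300 g m^-2 yield, cutting grain P from 1.8 to 1.3 mg g^-1
# improves the balance by 0.15 g P m^-2, within the abstract's range.
gain = field_p_balance(0.0, 300.0, 1.3) - field_p_balance(0.0, 300.0, 1.8)
```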
Waltham, Nathan J; Reichelt-Brushett, Amanda; McCann, Damian; Eyre, Bradley D
2014-12-01
Optimizing the utility of constructed waterways as residential development with water-frontage, along with a productive and functional habitat for wildlife is of considerable interest to managers. This study examines Lake Hugh Muntz, a large (17 ha) freshwater lake built in Gold Coast City, Australia. A ten year water quality monitoring programme shows that the lake has increasing nutrient concentrations, and together with summer algal blooms, the lake amenity as a popular recreational swimming and triathlon training location is at risk. A survey of fish and aquatic plant communities showed that the lake supports a sub-set of species found in adjacent natural wetlands. Sediment contaminants were below the lower Australian trigger values, except As, Hg, Pb and Zn, probably a function of untreated and uncontrolled stormwater runoff from nearby urban roads. Sediment biogeochemistry showed early signs of oxygen depletion, and an increase in benthic organic matter decomposition and oxygen consumption will result in more nitrogen recycled to the water column as NH4(+) (increasing the intensity of summer algal blooms) and less nitrogen lost to the atmosphere as N2 gas via denitrification. A series of catchment restoration initiatives were modeled and the optimal stormwater runoff restoration effort needed for lake protection will be costly, particularly retrospective, as is the case here. Overall, balancing the lifestyles and livelihoods of residents along with ecosystem protection are possible, but require considerable trade-offs between ecosystem services and human use. PMID:25384753
Roghani, Tayebeh; Torkaman, Giti; Movasseghe, Shafieh; Hedayati, Mehdi; Goosheh, Babak; Bayat, Noushin
2013-02-01
The aim of this study is to evaluate the effect of submaximal aerobic exercise with and without external loading on bone metabolism and balance in postmenopausal women with osteoporosis (OP). Thirty-six volunteer, sedentary postmenopausal women with OP were randomly divided into three groups: aerobic, weighted vest, and control. Exercise for the aerobic group consisted of 18 sessions of submaximal treadmill walking, 30 min daily, 3 times a week. The exercise program for the weighted-vest group was identical to that of the aerobic group except that the subjects wore a weighted vest (4-8 % of body weight). Body composition, bone biomarkers, bone-specific alkaline phosphatase (BALP) and N-terminal telopeptide of type 1 collagen (NTX), and balance (near tandem stand, NTS, and star-excursion, SE) were measured before and after the 6-week exercise program. Fat decreased (p = 0.01) and fat-free mass increased (p = 0.005) significantly in the weighted-vest group. BALP increased and NTX decreased significantly in both exercise groups (p ≤ 0.05). After 6 weeks of exercise, NTS score increased in the exercise groups and decreased in the control group (aerobic: +49.68 %, weighted vest: +104.66 %, and control: -28.96 %). SE values for all directions increased significantly in the weighted-vest group. Results showed that the two exercise programs stimulate bone synthesis and decrease bone resorption in postmenopausal women with OP, but that exercise while wearing a weighted vest is better for improving balance.
Massaro, Luciana; Massa, Fabrizio; Simpson, Kathy; Fragaszy, Dorothy; Visalberghi, Elisabetta
2016-04-01
The ability to carry objects has been considered an important selective pressure favoring the evolution of bipedal locomotion in early hominins. Comparable behaviors by extant primates have been studied very little, as few primates habitually carry objects bipedally. However, wild bearded capuchins living at Fazenda Boa Vista spontaneously and habitually transport stone tools by walking bipedally, allowing us to examine the characteristics of bipedal locomotion during object transport by a generalized primate. In this pilot study, we investigated the mechanical aspects of position and velocity of the center of mass, trunk inclination, and forelimb postures, and the torque of the forces applied on each anatomical segment in wild bearded capuchin monkeys during the transport of objects, with particular attention to the tail and its role in balancing the body. Our results indicate that body mass strongly affects the posture of transport and that capuchins are able to carry heavy loads bipedally with a bent-hip-bent-knee posture, thanks to the "strategic" use of their extendable tail; in fact, without this anatomical structure, constituting only 5 % of their body mass, they would be unable to transport the loads that they habitually carry. PMID:26733456
NASA Astrophysics Data System (ADS)
Reba, M. L.; Marks, D.; Link, T.; Pomeroy, J.; Winstral, A.
2007-12-01
Energy balance models use physically based principles to simulate snow cover accumulation and melt. Snobal, a snow cover energy balance model, uses a flux-profile approach to calculating the turbulent flux (sensible and latent heat flux) components of the energy balance. Historically, validation data for turbulent flux simulations have been difficult to obtain at snow-dominated sites characterized by complex terrain and heterogeneous vegetation. Currently, eddy covariance (EC) is the most defensible method available to measure turbulent flux and hence to validate this component of an energy balance model. EC was used to measure sensible and latent heat flux at two sites over three winter seasons (2004, 2005, and 2006). Both sites are located in Reynolds Creek Experimental Watershed in southwestern Idaho, USA and are characterized as semi-arid rangeland. One site is on a wind-exposed ridge with small shrubs and the other is in a wind-protected area in a small aspen stand. EC data were post-processed from 10 Hz measurements. The first objective of this work was to compare EC-measured sensible and latent heat flux and sublimation/condensation to Snobal-simulated values. Comparisons were made on several temporal scales, including inter-annual, seasonal and diurnal. The flux-profile method used in Snobal assumes equal roughness lengths for moisture and temperature, and roughness lengths are constant and not a function of stability. Furthermore, there has been extensive work on improving profile function constants that is not considered in the current version of Snobal. Therefore, the second objective of this work was to modify the turbulent flux algorithm in Snobal. Modifications were made to calculate roughness lengths as a function of stability and separately for moisture and temperature. Also, more recent formulations of the profile function constants were incorporated. The third objective was to compare EC-measured sensible and latent heat flux and sublimation
Analysis of Sting Balance Calibration Data Using Optimized Regression Models
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert; Bader, Jon B.
2009-01-01
Calibration data of a wind tunnel sting balance were processed using a search algorithm that identifies an optimized regression model for the data analysis. The selected sting balance had two moment gages that were mounted forward and aft of the balance moment center. The difference and the sum of the two gage outputs were fitted in the least squares sense using the normal force and the pitching moment at the balance moment center as independent variables. The regression model search algorithm predicted that the difference of the gage outputs should be modeled using the intercept and the normal force. The sum of the two gage outputs, on the other hand, should be modeled using the intercept, the pitching moment, and the square of the pitching moment. Equations of the deflection of a cantilever beam are used to show that the search algorithm's two recommended math models can also be obtained after performing a rigorous theoretical analysis of the deflection of the sting balance under load. The analysis of the sting balance calibration data set is a rare example of a situation when regression models of balance calibration data can directly be derived from first principles of physics and engineering. In addition, it is interesting to see that the search algorithm recommended the same regression models for the data analysis using only a set of statistical quality metrics.
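The two recommended regression models can be fitted with ordinary least squares. A sketch on synthetic data (the actual calibration data and software are not reproduced here; the coefficient values below are arbitrary):

```python
import numpy as np

def fit_gage_models(normal_force, pitching_moment, gage_fwd, gage_aft):
    """Fit the two models the search algorithm selected:
    difference ~ intercept + N, and sum ~ intercept + M + M**2.
    Returns the two least-squares coefficient vectors."""
    diff = gage_fwd - gage_aft
    total = gage_fwd + gage_aft
    x_diff = np.column_stack([np.ones_like(normal_force), normal_force])
    x_sum = np.column_stack([np.ones_like(pitching_moment),
                             pitching_moment, pitching_moment**2])
    c_diff, *_ = np.linalg.lstsq(x_diff, diff, rcond=None)
    c_sum, *_ = np.linalg.lstsq(x_sum, total, rcond=None)
    return c_diff, c_sum

# Synthetic calibration sweep with known (arbitrary) coefficients
N = np.linspace(-1.0, 1.0, 21)
M = np.linspace(-2.0, 2.0, 21)
diff = 0.5 + 2.0 * N
total = 1.0 + 3.0 * M + 0.25 * M**2
c_diff, c_sum = fit_gage_models(N, M, (total + diff) / 2, (total - diff) / 2)
```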
Technology Transfer Automated Retrieval System (TEKTRAN)
Accurate estimates of annual nutrient loads are required to evaluate trends in water quality following changes in land use or management and to calibrate and validate water quality models. While much emphasis has been placed on understanding the uncertainty of watershed-scale nutrient load estimates...
Parallel contact detection algorithm for transient solid dynamics simulations using PRONTO3D
Attaway, S.W.; Hendrickson, B.A.; Plimpton, S.J.
1996-09-01
An efficient, scalable, parallel algorithm for treating material surface contacts in solid mechanics finite element programs has been implemented in a modular way for MIMD parallel computers. The serial contact detection algorithm that was developed previously for the transient dynamics finite element code PRONTO3D has been extended for use in parallel computation by devising a dynamic (adaptive) processor load balancing scheme.
Better Bonded Ethernet Load Balancing
Gabler, Jason
2006-09-29
When a High Performance Storage System's mover shuttles large amounts of data to storage over a single Ethernet device, that single channel can rapidly become saturated. Using Linux Ethernet channel bonding to address this and similar situations was not, until now, a viable solution. The various modes in which channel bonding could be configured always offered some benefit, but only under strict conditions or at a system resource cost that was greater than the benefit gained by using channel bonding. Newer bonding modes designed by various networking hardware companies, helpful in such networking scenarios, were already present in their own switches. However, Linux-based systems were unable to take advantage of those new modes, as they had not yet been implemented in the Linux kernel bonding driver. So, except for basic fault tolerance, Linux channel bonding could not positively combine separate Ethernet devices to provide the necessary bandwidth.
Akbari, H.; Rainer, L.; Heinemeier, K.; Huang, J.; Franconi, E.
1993-01-01
The Southern California Edison Company (SCE) has conducted an extensive metering project in which electricity end use in 53 commercial buildings in Southern California has been measured. The building types monitored include offices, retail stores, groceries, restaurants, and warehouses. One year (June 1989 through May 1990) of the SCE measured hourly end-use data are reviewed in this report. Annual whole-building and end-use energy use intensities (EUIs) and monthly load shapes (LSs) have been calculated for the different building types based on the monitored data. This report compares the monitored buildings' EUIs and LSs to EUIs and LSs determined using whole-building load data and the End-Use Disaggregation Algorithm (EDA). Two sets of EDA-determined EUIs and LSs are compared to the monitored data values. The data sets represent: (1) average buildings in the SCE service territory and (2) specific buildings that were monitored.
Code of Federal Regulations, 2011 CFR
2011-01-01
... Balancing Surfaces § 23.425 Gust loads. (a) Each horizontal surface, other than a main wing, must be... for the conditions specified in paragraph (a) of this section, the initial balancing loads for steady... load resulting from the gusts must be added to the initial balancing load to obtain the total load....
Gokhale, Sharad; Kohajda, Tibor; Schlink, Uwe
2008-12-15
A number of past studies have shown the prevalence of a considerable amount of volatile organic compounds (VOCs) in workplace, home and outdoor microenvironments. The quantification of an individual's personal exposure to VOCs in each of these microenvironments is an essential task for recognizing the health risks. In this paper, such a study of source apportionment of the human exposure to VOCs in homes, offices, and outdoors is presented. Air samples, analysed for 25 organic compounds and collected during one week in homes, offices, outdoors and close to persons at seven locations in the city of Leipzig, have been utilized to recognize the concentration pattern of VOCs using the chemical mass balance (CMB) receptor model. As a result, the largest contribution of VOCs to the personal exposure is from homes, in the range of 42 to 73%, followed by outdoors, 18 to 34%, and the offices, 2 to 38%, with corresponding concentration ranges of 35 to 80 microg m(-3), 10 to 45 microg m(-3) and 1 to 30 microg m(-3), respectively. Species such as benzene, dodecane, decane, methyl-cyclopentane, triethyltoluene and trichloroethylene dominate outdoors; methyl-cyclohexane, triethyltoluene, nonane, octane, tetraethyltoluene and undecane are highest in the offices; while terpenoids such as 3-carene, limonene, α-pinene and β-pinene, together with the aromatics toluene and styrene, most influence the homes. A genetic algorithm (GA) model has also been applied to carry out the source apportionment. Its results are comparable with those of CMB. PMID:18822447
Fast algorithm for relaxation processes in big-data systems
NASA Astrophysics Data System (ADS)
Hwang, S.; Lee, D.-S.; Kahng, B.
2014-10-01
Relaxation processes driven by a Laplacian matrix can be found in many real-world big-data systems, for example, in search engines on the World Wide Web and the dynamic load-balancing protocols in mesh networks. To numerically implement such processes, a fast-running algorithm for the calculation of the pseudoinverse of the Laplacian matrix is essential. Here we propose an algorithm which computes quickly and efficiently the pseudoinverse of Markov chain generator matrices satisfying the detailed-balance condition, a general class of matrices including the Laplacian. The algorithm utilizes the renormalization of the Gaussian integral. In addition to its applicability to a wide range of problems, the algorithm outperforms other algorithms in its ability to compute within a manageable computing time arbitrary elements of the pseudoinverse of a matrix of size millions by millions. Therefore our algorithm can be used very widely in analyzing the relaxation processes occurring on large-scale networked systems.
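For moderate matrix sizes, the pseudoinverse that drives such relaxation calculations can be computed directly; the sketch below uses NumPy's SVD-based `pinv` on a small graph Laplacian. The paper's renormalization-based algorithm targets matrices far too large for this direct O(n^3) approach.

```python
import numpy as np

# Graph Laplacian of a 4-node path graph: L = D - A.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

# Direct pseudoinverse via SVD; fine for small n, infeasible for the
# millions-by-millions matrices the paper's algorithm addresses.
L_pinv = np.linalg.pinv(L)

# For a symmetric Laplacian, L @ L_pinv is the projector onto the
# complement of the constant (zero-relaxation-rate) mode.
P = L @ L_pinv
expected = np.eye(4) - np.ones((4, 4)) / 4
```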
NASA Astrophysics Data System (ADS)
Mahto, Tarkeshwar; Mukherjee, V.
2016-09-01
In the present work, a two-area thermal-hybrid interconnected power system is considered, consisting of a thermal unit in one area and a hybrid wind-diesel unit in the other area. Capacitive energy storage (CES) and CES with a static synchronous series compensator (SSSC) are connected to the studied two-area model to compensate for varying load demand, intermittent output power and area frequency oscillation. A novel quasi-opposition harmony search (QOHS) algorithm is proposed and applied to tune the various tunable parameters of the studied power system model. The simulation study reveals that inclusion of a CES unit in both areas yields superb damping performance for frequency and tie-line power deviation. The simulation results further reveal that inclusion of the SSSC is not viable from either a technical or an economical point of view, as no considerable improvement in transient performance is noted with its inclusion in the tie-line of the studied power system model. The results presented in this paper demonstrate the potential of the proposed QOHS algorithm and show its effectiveness and robustness for solving frequency and power drift problems of the studied power systems. A binary-coded genetic algorithm is used for the sake of comparison.
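The abstract does not spell out the QOHS operator. Conventionally, quasi-opposition-based methods draw a candidate between the interval centre and the opposite point a + b - x; the sketch below shows only that standard operator, as an assumption about the flavor of algorithm used, not the paper's actual QOHS implementation.

```python
import random

def quasi_opposite(x, a, b, rng=random.Random(42)):
    """Quasi-opposite of candidate x on [a, b]: a uniformly random point
    between the interval centre and the opposite point a + b - x.
    Standard quasi-opposition operator; illustrative only."""
    centre = (a + b) / 2.0
    opposite = a + b - x
    lo, hi = min(centre, opposite), max(centre, opposite)
    return rng.uniform(lo, hi)

q = quasi_opposite(0.9, 0.0, 1.0)   # lies between 0.1 and 0.5
```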
CAST: Contraction Algorithm for Symmetric Tensors
Rajbhandari, Samyam; Nikam, Akshay; Lai, Pai-Wei; Stock, Kevin; Krishnamoorthy, Sriram; Sadayappan, Ponnuswamy
2014-09-22
Tensor contractions represent the most compute-intensive core kernels in ab initio computational quantum chemistry and nuclear physics. Symmetries in these tensor contractions make them difficult to load balance and scale to large distributed systems. In this paper, we develop an efficient and scalable algorithm to contract symmetric tensors. We introduce a novel approach that avoids data redistribution in contracting symmetric tensors while also avoiding redundant storage and maintaining load balance. We present experimental results on two parallel supercomputers for several symmetric contractions that appear in the CCSD quantum chemistry method. We also present a novel approach to tensor redistribution that can take advantage of parallel hyperplanes when the initial distribution has replicated dimensions, and use collective broadcast when the final distribution has replicated dimensions, making the algorithm very efficient.
McKeever, John W; Reddy, Patel; Jahns, Thomas M
2007-05-01
Surface permanent magnet (SPM) synchronous machines using fractional-slot concentrated windings are being investigated as candidates for high-performance traction machines for automotive electric propulsion systems. It has been shown analytically and experimentally that such designs can achieve very wide constant-power speed ratios (CPSR) [1,2]. This work has shown that machines of this type are capable of achieving very low cogging torque amplitudes as well as significantly increasing the machine power density [3-5] compared to SPM machines using conventional distributed windings. High efficiency can be achieved in this class of SPM machine by making special efforts to suppress the eddy-current losses in the magnets [6-8], accompanied by efforts to minimize the iron losses in the rotor and stator cores. Considerable attention has traditionally been devoted to maximizing the full-load efficiency of traction machines at their rated operating points and along their maximum-power vs. speed envelopes for higher speeds [9,10]. For example, on-line control approaches have been presented for maximizing the full-load efficiency of PM synchronous machines, including the use of negative d-axis stator current to reduce the core losses [11,12]. However, another important performance specification for electric traction applications is the machine's efficiency at partial loads. Partial-load efficiency is particularly important if the target traction application requires long periods of cruising operation at light loads that are significantly lower than the maximum drive capabilities. While the design of the machine itself is clearly important, investigation has shown that this is a case where the choice of the control algorithm plays a critical role in determining the maximum partial-load efficiency that the machine actually achieves in the traction drive system. There is no evidence that this important topic has been addressed for this type of SPM machine by any other authors.
NASA Astrophysics Data System (ADS)
Ghaedi, M.; Azad, F. Nasiri; Dashtian, K.; Hajati, S.; Goudarzi, A.; Soylak, M.
2016-10-01
Maximum malachite green (MG) adsorption onto ZnO nanorod-loaded activated carbon (ZnO-NR-AC) was achieved following the optimization of conditions, while the mass transfer was accelerated by ultrasound. The central composite design (CCD) and genetic algorithm (GA) were used to estimate the effect of individual variables and their mutual interactions on the MG adsorption as response and to optimize the adsorption process. The ZnO-NR-AC surface morphology and its properties were identified via FESEM, XRD and FTIR. Investigation of the adsorption equilibrium isotherm and kinetic models revealed that the experimental data were well fitted by the Langmuir isotherm and the pseudo-second-order kinetic model, respectively. It was shown that a small amount of ZnO-NR-AC (with an adsorption capacity of 20 mg g-1) is sufficient for the rapid removal of a high amount of MG dye in a short time (3.99 min).
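A Langmuir-isotherm fit of the kind reported can be sketched with SciPy. The equilibrium data below are hypothetical round numbers, not the paper's measurements; only the model form q = qmax*KL*Ce/(1 + KL*Ce) follows the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    """Langmuir isotherm: adsorbed amount q as a function of the
    equilibrium concentration Ce."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

# Hypothetical equilibrium data (Ce in mg/L, qe in mg/g), generated
# from known parameters so the fit can be checked.
Ce = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 40.0])
qe = langmuir(Ce, qmax=20.0, KL=0.5)

popt, _ = curve_fit(langmuir, Ce, qe, p0=[10.0, 0.1])
qmax_fit, KL_fit = popt
```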
Xu, Zuwei; Zhao, Haibo; Zheng, Chuguang
2015-01-15
This paper proposes a comprehensive framework for accelerating population balance-Monte Carlo (PBMC) simulation of particle coagulation dynamics. By combining Markov jump model, weighted majorant kernel and GPU (graphics processing unit) parallel computing, a significant gain in computational efficiency is achieved. The Markov jump model constructs a coagulation-rule matrix of differentially-weighted simulation particles, so as to capture the time evolution of particle size distribution with low statistical noise over the full size range and as far as possible to reduce the number of time loopings. Here three coagulation rules are highlighted and it is found that constructing appropriate coagulation rule provides a route to attain the compromise between accuracy and cost of PBMC methods. Further, in order to avoid double looping over all simulation particles when considering the two-particle events (typically, particle coagulation), the weighted majorant kernel is introduced to estimate the maximum coagulation rates being used for acceptance–rejection processes by single-looping over all particles, and meanwhile the mean time-step of coagulation event is estimated by summing the coagulation kernels of rejected and accepted particle pairs. The computational load of these fast differentially-weighted PBMC simulations (based on the Markov jump model) is reduced greatly to be proportional to the number of simulation particles in a zero-dimensional system (single cell). Finally, for a spatially inhomogeneous multi-dimensional (multi-cell) simulation, the proposed fast PBMC is performed in each cell, and multiple cells are parallel processed by multi-cores on a GPU that can implement the massively threaded data-parallel tasks to obtain remarkable speedup ratio (comparing with CPU computation, the speedup ratio of GPU parallel computing is as high as 200 in a case of 100 cells with 10 000 simulation particles per cell). These accelerating approaches of PBMC are
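The single-loop acceptance-rejection idea can be illustrated in miniature: a candidate particle pair is drawn, then accepted with probability equal to the true coagulation kernel over a majorant bound. The particle sizes and kernels below are toy values; the paper's differentially-weighted, GPU-parallel scheme is far more elaborate.

```python
import random

def coagulation_step(v, kernel, k_maj, rng=random.Random(1)):
    """One acceptance-rejection coagulation event. k_maj(vi, vj) must
    majorize kernel(vi, vj); a candidate pair is accepted with
    probability kernel/k_maj. Illustrative sketch only."""
    while True:
        i, j = rng.sample(range(len(v)), 2)
        if rng.random() < kernel(v[i], v[j]) / k_maj(v[i], v[j]):
            v[i] = v[i] + v[j]   # merge particle j into particle i
            del v[j]
            return v

sizes = [1.0, 1.0, 2.0, 3.0]
total = sum(sizes)
# Product kernel; constant 10.0 majorizes it on these sizes (max 2*3=6).
sizes = coagulation_step(sizes, kernel=lambda a, b: a * b,
                         k_maj=lambda a, b: 10.0)
```

Mass is conserved by each event: one particle is removed and its volume added to its partner.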
NASA Astrophysics Data System (ADS)
Tarroja, Brian; Eichman, Joshua D.; Zhang, Li; Brown, Tim M.; Samuelsen, Scott
2015-03-01
A study has been performed that analyzes the effectiveness of utilizing plug-in vehicles to meet holistic environmental goals across the combined electricity and transportation sectors. In this study, plug-in hybrid electric vehicle (PHEV) penetration levels are varied from 0 to 60% and base renewable penetration levels are varied from 10 to 63%. The first part focuses on the effect of installing plug-in hybrid electric vehicles on the environmental performance of the combined electricity and transportation sectors. The second part addresses impacts on the design and operation of load-balancing resources on the electric grid associated with fleet capacity factor, peaking and load-following generator capacity, efficiency, ramp rates, start-up events and the levelized cost of electricity. PHEVs using smart charging are found to counteract many of the disruptive impacts of intermittent renewable power on balancing generators for a wide range of renewable penetration levels, only becoming limited at high renewable penetration levels due to lack of flexibility and finite load size. This study highlights synergy between sustainability measures in the electric and transportation sectors and the importance of communicative dispatch of these vehicles.
Irwin, John A.
1979-01-01
A gas turbine engine has an internal drive shaft including one end connected to a driven load and an opposite end connected to a turbine wheel and wherein the shaft has an in situ adjustable balance system near the critical center of a bearing span for the shaft including two 360° rings piloted on the outer diameter of the shaft at a point accessible through an internal engine panel; each of the rings has a small amount of material removed from its periphery whereby both of the rings are precisely unbalanced an equivalent amount; the rings are locked circumferentially together by radial serrations thereon; numbered tangs on the outside diameter of each ring identify the circumferential location of unbalance once the rings are locked together; an aft ring of the pair of rings has a spline on its inside diameter that mates with a like spline on the shaft to lock the entire assembly together.
Improved Regression Analysis of Temperature-Dependent Strain-Gage Balance Calibration Data
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2015-01-01
An improved approach is discussed that may be used to directly include first and second order temperature effects in the load prediction algorithm of a wind tunnel strain-gage balance. The improved approach was designed for the Iterative Method that fits strain-gage outputs as a function of calibration loads and uses a load iteration scheme during the wind tunnel test to predict loads from measured gage outputs. The improved approach assumes that the strain-gage balance is at a constant uniform temperature when it is calibrated and used. First, the method introduces a new independent variable for the regression analysis of the balance calibration data. The new variable is defined as the difference between the uniform temperature of the balance and a global reference temperature. This reference temperature should be the primary calibration temperature of the balance so that, if needed, a tare load iteration can be performed. Then, two temperature-dependent terms are included in the regression models of the gage outputs. They are the temperature difference itself and the square of the temperature difference. Simulated temperature-dependent data obtained from Triumph Aerospace's 2013 calibration of NASA's ARC-30K five component semi-span balance is used to illustrate the application of the improved approach.
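The augmented regression model can be sketched directly: the design matrix simply gains the temperature difference and its square as extra columns. The calibration data below are simulated placeholders, not the ARC-30K values.

```python
import numpy as np

# Hypothetical calibration points: load F, balance temperature T, and
# measured gage output R. T_ref plays the role of the primary
# calibration temperature from the abstract.
rng = np.random.default_rng(3)
F = rng.uniform(-50, 50, 60)
T = rng.uniform(280, 320, 60)
T_ref = 295.0
dT = T - T_ref
R = 0.5 + 0.02 * F + 1e-3 * dT + 4e-6 * dT**2 + rng.normal(0, 1e-4, 60)

# Regression model with the two temperature-dependent terms appended:
# the temperature difference itself and its square.
X = np.column_stack([np.ones_like(F), F, dT, dT**2])
coef, *_ = np.linalg.lstsq(X, R, rcond=None)
```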
NASA Technical Reports Server (NTRS)
Simanonok, K. E.; Fortney, S. M.; Ford, S. R.; Charles, J. B.; Ward, D. F.
1994-01-01
Shuttle astronauts currently drink approximately a quart of water with eight salt tablets before reentry to restore lost body fluid and thereby reduce the likelihood of cardiovascular instability and syncope during reentry and after landing. However, the saline loading countermeasure is not entirely effective in restoring orthostatic tolerance to preflight levels. We tested the hypothesis that the effectiveness of this countermeasure could be improved with the use of a vasopressin analog, 1-deamino-8-D-arginine vasopressin (dDAVP). The rationale for this approach is that reducing urine formation with exogenous vasopressin should increase the magnitude and duration of the vascular volume expansion produced by the saline load, and in so doing improve orthostatic tolerance during reentry and postflight.
NASA Astrophysics Data System (ADS)
Magirl, C. S.; Czuba, J. A.; Czuba, C. R.; Curran, C. A.
2012-12-01
Despite heavy sediment loads, large winter floods, and floodplain development, the rivers draining Mount Rainier, a 4,392-m glaciated stratovolcano within 85 km of sea level at Puget Sound, Washington, support important populations of anadromous salmonids, including Chinook salmon and steelhead trout, both listed as threatened under the Endangered Species Act. Aggressive river-management approaches of the early 20th century, such as bank armoring and gravel dredging, are being replaced by more ecologically sensitive approaches including setback levees. However, ongoing aggradation rates of up to 8 cm/yr in lowland reaches present acute challenges for resource managers tasked with ensuring flood protection without deleterious impacts to aquatic ecology. Using historical sediment-load data and a recent reservoir survey of sediment accumulation, rivers draining Mount Rainier were found to carry total sediment yields of 350 to 2,000 tonnes/km2/yr, notably larger than sediment yields of 50 to 200 tonnes/km2/yr typical for other Cascade Range rivers. An estimated 70 to 94% of the total sediment load in lowland reaches originates from the volcano. Looking toward the future, transport-capacity analyses and sediment-transport modeling suggest that large increases in bedload and associated aggradation will result from modest increases in rainfall and runoff that are predicted under future climate conditions. If large sediment loads and associated aggradation continue, creative solutions and long-term management strategies are required to protect people and structures in the floodplain downstream of Mount Rainier while preserving aquatic ecosystems.
Technology Transfer Automated Retrieval System (TEKTRAN)
Reliable estimation of the surface energy balance from local to regional scales is crucial for many applications including weather forecasting, hydrologic modeling, irrigation scheduling, water resource management, and climate change research, just to name a few. Numerous models have been developed ...
NASA Astrophysics Data System (ADS)
Eckert, C. H. J.; Zenker, E.; Bussmann, M.; Albach, D.
2016-10-01
We present an adaptive Monte Carlo algorithm for computing the amplified spontaneous emission (ASE) flux in laser gain media pumped by pulsed lasers. With the design of high power lasers in mind, which require large size gain media, we have developed the open source code HASEonGPU that is capable of utilizing multiple graphic processing units (GPUs). With HASEonGPU, time to solution is reduced to minutes on a medium size GPU cluster of 64 NVIDIA Tesla K20m GPUs and excellent speedup is achieved when scaling to multiple GPUs. Comparison of simulation results to measurements of ASE in Yb3+:YAG ceramics shows perfect agreement.
Resource Balancing Control Allocation
NASA Technical Reports Server (NTRS)
Frost, Susan A.; Bodson, Marc
2010-01-01
Next generation aircraft with a large number of actuators will require advanced control allocation methods to compute the actuator commands needed to follow desired trajectories while respecting system constraints. Previously, algorithms were proposed to minimize the l1 or l2 norms of the tracking error and of the control effort. The paper discusses the alternative choice of using the l1 norm for minimization of the tracking error and a normalized l(infinity) norm, or sup norm, for minimization of the control effort. The algorithm computes the norm of the actuator deflections scaled by the actuator limits. Minimization of the control effort then translates into the minimization of the maximum actuator deflection as a percentage of its range of motion. The paper shows how the problem can be solved effectively by converting it into a linear program and solving it using a simplex algorithm. Properties of the algorithm are investigated through examples. In particular, the min-max criterion results in a type of resource balancing, where the resources are the control surfaces and the algorithm balances these resources to achieve the desired command. A study of the sensitivity of the algorithms to the data is presented, which shows that the normalized l(infinity) algorithm has the lowest sensitivity, although high sensitivities are observed whenever the limits of performance are reached.
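The conversion to a linear program can be sketched as follows: introduce a scalar t bounding each normalized deflection |u_i|/limit_i, and minimize t subject to the command equality B u = d. The actuator effectiveness matrix and demand below are illustrative values, not from the paper, and SciPy's general-purpose LP solver stands in for the simplex implementation discussed.

```python
import numpy as np
from scipy.optimize import linprog

def min_max_allocation(B, d, limits):
    """Allocate actuator commands u with B @ u = d while minimizing the
    largest |u_i| / limits[i] (normalized sup norm), posed as a linear
    program over the variables [u_1..u_m, t]."""
    m = B.shape[1]
    lim = np.asarray(limits, dtype=float).reshape(-1, 1)
    c = np.zeros(m + 1)
    c[-1] = 1.0                                  # minimize t
    # |u_i| <= limits[i] * t  ->  two inequality rows per actuator
    A_ub = np.block([[np.eye(m), -lim],
                     [-np.eye(m), -lim]])
    b_ub = np.zeros(2 * m)
    A_eq = np.hstack([B, np.zeros((B.shape[0], 1))])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=d,
                  bounds=[(None, None)] * m + [(0, None)])
    return res.x[:m], res.x[-1]

# Two actuators driving one axis: effectiveness B, demanded moment d.
u, t = min_max_allocation(np.array([[1.0, 1.0]]), np.array([1.5]),
                          limits=[1.0, 2.0])
# Both actuators saturate at the same fraction t = 0.5 of their limits,
# i.e. u = [0.5, 1.0]: the "resource balancing" behavior described above.
```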
NASA Technical Reports Server (NTRS)
Srinivasan, R. S.; Simanonok, K. E.; Charles, J. B.
1994-01-01
Fluid loading (FL) before Shuttle reentry is a countermeasure currently in use by NASA to improve the orthostatic tolerance of astronauts during reentry and postflight. The fluid load consists of water and salt tablets equivalent to 32 oz (946 ml) of isotonic saline. However, the effectiveness of this countermeasure has been observed to decrease with the duration of spaceflight. The countermeasure's effectiveness may be improved by enhancing fluid retention using analogs of vasopressin such as lypressin (LVP) and desmopressin (dDAVP). In a computer simulation study reported previously, we attempted to assess the improvement in fluid retention obtained by the use of LVP administered before FL. The present study is concerned with the use of dDAVP. In a recent 24-hour, 6 degree head-down tilt (HDT) study involving seven men, dDAVP was found to improve orthostatic tolerance as assessed by both lower body negative pressure (LBNP) and stand tests. The treatment restored Luft's cumulative stress index (cumulative product of magnitude and duration of LBNP) to nearly pre-bedrest level. The heart rate was lower and stroke volume was marginally higher at the same LBNP levels with administration of dDAVP compared to placebo. Lower heart rates were also observed with dDAVP during stand test, despite the lower level of cardiovascular stress. These improvements were seen with only a small but significant increase in plasma volume of approximately 3 percent. This paper presents a computer simulation analysis of some of the results of this HDT study.
Ivanauskiene, Kristina; Delbarre, Erwan; McGhie, James D.; Küntziger, Thomas
2014-01-01
Histone variant H3.3 is deposited in chromatin at active sites, telomeres, and pericentric heterochromatin by distinct chaperones, but the mechanisms of regulation and coordination of chaperone-mediated H3.3 loading remain largely unknown. We show here that the chromatin-associated oncoprotein DEK regulates differential HIRA- and DAXX/ATRX-dependent distribution of H3.3 on chromosomes in somatic cells and embryonic stem cells. Live cell imaging studies show that nonnucleosomal H3.3 normally destined to PML nuclear bodies is re-routed to chromatin after depletion of DEK. This results in HIRA-dependent widespread chromatin deposition of H3.3 and H3.3 incorporation in the foci of heterochromatin in a process requiring the DAXX/ATRX complex. In embryonic stem cells, loss of DEK leads to displacement of PML bodies and ATRX from telomeres, redistribution of H3.3 from telomeres to chromosome arms and pericentric heterochromatin, induction of a fragile telomere phenotype, and telomere dysfunction. Our results indicate that DEK is required for proper loading of ATRX and H3.3 on telomeres and for telomeric chromatin architecture. We propose that DEK acts as a “gatekeeper” of chromatin, controlling chromatin integrity by restricting broad access to H3.3 by dedicated chaperones. Our results also suggest that telomere stability relies on mechanisms ensuring proper histone supply and routing. PMID:25049225
Ghaedi, M; Shojaeipour, E; Ghaedi, A M; Sahraei, Reza
2015-05-01
In this study, copper nanowires loaded on activated carbon (Cu-NWs-AC) were used as a novel efficient adsorbent for the removal of malachite green (MG) from aqueous solution. This new material was synthesized through a simple protocol and its surface properties such as surface area, pore volume and functional groups were characterized with different techniques such as XRD, BET and FESEM analysis. The relation between removal percentage and variables such as solution pH, adsorbent dosage (0.005, 0.01, 0.015, 0.02 and 0.1 g), contact time (1-40 min) and initial MG concentration (5, 10, 20, 70 and 100 mg/L) was investigated and optimized. A three-layer artificial neural network (ANN) model was utilized to predict the malachite green dye removal (%) by Cu-NWs-AC following the conduction of 248 experiments. The parameters of the trained ANN model were as follows: a linear transfer function (purelin) at the output layer, the Levenberg-Marquardt algorithm (LMA), and a tangent sigmoid transfer function (tansig) at the hidden layer with 11 neurons. A minimum mean squared error (MSE) of 0.0017 and a coefficient of determination (R(2)) of 0.9658 were found for prediction and modeling of dye removal using the testing data set. Good agreement between experimental data and data predicted by the ANN model was obtained. Fitting the experimental data under the previously optimized conditions confirms the suitability of the Langmuir isotherm model, with a maximum adsorption capacity of 434.8 mg/g at 25 °C. Kinetic studies at various adsorbent masses and initial MG concentrations show that the maximum MG removal percentage was achieved within 20 min. The adsorption of MG follows the pseudo-second-order model combined with an intraparticle diffusion model. PMID:25699703
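The network topology described (a tansig hidden layer with 11 neurons and a purelin output) is easy to sketch. The forward pass below uses random illustrative weights; the Levenberg-Marquardt training step is omitted, and the input-variable ordering is an assumption.

```python
import numpy as np

def ann_forward(x, W1, b1, W2, b2):
    """Forward pass of the 3-layer network described in the abstract:
    tangent sigmoid (tanh) hidden layer, linear (purelin) output.
    Weights here are random placeholders, not trained values."""
    h = np.tanh(W1 @ x + b1)   # tansig hidden layer, 11 neurons
    return W2 @ h + b2         # purelin output: predicted removal (%)

rng = np.random.default_rng(7)
n_in, n_hidden = 4, 11         # assumed inputs: pH, dose, time, concentration
W1 = rng.normal(size=(n_hidden, n_in))
b1 = rng.normal(size=n_hidden)
W2 = rng.normal(size=(1, n_hidden))
b2 = rng.normal(size=1)

y = ann_forward(np.array([6.0, 0.01, 20.0, 50.0]), W1, b1, W2, b2)
```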
A Robust Load Shedding Strategy for Microgrid Islanding Transition
Liu, Guodong; Xiao, Bailu; Starke, Michael R; Ceylan, Oguzhan; Tomsovic, Kevin
2016-01-01
A microgrid is a group of interconnected loads and distributed energy resources. It can operate in either grid-connected mode to exchange energy with the main grid or run autonomously as an island in emergency mode. However, the transition of a microgrid from grid-connected mode to islanded mode is usually associated with excessive load (or generation), which should be shed (or spilled). Under this condition, this paper proposes a robust load shedding strategy for microgrid islanding transition, which takes into account the uncertainties of renewable generation in the microgrid and guarantees the balance between load and generation after islanding. A robust optimization model is formulated to minimize the total operation cost, including fuel cost and penalty for load shedding. The proposed robust load shedding strategy works as a backup plan and updates at a prescribed interval. It assures a feasible operating point after islanding given the uncertainty of renewable generation. The proposed algorithm is demonstrated on a simulated microgrid consisting of a wind turbine, a PV panel, a battery, two distributed generators (DGs), a critical load and an interruptible load. Numerical simulation results validate the proposed algorithm.
Method and apparatus for calibrating multi-axis load cells in a dexterous robot
NASA Technical Reports Server (NTRS)
Wampler, II, Charles W. (Inventor); Platt, Jr., Robert J. (Inventor)
2012-01-01
A robotic system includes a dexterous robot having robotic joints, angle sensors adapted for measuring joint angles at a corresponding one of the joints, load cells for measuring a set of strain values imparted to a corresponding one of the load cells during a predetermined pose of the robot, and a host machine. The host machine is electrically connected to the load cells and angle sensors, and receives the joint angle values and strain values during the predetermined pose. The robot presses together mating pairs of load cells to form the poses. The host machine executes an algorithm to process the joint angles and strain values and, from the set of all calibration matrices that minimize error in the force balance equations, selects the set of calibration matrices closest to a pre-specified value. A method for calibrating the load cells via the algorithm is also provided.
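A drastically simplified version of the calibration step can be sketched as an ordinary least-squares fit of a calibration matrix mapping strain readings to forces across many poses. The patent's additional selection among error-minimizing matrices is omitted here, and all array shapes are illustrative assumptions.

```python
import numpy as np

def fit_calibration(strains, forces):
    """Least-squares fit of a calibration matrix C with forces ~= C @ strains.

    strains: (s, p) array, s strain channels over p poses.
    forces:  (f, p) array, f force components over the same poses.
    Returns C of shape (f, s). A sketch, not the patent's full algorithm.
    """
    # lstsq solves strains.T @ X = forces.T for X = C.T
    X, *_ = np.linalg.lstsq(strains.T, forces.T, rcond=None)
    return X.T
```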
NASA Astrophysics Data System (ADS)
Yin, Zhendong; Zong, Zhiyuan; Sun, Hongjian; Wu, Zhilu; Yang, Zhutian
2012-12-01
In this article, an efficient multiuser detector based on the artificial fish swarm algorithm (AFSA-MUD) is proposed and investigated for direct-sequence ultrawideband systems under different channels: the additive white Gaussian noise channel and the IEEE 802.15.3a multipath channel. Two issues from the literature remain open: the computational complexity of classical optimum multiuser detection (OMD) rises exponentially with the number of users, and the bit error rate (BER) performance of sub-optimal multiuser detectors is not satisfactory. The proposed method makes a good tradeoff between complexity and performance through the various behaviors of artificial fishes in a simplified Euclidean solution space, which is constructed from the solutions of several sub-optimal multiuser detectors: the minimum mean square error detector, the decorrelating detector, and the successive interference cancellation detector. As a result of this novel scheme, the convergence speed of AFSA-MUD is greatly accelerated and the number of iterations is also significantly reduced. The experimental results demonstrate that the BER performance and the near-far effect resistance of the proposed algorithm are quite close to those of OMD, while its computational complexity is much lower than that of the traditional OMD. Moreover, as the number of active users increases, the BER performance of AFSA-MUD remains almost the same as that of OMD.
Chauhan, S S; Celi, P; Leury, B J; Dunshea, F R
2015-07-01
The objective of this study was to determine the efficacy of supranutritional dietary selenium and vitamin E (Vit E) to ameliorate the effect of heat stress (HS) on oxidative status and acid-base balance in sheep. Thirty-two Merino × Poll Dorset ewes were acclimated to indoor individual pen feeding of a pelleted control diet (0.24 g Se and 10 IU of Vit E/kg DM) for 1 wk. Sheep were then moved to metabolism cages in climatic chambers and randomly allocated to a 2 × 2 × 2 factorial design with the respective factors being dietary Se (0.24 and 1.20 mg/kg DM as Sel-Plex; Alltech, Australia), Vit E (10 and 100 IU/kg DM), and temperature for 2 wk. After 1 wk of acclimation in metabolic cages, 1 climatic chamber continued on thermoneutral (TN) conditions (18°C to 21°C and 40% to 50% relative humidity [RH]), and the other one was set to HS conditions (28°C to 40°C and 30% to 40% RH) for 1 wk. The sheep were then returned to individual pens and fed the control diet for 1 wk before being returned to the same diet as in the first period but a reversed thermal treatment for a further 2 wk. Physiological parameters were recorded 3 times daily, and blood samples were collected on d 1 and 7 of thermal treatment. Average respiration rate and rectal temperature of sheep were increased (P < 0.001) during HS; however, combined supranutritional supplementation of Se and Vit E reversed the effects of HS. Sheep given the high Se and high Vit E diet had a lower respiration rate (191 vs. 232 breaths/min; P = 0.012) and rectal temperature (40.33°C vs. 40.58°C; P = 0.039) under peak HS (1700 h) compared with those fed the low Se and low Vit E diet. Plasma reactive oxygen metabolites concentrations were reduced (P = 0.048) by 20%, whereas biological antioxidant potential was increased (P = 0.17) by 10% in sheep fed the high Se and high Vit E diet compared with those fed the low Se and low Vit E diet. Blood pH was elevated (P = 0.007) and bicarbonate was reduced (P = 0.049) under HS
NASA Technical Reports Server (NTRS)
Eidson, T. M.; Erlebacher, G.
1994-01-01
While parallel computers offer significant computational performance, it is generally necessary to evaluate several programming strategies. Two programming strategies for a fairly common problem, a periodic tridiagonal solver, are developed and evaluated. Simple model calculations as well as timing results are presented to evaluate the various strategies. The particular tridiagonal solver evaluated is used in many computational fluid dynamics simulation codes. The feature that makes this algorithm unique is that these simulation codes usually require simultaneous solutions for multiple right-hand sides (RHS) of the system of equations. Each RHS solution is independent and thus can be computed in parallel. Thus a Gaussian elimination type algorithm can be used in a parallel computation, and more complicated approaches such as cyclic reduction are not required. The two strategies are a transpose strategy and a distributed solver strategy. For the transpose strategy, the data are moved so that a subset of all the RHS problems is solved on each of the several processors. This usually requires significant data movement between processor memories across a network. The second strategy has the data flow across processor boundaries in a chained manner, which usually requires significantly less data movement. An approach to accomplish this second strategy in a near-perfect load-balanced manner is developed. In addition, an algorithm is shown to directly transform a sequential Gaussian elimination type algorithm into the parallel chained, load-balanced algorithm.
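The per-RHS Gaussian elimination mentioned above is the classical Thomas algorithm. A minimal sketch follows: each column of the right-hand-side array is an independent system, which is exactly the parallelism the two strategies distribute across processors. Function and variable names are illustrative.

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system A x = d for multiple right-hand sides.

    a: sub-diagonal (n-1,), b: diagonal (n,), c: super-diagonal (n-1,),
    d: (n, m) array; each of the m columns is an independent RHS, so
    different columns could be assigned to different processors.
    """
    n, m = d.shape
    cp = np.empty(n - 1)              # modified super-diagonal
    x = d.astype(float).copy()
    beta = b[0]
    x[0] = x[0] / beta
    for i in range(1, n):             # forward elimination
        cp[i - 1] = c[i - 1] / beta
        beta = b[i] - a[i - 1] * cp[i - 1]
        x[i] = (x[i] - a[i - 1] * x[i - 1]) / beta
    for i in range(n - 2, -1, -1):    # back substitution
        x[i] -= cp[i] * x[i + 1]
    return x
```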
Algorithms and Algorithmic Languages.
ERIC Educational Resources Information Center
Veselov, V. M.; Koprov, V. M.
This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…
NASA Technical Reports Server (NTRS)
1981-01-01
Mechanical Technology, Incorporated developed a fully automatic laser machining process that allows more precise balancing, removes metal faster, eliminates excess metal removal and other operator-induced inaccuracies, and provides a significant reduction in balancing time. Manufacturing costs are reduced as a result.
NASA Astrophysics Data System (ADS)
Kong, Xiangxi; Zhang, Xueliang; Chen, Xiaozhe; Wen, Bangchun; Wang, Bo
2016-05-01
In this paper, phase and speed synchronization control of four eccentric rotors (ERs) driven by induction motors in a linear vibratory feeder with unknown time-varying load torques is studied. Firstly, the electromechanical coupling model of the linear vibratory feeder is established by combining the induction motor model with the dynamic model of the system, which is a typical underactuated model. According to the characteristics of the linear vibratory feeder, the complex control problem of the underactuated electromechanical coupling model reduces to phase and speed synchronization control of the four ERs. In order to keep the four ERs operating synchronously with zero phase differences, phase and speed synchronization controllers are designed by employing an adaptive sliding mode control (ASMC) algorithm via a modified master-slave structure. The stability of the controllers is proved by the Lyapunov stability theorem. The proposed controllers are verified by simulation in Matlab/Simulink and compared with the conventional sliding mode control (SMC) algorithm. The results show that the proposed controllers can reject the time-varying load torques effectively and that the four ERs can operate synchronously with zero phase differences. Moreover, the control performance is better than that of the conventional SMC algorithm, and the chattering phenomenon is attenuated. Furthermore, the effects of reference speed and parametric perturbations are discussed to show the strong robustness of the proposed controllers. Finally, experiments on a simple vibratory test bench are conducted with the proposed controllers and without control, respectively, to further validate their effectiveness.
NASA Astrophysics Data System (ADS)
Zhang, Xiaoning; Li, Lemin; Wang, Sheng; Wei, Xuetao
2008-04-01
In this paper, we propose a novel robust routing algorithm based on Valiant load-balancing under the model of polyhedral uncertainty (i.e., the hose uncertainty model) for WDM (wavelength division multiplexing) mesh networks. The Valiant load-balanced robust routing algorithm constructs a stable virtual topology on which any traffic pattern under the hose uncertainty model can be efficiently routed. Considering that there are multi-granularity connection requests in WDM mesh networks, we propose a method called hose-model separation to solve this problem for the proposed algorithm. Our goal is to minimize the total network cost when constructing the stable virtual topology that assures robust routing for the hose model in WDM mesh networks. A mathematical formulation (integer linear programming, ILP) of the Valiant load-balanced robust routing algorithm is presented. Two fast heuristic approaches are also proposed and evaluated. We compare, by computer simulation, the network throughput of the virtual topology constructed by the proposed algorithm with that of a traditional traffic grooming algorithm under the same total network cost.
Baby Carriage: Infants Walking with Loads
ERIC Educational Resources Information Center
Garciaguirre, Jessie S.; Adolph, Karen E.; Shrout, Patrick E.
2007-01-01
Maintaining balance is a central problem for new walkers. To examine how infants cope with the additional balance control problems induced by load carriage, 14-month-olds were loaded with 15% of their body weight in shoulder-packs. Both symmetrical and asymmetrical loads disrupted alternating gait patterns and caused less mature footfall patterns.…
14 CFR 23.423 - Maneuvering loads.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Balancing Surfaces § 23.423 Maneuvering loads. Each horizontal surface and its supporting structure, and the...-down pitching conditions is the sum of the balancing loads at V and the specified value of the normal... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Maneuvering loads. 23.423 Section...
14 CFR 23.423 - Maneuvering loads.
Code of Federal Regulations, 2011 CFR
2011-01-01
... Balancing Surfaces § 23.423 Maneuvering loads. Each horizontal surface and its supporting structure, and the...-down pitching conditions is the sum of the balancing loads at V and the specified value of the normal... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Maneuvering loads. 23.423 Section...
Frequency effects on the stability of a journal bearing for periodic loading
NASA Technical Reports Server (NTRS)
Vijayaraghavan, D.; Brewe, D. E.
1991-01-01
The stability of a journal bearing is numerically predicted when a unidirectional periodic external load is applied. The analysis is performed using a cavitation algorithm, which mimics the Jakobsson-Floberg and Olsson (JFO) theory by accounting for the mass balance through the complete bearing. Hence, the history of the film is taken into consideration. The loading pattern is taken to be sinusoidal and the frequency of the load cycle is varied. The results are compared with the predictions using Reynolds boundary conditions for both film rupture and reformation. With such comparisons, the need for accurately predicting the cavitation regions for complex loading patterns is clearly demonstrated. For a particular frequency of loading, the effects of mass, amplitude of load variation and frequency of journal speed are also investigated. The journal trajectories, transient variations in fluid film forces, net surface velocity and minimum film thickness, and pressure profiles are also presented.
Hydraulic Calibrator for Strain-Gauge Balances
NASA Technical Reports Server (NTRS)
Skelly, Kenneth; Ballard, John
1987-01-01
Instrument for calibrating strain-gauge balances uses hydraulic actuators and load cells. Eliminates effects of nonparallelism, nonperpendicularity, and changes of cable directions upon vector sums of applied forces. Errors due to cable stretching, pulley friction, and weight inaccuracy also eliminated. New instrument rugged and transportable. Set up quickly. Developed to apply known loads to wind-tunnel models with encapsulated strain-gauge balances, also adapted for use in calibrating dynamometers, load sensors on machinery and laboratory instruments.
Yu, Zhenhua; Fu, Xiao; Cai, Yuanli; Vuran, Mehmet C
2011-01-01
A reliable energy-efficient multi-level routing algorithm in wireless sensor networks is proposed. The proposed algorithm considers the residual energy, number of neighbors, and centrality of each node for cluster formation, which is critical for well-balanced energy dissipation of the network. In the algorithm, a knowledge-based inference approach using fuzzy Petri nets is employed to select cluster heads, and the fuzzy reasoning mechanism is then used to compute the degree of reliability in the route-sprouting tree from cluster heads to the base station. Finally, the most reliable route among the cluster heads can be constructed. The algorithm not only balances the energy load of each node but also provides global reliability for the whole network. Simulation results demonstrate that the proposed algorithm effectively prolongs the network lifetime and reduces the energy consumption. PMID:22163802
... a new type of balance therapy using computerized, virtual reality. UPMC associate professor Susan Whitney, Ph.D., developed ... a virtual grocery store in the university's Medical Virtual Reality Center. Patients walk on a treadmill and safely ...
Metadata distribution algorithm based on directory hash in mass storage system
NASA Astrophysics Data System (ADS)
Wu, Wei; Luo, Dong-jian; Pei, Can-hao
2008-12-01
The distribution of metadata is very important in a mass storage system. Many storage systems use subtree partitioning or hash algorithms to distribute metadata among a metadata server (MDS) cluster. Although system access performance is improved, the scalability problem is significant in most of these algorithms. This paper proposes a new directory hash (DH) algorithm. It treats the directory as the hash key, implements concentrated storage of metadata, and adopts a dynamic load-balancing strategy. It improves the efficiency of metadata distribution and access in a mass storage system by hashing on the directory and placing metadata together at directory granularity. The DH algorithm solves the scalability problems of file-hash algorithms, such as changing a directory name or permission, or adding or removing an MDS from the cluster. The DH algorithm reduces the number of additional requests and the scale of each data migration in scalable operations. It enhances the scalability of mass storage systems remarkably.
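The directory-granularity idea can be illustrated with a toy placement function: hash the parent directory rather than the full file path, so that all entries of one directory land on the same metadata server. The modulo placement and function name are illustrative assumptions; the paper's actual DH algorithm, with its migration handling, is more involved.

```python
import hashlib

def mds_for_path(path, num_servers):
    """Pick a metadata server for a file by hashing its parent directory.

    Because the directory, not the file name, is the hash key, siblings
    in one directory map to the same MDS (directory granularity).
    A sketch of the idea, not the paper's exact algorithm.
    """
    directory = path.rsplit("/", 1)[0] or "/"
    digest = hashlib.md5(directory.encode()).hexdigest()
    return int(digest, 16) % num_servers
```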
A Mathematical Model for Load Balancing
NASA Astrophysics Data System (ADS)
Oliveira, Suely; Soma, Takako
2009-08-01
We propose a family of semidefinite programming (SDP) relaxations of the problem of graph bisection with preferences. That is, given a graph G = (V, E) we wish to partition the vertices into two disjoint sets V = P1 ∪ P2 so as to minimize the sum of the number of edges cut by the partition and Σ_{i∈V} x_i d_i, where x_i = +1 if i ∈ P1 and x_i = -1 otherwise. The SDP relaxation is related to well-known SDP and spectral relaxations for graph bisection without preferences. The preference vector d can be used to incorporate important information for recursive bisection for data distribution in parallel computers. This relaxation is analogous to an SDP relaxation of graph partitioning related to the spectral relaxation used to obtain the Fiedler vector.
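The spectral relaxation mentioned at the end, without the preference term, can be sketched by splitting vertices on the sign of the Fiedler vector (the Laplacian eigenvector with the second-smallest eigenvalue). This is a minimal illustration, not the paper's SDP formulation.

```python
import numpy as np

def spectral_bisection(adj):
    """Bisect a graph by the sign of the Fiedler vector.

    adj: symmetric 0/1 adjacency matrix. Returns +1/-1 per vertex.
    Plain spectral relaxation, with no preference vector d.
    """
    deg = adj.sum(axis=1)
    L = np.diag(deg) - adj                 # graph Laplacian
    vals, vecs = np.linalg.eigh(L)         # eigenvalues ascending
    fiedler = vecs[:, 1]                   # second-smallest eigenvalue
    return np.where(fiedler >= 0, 1, -1)
```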
Dynamic Load Balancing for Adaptive Unstructured Grids
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Saini, Subhash (Technical Monitor)
1998-01-01
Dynamic mesh adaptation on unstructured grids is a powerful tool for computing unsteady three-dimensional problems that require grid modifications to efficiently resolve solution features. By locally refining and coarsening the mesh to capture phenomena of interest, such procedures make standard computational methods more cost effective. Highly refined meshes are required to accurately capture shock waves, contact discontinuities, vortices, and shear layers in fluid flow problems. Adaptive meshes have also proved to be useful in several other areas of computational science and engineering like computer vision and graphics, semiconductor device modeling, and structural mechanics. Local mesh adaptation provides the opportunity to obtain solutions that are comparable to those obtained on globally-refined grids but at a much lower cost. Additional information is contained in the original extended abstract.
Lifeline-based Global Load Balancing
Saraswat, Vijay A.; Kambadur, Prabhanjan; Kodali, Sreedhar; Grove, David; Krishnamoorthy, Sriram
2011-02-12
On shared-memory systems, Cilk-style work-stealing has been used to effectively parallelize irregular task-graph based applications such as Unbalanced Tree Search (UTS). There are two main difficulties in extending this approach to distributed memory. In the shared memory approach, thieves (nodes without work) constantly attempt to asynchronously steal work from randomly chosen victims until they find work. In distributed memory, thieves cannot autonomously steal work from a victim without disrupting its execution. When work is sparse, this results in performance degradation. In essence, a direct extension of traditional work-stealing to distributed memory violates the work-first principle underlying work-stealing. Further, thieves spend useless CPU cycles attacking victims that have no work, resulting in system inefficiencies in multi-programmed contexts. Second, it is non-trivial to detect active distributed termination (detect that programs at all nodes are looking for work, hence there is no work). This problem is well-studied and requires careful design for good performance. Unfortunately, in most existing languages/frameworks, application developers are forced to implement their own distributed termination detection. In this paper, we develop a simple set of ideas that allow work-stealing to be efficiently extended to distributed memory. First, we introduce lifeline graphs: low-degree, low-diameter, fully connected directed graphs. Such graphs can be constructed from k-dimensional hypercubes. When a node is unable to find work after w unsuccessful steals, it quiesces after informing the outgoing edges in its lifeline graph. Quiescent nodes do not disturb other nodes. A quiesced node is reactivated when work arrives from a lifeline and itself shares this work with those of its incoming lifelines that are activated. Termination occurs precisely when computation at all nodes has quiesced. In a language such as X10, such passive distributed termination can be detected automatically using the finish construct -- no application code is necessary.
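The lifeline construction from hypercubes can be sketched as follows: interpret each node id in a base-b generalized hypercube and connect it to the nodes obtained by incrementing each digit modulo b, discarding out-of-range targets. This digit-increment form is an assumption inferred from the description above, not the authors' exact construction.

```python
import math

def lifelines(node, n, base=2):
    """Outgoing lifeline edges of `node` in an n-node system.

    Built from a base-`base` generalized hypercube: for each digit
    position, increment that digit mod `base` and keep targets < n.
    Yields a low-degree, low-diameter directed graph (a sketch).
    """
    dims = max(1, math.ceil(math.log(n, base)))
    out = []
    for d in range(dims + 1):              # +1 guards against float log error
        p = base ** d
        digit = (node // p) % base
        neighbor = node - digit * p + ((digit + 1) % base) * p
        if neighbor < n and neighbor != node:
            out.append(neighbor)
    return out
```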
A Comparison of Forecast Error Generators for Modeling Wind and Load Uncertainty
Lu, Ning; Diao, Ruisheng; Hafen, Ryan P.; Samaan, Nader A.; Makarov, Yuri V.
2013-07-25
This paper presents four algorithms to generate random forecast error time series, and compares their performance. The error time series are used to create real-time (RT), hour-ahead (HA), and day-ahead (DA) wind and load forecast time series that statistically match historically observed forecasting data sets used in power grid operation, in order to study the net load balancing need in variable generation integration studies. The four algorithms are truncated-normal distribution models, state-space based Markov models, seasonal autoregressive moving average (ARMA) models, and a stochastic-optimization based approach. The comparison is made using historical DA load forecasts and actual load values to generate new sets of DA forecasts with similar statistical forecast-error characteristics (i.e., mean, standard deviation, autocorrelation, and cross-correlation). The results show that all methods generate satisfactory results. One method may preserve one or two required statistical characteristics better than the other methods, but may not preserve the remaining characteristics as well. Because the wind and load forecast error generators are used in wind integration studies to produce wind and load forecast time series for stochastic planning processes, it is sometimes critical to use multiple methods to generate the error time series to obtain a statistically robust result. Therefore, this paper discusses and compares the capabilities of each algorithm to preserve the characteristics of the historical forecast data sets.
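As a minimal stand-in for the generators compared above, an AR(1) model can produce an error series with a target mean, standard deviation, and lag-1 autocorrelation. The paper's seasonal ARMA and Markov models capture richer structure; this sketch only shows the basic idea.

```python
import numpy as np

def ar1_forecast_errors(n, mean, std, rho, seed=None):
    """Synthetic forecast-error series with target mean, standard
    deviation, and lag-1 autocorrelation rho, via an AR(1) process.
    """
    rng = np.random.default_rng(seed)
    e = np.empty(n)
    e[0] = rng.normal()                      # start at stationary variance 1
    sigma_innov = np.sqrt(1.0 - rho ** 2)    # keeps marginal variance at 1
    for t in range(1, n):
        e[t] = rho * e[t - 1] + sigma_innov * rng.normal()
    return mean + std * e
```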
David Chassin, Pavel Etingov
2013-04-30
The LMDT software automates the process of the load composite model data preparation in the format supported by the major power system software vendors (GE and Siemens). Proper representation of the load composite model in power system dynamic analysis is very important. Software tools for power system simulation like GE PSLF and Siemens PSSE already include algorithms for the load composite modeling. However, these tools require that the input information on composite load to be provided in custom formats. Preparation of this data is time consuming and requires multiple manual operations. The LMDT software enables to automate this process. Software is designed to generate composite load model data. It uses the default load composition data, motor information, and bus information as an input. Software processes the input information and produces load composition model. Generated model can be stored in .dyd format supported by GE PSLF package or .dyr format supported by Siemens PSSE package.
An improved method for determining force balance calibration accuracy
NASA Astrophysics Data System (ADS)
Ferris, Alice T.
The results of an improved statistical method used at Langley Research Center for determining and stating the accuracy of a force balance calibration are presented. The application of the method for initial loads, initial load determination, auxiliary loads, primary loads, and proof loads is described. The data analysis is briefly addressed.
Multi-jagged: A scalable parallel spatial partitioning algorithm
Deveci, Mehmet; Rajamanickam, Sivasankaran; Devine, Karen D.; Catalyurek, Umit V.
2015-03-18
Geometric partitioning is fast and effective for load-balancing dynamic applications, particularly those requiring geometric locality of data (particle methods, crash simulations). We present, to our knowledge, the first parallel implementation of a multidimensional-jagged geometric partitioner. In contrast to the traditional recursive coordinate bisection algorithm (RCB), which recursively bisects subdomains perpendicular to their longest dimension until the desired number of parts is obtained, our algorithm does recursive multi-section with a given number of parts in each dimension. By computing multiple cut lines concurrently and intelligently deciding when to migrate data while computing the partition, we minimize data movement compared to efficient implementations of recursive bisection. We demonstrate the algorithm's scalability and quality relative to the RCB implementation in Zoltan on both real and synthetic datasets. Our experiments show that the proposed algorithm performs and scales better than RCB in terms of run-time without degrading the load balance. Lastly, our implementation partitions 24 billion points into 65,536 parts within a few seconds and exhibits near perfect weak scaling up to 6K cores.
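The baseline RCB algorithm that multi-jagged is compared against can be sketched as follows: recursively split each subdomain perpendicular to its longest dimension at the weighted median until the desired number of parts remains. This is a simplified serial version with illustrative names, not the Zoltan implementation.

```python
import numpy as np

def rcb(points, weights, nparts):
    """Recursive coordinate bisection.

    points: (n, dims) coordinates; weights: (n,) positive weights.
    Returns an integer part id per point, 0..nparts-1.
    """
    part = np.zeros(len(points), dtype=int)

    def split(idx, lo, hi):
        if hi - lo <= 1:
            part[idx] = lo
            return
        pts = points[idx]
        dim = np.argmax(pts.max(axis=0) - pts.min(axis=0))   # longest dim
        order = np.argsort(pts[:, dim])
        cum = np.cumsum(weights[idx][order])
        nleft = (hi - lo) // 2
        cut = np.searchsorted(cum, cum[-1] * nleft / (hi - lo))
        split(idx[order[:cut + 1]], lo, lo + nleft)
        split(idx[order[cut + 1:]], lo + nleft, hi)

    split(np.arange(len(points)), 0, nparts)
    return part
```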
NASA Technical Reports Server (NTRS)
1988-01-01
TherEx Inc.'s AT-1 Computerized Ataxiameter precisely evaluates posture and balance disturbances that commonly accompany neurological and musculoskeletal disorders. Complete system includes two strain-gauged footplates, signal conditioning circuitry, a computer monitor, printer and a stand-alone tiltable balance platform. AT-1 serves as assessment tool, treatment monitor, and rehabilitation training device. It allows clinician to document quantitatively the outcome of treatment and analyze data over time to develop outcome standards for several classifications of patients. It can specifically evaluate the effects of surgery, drug treatment, physical therapy or prosthetic devices.
ERIC Educational Resources Information Center
Mills, Allan
2014-01-01
Theory predicts that an egg-shaped body should rest in stable equilibrium when on its side, balance vertically in metastable equilibrium on its broad end and be completely unstable on its narrow end. A homogeneous solid egg made from wood, clay or plastic behaves in this way, but a real egg will not stand on either end. It is shown that this…
ERIC Educational Resources Information Center
O'Dell, Robin S.
2012-01-01
There are two primary interpretations of the mean: as a leveler of data (Uccellini 1996, pp. 113-114) and as a balance point of a data set. Typically, both interpretations of the mean are ignored in elementary school and middle school curricula. They are replaced with a rote emphasis on calculation using the standard algorithm. When students are…
Balance (or Vestibular) Rehabilitation
Audiologic (hearing), balance, and medical diagnostic tests help indicate whether you are a candidate for vestibular (balance) rehabilitation. Vestibular rehabilitation is an individualized balance ...
NASA Astrophysics Data System (ADS)
Shakerin, Said
2013-12-01
The ordinary 12-oz beverage cans in the figures below are not held up with any props or glue. The bottom of such cans is stepped at its circumference for better stacking. When this kind of can is tilted, as shown in Fig. 1, the outside corners of the step touch the surface beneath, providing an effective contact about 1 cm wide. Because the contact is relatively wide and the geometry is symmetrical, it is easy to balance an empty can by simply adding an appropriate amount of water so that the overall center of mass is located directly above the contact. In fact, any amount of water between about 40 and 210 mL will work. A computational animation of this trick by Sijia Liang and Bruce Atwood that shows center of mass as a function of amount of added water is available at http://demonstrations.wolfram.com. Once there, search "balancing can."
Grande, J A; Andújar, J M; Aroba, J; de la Torre, M L; Beltrán, R
2005-04-01
In the present work, Acid Mine Drainage (AMD) processes in the Chorrito Stream, which flows into the Cobica River (Iberian Pyrite Belt, Southwest Spain), are characterized by means of clustering techniques based on fuzzy logic. Also, pH behavior in contrast to precipitation is clearly explained, proving that the influence of rainfall inputs on the acidity and, as a result, on the metal load of a riverbed undergoing AMD processes depends highly on the moment when it occurs. In general, the riverbed's dynamic behavior is the response to the sum of instantaneous stimuli produced by isolated rainfall, the seasonal memory depending on the moment of the target hydrological year, and, finally, the river basin's own inertia, which results from an accumulation process caused by age-long mining activity.
Artificial neural network based hourly load forecasting for decentralized load management
Mandal, J.K.; Sinha, A.K.
1995-12-31
Decentralized load management is an essential part of power system operation. Forecasting load demand at the substation level is generally more difficult and less accurate than forecasting total system load demand. In this paper, a Multi-Layered Feed Forward (MLFF) neural network is used to predict the bus-load demand at the substation level. The MLFF network is trained using the Back-Propagation (BP) algorithm with an adaptive learning technique. The algorithm is tested on two systems having different load patterns.
NASA Astrophysics Data System (ADS)
Dilek, Murat
Distribution system analysis and design have experienced gradual development over the past three decades. The once loosely assembled and largely ad hoc procedures have been progressing toward being well organized. The increasing power of computers now allows for managing the large volumes of data and other obstacles inherent to distribution system studies. A variety of sophisticated optimization methods, which were impossible to conduct in the past, have been developed and successfully applied to distribution systems. Among the many procedures that deal with making decisions about the state and better operation of a distribution system, two decision support procedures will be addressed in this study: phase balancing and phase prediction. The former recommends re-phasing of single- and double-phase laterals in a radial distribution system in order to improve circuit loss while also maintaining or improving imbalances at various balance point locations. Phase balancing calculations are based on circuit loss information and current magnitudes that are calculated from a power flow solution. The phase balancing algorithm is designed to handle time-varying loads when evaluating phase moves that will result in improved circuit losses over all load points. Applied to radial distribution systems, the phase prediction algorithm attempts to predict the phases of single- and/or double-phase laterals that have no phasing information previously recorded by the electric utility. In such an attempt, it uses available customer data and kW/kVar measurements taken at various locations in the system. It is shown that phase balancing is a special case of phase prediction. Building on the phase balancing and phase prediction design studies, this work introduces the concept of integrated design, an approach for coordinating the effects of various design calculations. Integrated design considers using results of multiple design applications rather than employing a single application for a
Microprocessor-Controlled Laser Balancing System
NASA Technical Reports Server (NTRS)
Demuth, R. S.
1985-01-01
Material removed by laser action as part tested for balance. Directed by microprocessor, laser fires appropriate number of pulses in correct locations to remove necessary amount of material. Operator and microprocessor software interact through video screen and keypad; no programming skills or unprompted system-control decisions required. System provides complete and accurate balancing in single load-and-spinup cycle.
NASA Technical Reports Server (NTRS)
Duval, R. W.; Bahrami, M.
1985-01-01
The Rotor Systems Research Aircraft uses load cells to isolate the rotor/transmission system from the fuselage. A mathematical model relating applied rotor loads and inertial loads of the rotor/transmission system to the load cell response is required to allow the load cells to be used to estimate rotor loads from flight data. Such a model is derived analytically by applying a force and moment balance to the isolated rotor/transmission system. The model is tested by comparing its estimated values of applied rotor loads with measured values obtained from a ground-based shake test. Discrepancies in the comparison are used to isolate sources of unmodeled external loads. Once the structure of the mathematical model has been validated by comparison with experimental data, the parameters must be identified. Since the parameters may vary with flight condition, it is desirable to identify the parameters directly from the flight data. A Maximum Likelihood identification algorithm is derived for this purpose and tested using a computer simulation of load cell data. The identification is found to converge within 10 samples. The rapid convergence facilitates tracking of time-varying parameters of the load cell model in flight.
Spletzer, B.L.
1998-12-15
A load cell combines the outputs of a plurality of strain gauges to measure components of an applied load. Combination of strain gauge outputs allows measurement of any of six load components without requiring complex machining or mechanical linkages to isolate load components. An example six axis load cell produces six independent analog outputs, each directly proportional to one of the six general load components. 16 figs.
Spletzer, Barry L.
2001-01-01
A load cell combines the outputs of a plurality of strain gauges to measure components of an applied load. Combination of strain gauge outputs allows measurement of any of six load components without requiring complex machining or mechanical linkages to isolate load components. An example six axis load cell produces six independent analog outputs which can be combined to determine any one of the six general load components.
Spletzer, Barry L.
1998-01-01
A load cell combines the outputs of a plurality of strain gauges to measure components of an applied load. Combination of strain gauge outputs allows measurement of any of six load components without requiring complex machining or mechanical linkages to isolate load components. An example six axis load cell produces six independent analog outputs, each directly proportional to one of the six general load components.
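The linear combination the load cell relies on can be illustrated with a toy example; the 2x2 calibration matrix and gauge readings below are invented, not the patented six-axis design:

```python
def combine(cal_matrix, gauge_outputs):
    """Load components = calibration matrix x gauge output vector."""
    return [sum(c * g for c, g in zip(row, gauge_outputs))
            for row in cal_matrix]

# Two gauges, two load components (force, moment), hypothetical sensitivities.
cal = [[0.5, 0.5],   # force responds to the sum of the gauges
       [1.0, -1.0]]  # moment responds to their difference
print(combine(cal, [3.0, 1.0]))  # [2.0, 2.0]
```

The actual cell does the same thing with six rows and many gauges, which is why no mechanical linkage is needed to isolate individual load components.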
Face verification with balanced thresholds.
Yan, Shuicheng; Xu, Dong; Tang, Xiaoou
2007-01-01
The process of face verification is guided by a pre-learned global threshold, which, however, is often inconsistent with class-specific optimal thresholds. It is, hence, beneficial to pursue a balance of the class-specific thresholds in the model-learning stage. In this paper, we present a new dimensionality reduction algorithm tailored to the verification task that ensures threshold balance. This is achieved by the following aspects. First, feasibility is guaranteed by employing an affine transformation matrix, instead of the conventional projection matrix, for dimensionality reduction, and, hence, we call the proposed algorithm threshold balanced transformation (TBT). Then, the affine transformation matrix, constrained as the product of an orthogonal matrix and a diagonal matrix, is optimized to improve the threshold balance and classification capability in an iterative manner. Unlike most algorithms for face verification which are directly transplanted from face identification literature, TBT is specifically designed for face verification and clarifies the intrinsic distinction between these two tasks. Experiments on three benchmark face databases demonstrate that TBT significantly outperforms the state-of-the-art subspace techniques for face verification.
Liesen, R.J.; Strand, R.K.; Pedersen, C.O.
1998-10-01
Two new methods for calculating cooling loads have just been introduced. The first algorithm, called the heat balance (HB) method, is a complete formulation of fundamental heat balance principles. The second is called the radiant time series (RTS) method. While based on the HB method, the RTS method is an approximate procedure that separates some of the processes to better show the influence of individual heat gain components. In the HB method, all of the heat transfer mechanisms participate in three simultaneous heat balances: the balance on the outside face of all the building elements that enclose the space, the balance on the inside face of the building elements, and the balance between the surfaces inside the space and the zone air. The focus of this paper is on the second heat balance. It has been customary to define a radiative/convective split for the heat introduced into a zone from such sources as equipment, lights, people, etc. The radiative part is then distributed over the surfaces within the zone in some prescribed manner, and the convective part is assumed to go immediately into the air. Simplified techniques cannot accurately portray the complex interaction of building surfaces, so previously used load calculation procedures were not up to the task of analyzing the effect of internal load radiant/convective split variation. This paper will present an investigation of the influence of the radiative/convective split on cooling loads obtained using the heat balance procedure. It will begin with an overview of the model used for a heat balance procedure and then present an exhaustive case study of the effects of changing the radiant/convective split on load calculations for Wedge 1 of the Pentagon building.
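The radiative/convective split described above can be sketched in a few lines; the gain, split fraction, and area-weighted distribution rule below are illustrative assumptions, not the paper's heat balance model:

```python
def split_gain(q, radiant_frac, areas):
    """Split an internal gain q: the convective part goes to the zone air,
    the radiant part is distributed over surfaces by area weights."""
    conv = q * (1.0 - radiant_frac)
    total = sum(areas)
    radiant = [q * radiant_frac * a / total for a in areas]
    return conv, radiant

# A 1000 W gain with a 60% radiant fraction over three surfaces.
conv, radiant = split_gain(1000.0, 0.6, [10.0, 20.0, 30.0])
print(conv, radiant)  # 400.0 [100.0, 200.0, 300.0]
```

In the HB method the radiant portions then enter the inside-face surface balances, while the convective portion enters the zone air balance directly.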
Strain gage balances and buffet gages
NASA Technical Reports Server (NTRS)
Ferris, A. T.
1983-01-01
One-piece strain gage force balances were developed for use in the National Transonic Facility (NTF). This was accomplished by studying the effects of the cryogenic environment on materials, strain gages, cements, solders, and moisture-proofing agents, and selecting those that minimized strain gage output changes due to temperature. In addition, because of the higher loads that may be imposed by the NTF, these balances are designed to carry a larger load for a given diameter than conventional balances. Full cryogenic calibrations were accomplished, and wind tunnel results obtained from the Langley 0.3-Meter Transonic Cryogenic Tunnel were used to verify laboratory test results.
Static calibration of the RSRA active-isolator rotor balance system
NASA Technical Reports Server (NTRS)
Acree, C. W., Jr.
1987-01-01
The Rotor Systems Research Aircraft (RSRA) active-isolator system is designed to reduce rotor vibrations transmitted to the airframe and to simultaneously measure all six forces and moments generated by the rotor. These loads are measured by using a combination of load cells, strain gages, and hydropneumatic active isolators with built-in pressure gages. The first static calibration of the complete active-isolator rotor balance system was performed in 1983 to verify its load-measurement capabilities. Analysis of the data included the use of multiple linear regressions to determine calibration matrices for different data sets and a hysteresis-removal algorithm to estimate in-flight measurement errors. Results showed that the active-isolator system can fulfill most performance predictions. The results also suggested several possible improvements to the system.
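The multiple-linear-regression step used in the calibration can be illustrated with a minimal least-squares fit; the sensor readings and applied loads below are invented for the example:

```python
def fit_line(xs, ys):
    """Return (slope, intercept) minimizing squared error: the 1-D case of
    the regression used to build a calibration matrix."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
             sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

outputs = [0.0, 1.0, 2.0, 3.0]   # hypothetical sensor readings
loads = [0.1, 2.1, 4.1, 6.1]     # hypothetical applied loads
slope, intercept = fit_line(outputs, loads)
print(round(slope, 3), round(intercept, 3))  # 2.0 0.1
```

A full calibration matrix repeats this kind of fit across all six load components and sensor channels, solved jointly as a multiple regression.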
Beard, R.A.
1990-03-01
The purpose of this thesis is to explore the methods used to parallelize NP-complete problems and the degree of improvement that can be realized using a distributed parallel processor to solve these combinatoric problems. Common NP-complete problem characteristics such as a priori reductions, use of partial-state information, and inhomogeneous searches are identified and studied. The set covering problem (SCP) is implemented for this research because many applications such as information retrieval, task scheduling, and VLSI expression simplification can be structured as an SCP problem. In addition, its generic NP-complete common characteristics are well documented and a parallel implementation has not been reported. Parallel programming design techniques involve decomposing the problem and developing the parallel algorithms. The major components of a parallel solution are developed in a four-phase process. First, a meta-level design is accomplished using an appropriate design language such as UNITY. Then, the UNITY design is transformed into an algorithm and implementation specific to a distributed architecture. Finally, a complexity analysis of the algorithm is performed. The a priori reductions are divide-and-conquer algorithms, whereas the search for the optimal set cover is accomplished with a branch-and-bound algorithm. The search utilizes a global best cost maintained at a central location for distribution to all processors. Three methods of load balancing are implemented and studied: coarse grain with static allocation of the search space, fine grain with dynamic allocation, and dynamic load balancing.
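The branch-and-bound search with a global best cost can be sketched as follows; this serial toy (with a local variable standing in for the centrally maintained best cost) is an illustration, not the thesis's distributed implementation:

```python
def set_cover_bnb(universe, subsets):
    """Return the minimum number of subsets whose union covers universe."""
    best = [len(subsets) + 1]   # stand-in for the shared global best cost

    def search(covered, used, i):
        if covered == universe:
            best[0] = min(best[0], used)   # update the best cover found
            return
        # Bound: prune branches that cannot beat the current best cost.
        if used + 1 >= best[0] or i == len(subsets):
            return
        search(covered | subsets[i], used + 1, i + 1)  # branch: take subset i
        search(covered, used, i + 1)                   # branch: skip subset i

    search(frozenset(), 0, 0)
    return best[0]

subs = [frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 4}),
        frozenset({1, 4})]
print(set_cover_bnb(frozenset({1, 2, 3, 4}), subs))  # 2
```

In the distributed version each processor explores its share of the branches, and pruning improves as the central best cost tightens, which is exactly why its timely distribution matters.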
NASA Technical Reports Server (NTRS)
Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.
1989-01-01
Several techniques to perform static and dynamic load balancing techniques for vision systems are presented. These techniques are novel in the sense that they capture the computational requirements of a task by examining the data when it is produced. Furthermore, they can be applied to many vision systems because many algorithms in different systems are either the same, or have similar computational characteristics. These techniques are evaluated by applying them on a parallel implementation of the algorithms in a motion estimation system on a hypercube multiprocessor system. The motion estimation system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from different time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters. It is shown that the performance gains when these data decomposition and load balancing techniques are used are significant and the overhead of using these techniques is minimal.
Strain Gauge Balance Calibration and Data Reduction at NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Ferris, A. T. Judy
1999-01-01
This paper will cover the standard force balance calibration and data reduction techniques used at Langley Research Center. It will cover balance axes definition, balance type, calibration instrumentation, traceability of standards to NIST, calibration loading procedures, balance calibration mathematical model, calibration data reduction techniques, balance accuracy reporting, and calibration frequency.
NASA Technical Reports Server (NTRS)
Tielking, John T.
1989-01-01
Two algorithms for obtaining static contact solutions are described in this presentation. Although they were derived for contact problems involving specific structures (a tire and a solid rubber cylinder), they are sufficiently general to be applied to other shell-of-revolution and solid-body contact problems. The shell-of-revolution contact algorithm is a method of obtaining a point load influence coefficient matrix for the portion of shell surface that is expected to carry a contact load. If the shell is sufficiently linear with respect to contact loading, a single influence coefficient matrix can be used to obtain a good approximation of the contact pressure distribution. Otherwise, the matrix will be updated to reflect nonlinear load-deflection behavior. The solid-body contact algorithm utilizes a Lagrange multiplier to include the contact constraint in a potential energy functional. The solution is found by applying the principle of minimum potential energy. The Lagrange multiplier is identified as the contact load resultant for a specific deflection. At present, only frictionless contact solutions have been obtained with these algorithms. A sliding tread element has been developed to calculate friction shear force in the contact region of the rolling shell-of-revolution tire model.
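The influence-coefficient idea can be illustrated with a toy linear solve: if an influence coefficient matrix C maps contact pressures p to surface deflections d, the pressures follow from solving C p = d. The 2x2 system below is invented for illustration:

```python
def solve2(C, d):
    """Solve a 2x2 system C p = d by Cramer's rule."""
    det = C[0][0] * C[1][1] - C[0][1] * C[1][0]
    return [(d[0] * C[1][1] - C[0][1] * d[1]) / det,
            (C[0][0] * d[1] - d[0] * C[1][0]) / det]

C = [[2.0, 1.0],   # deflection at node i per unit load at node j
     [1.0, 2.0]]
deflection = [4.0, 5.0]   # imposed contact deflections
print(solve2(C, deflection))  # [1.0, 2.0]
```

For a nearly linear shell a single such matrix suffices; otherwise, as the abstract notes, the matrix is updated to follow the nonlinear load-deflection behavior.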
14 CFR 23.427 - Unsymmetrical loads.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Balancing Surfaces § 23.427 Unsymmetrical loads. (a) Horizontal surfaces other than main wing and their supporting structure must be designed for unsymmetrical loads arising from yawing and slipstream effects, in...
14 CFR 23.427 - Unsymmetrical loads.
Code of Federal Regulations, 2011 CFR
2011-01-01
... Balancing Surfaces § 23.427 Unsymmetrical loads. (a) Horizontal surfaces other than main wing and their supporting structure must be designed for unsymmetrical loads arising from yawing and slipstream effects, in...
Structural dynamics payload loads estimates
NASA Technical Reports Server (NTRS)
Engels, R. C.
1982-01-01
Methods for the prediction of loads on large space structures are discussed. Existing approaches to the problem of loads calculation are surveyed. A full-scale version of an alternate numerical integration technique to solve the response part of a load cycle is presented, and a set of shortcut versions of the algorithm is developed. The implementation of these techniques using the software package developed is discussed.
Dynamic Layered Dual-Cluster Heads Routing Algorithm Based on Krill Herd Optimization in UWSNs
Jiang, Peng; Feng, Yang; Wu, Feng; Yu, Shanen; Xu, Huan
2016-01-01
Aimed at the limited energy of nodes in underwater wireless sensor networks (UWSNs) and the heavy load of cluster heads in clustering routing algorithms, this paper proposes a dynamic layered dual-cluster routing algorithm based on Krill Herd optimization in UWSNs. Cluster size is first decided by the distance between the cluster head nodes and the sink node, and a dynamic layered mechanism is established to avoid repeated selection of the same cluster head nodes. The Krill Herd optimization algorithm selects the optimal and second-optimal cluster heads, and its Lagrange model directs nodes toward a high-likelihood area, ultimately realizing the functions of data collection and data transmission. The simulation results show that the proposed algorithm can effectively decrease cluster energy consumption, balance the network energy consumption, and prolong the network lifetime. PMID:27589744
A swarm intelligence based memetic algorithm for task allocation in distributed systems
NASA Astrophysics Data System (ADS)
Sarvizadeh, Raheleh; Haghi Kashani, Mostafa
2011-12-01
This paper proposes a Swarm Intelligence based Memetic algorithm for task allocation and scheduling in distributed systems. Task scheduling in distributed systems is known to be an NP-complete problem, and many genetic algorithms have been proposed to search the entire solution space for optimal solutions. However, these existing approaches scan the entire solution space without considering techniques that could reduce the complexity of the optimization, and the excessive time they spend on scheduling is their main shortcoming. Therefore, in this paper a memetic algorithm is used to cope with this shortcoming. To balance load efficiently, Bee Colony Optimization (BCO) is applied as the local search in the proposed memetic algorithm. Extended experimental results demonstrate that the proposed method outperforms the existing GA-based method in terms of CPU utilization.
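The memetic structure described, a genetic algorithm whose offspring are refined by a local search, can be sketched as below; a simple hill-climb stands in for the bee-colony local search, and the task-allocation objective is a toy:

```python
import random

random.seed(1)

def fitness(assignment):
    """Toy load-balance objective: minimize the heaviest processor load
    (task k has weight k + 1), so fitness is its negation."""
    loads = [0, 0, 0]
    for task, proc in enumerate(assignment):
        loads[proc] += task + 1
    return -max(loads)

def local_search(ind):
    """Hill-climb standing in for the BCO local search: try moving tasks."""
    for task in range(len(ind)):
        for proc in range(3):
            trial = ind[:]
            trial[task] = proc
            if fitness(trial) > fitness(ind):
                ind = trial
    return ind

pop = [[random.randrange(3) for _ in range(6)] for _ in range(8)]
for _ in range(20):
    a, b = random.sample(pop, 2)          # select two parents
    cut = random.randrange(1, 6)
    child = local_search(a[:cut] + b[cut:])  # crossover, then refine
    pop.sort(key=fitness)
    pop[0] = child                        # replace the worst individual
best = max(pop, key=fitness)
print(-fitness(best))  # heaviest processor load of the best plan found
```

The refinement step is what makes the algorithm memetic: each offspring is improved locally before it competes in the population.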
Balanced input-output assignment
NASA Technical Reports Server (NTRS)
Gawronski, W.; Hadaegh, F. Y.
1989-01-01
Actuator/sensor locations and balanced representations of linear systems are considered for a given set of controllability and observability grammians. The case of equally controlled and observed states is given special attention. The assignability of grammians is examined, and the conditions for their existence are presented, along with several algorithms for their determination. Although an arbitrary positive semidefinite matrix is not always assignable, the identity grammian is shown to be always assignable. The results are extended to the case of flexible structures.
Spectral element methods - Algorithms and architectures
NASA Technical Reports Server (NTRS)
Fischer, Paul; Ronquist, Einar M.; Dewey, Daniel; Patera, Anthony T.
1988-01-01
Spectral element methods are high-order weighted residual techniques for partial differential equations that combine the geometric flexibility of finite element methods with the rapid convergence of spectral techniques. Spectral element methods are described for the simulation of incompressible fluid flows, with special emphasis on implementation of spectral element techniques on medium-grained parallel processors. Two parallel architectures are considered; the first, a commercially available message-passing hypercube system; the second, a developmental reconfigurable architecture based on Geometry-Defining Processors. High parallel efficiency is obtained in hypercube spectral element computations, indicating that load balancing and communication issues can be successfully addressed by a high-order technique/medium-grained processor algorithm-architecture coupling.
BAS: balanced acceptance sampling of natural resources.
Robertson, B L; Brown, J A; McDonald, T; Jaksons, P
2013-09-01
To design an efficient survey or monitoring program for a natural resource it is important to consider the spatial distribution of the resource. Generally, sample designs that are spatially balanced are more efficient than designs which are not. A spatially balanced design selects a sample that is evenly distributed over the extent of the resource. In this article we present a new spatially balanced design that can be used to select a sample from discrete and continuous populations in multi-dimensional space. The design, which we call balanced acceptance sampling (BAS), utilizes the Halton sequence to assure spatial diversity of selected locations. Targeted inclusion probabilities are achieved by acceptance sampling. The BAS design is conceptually simpler than competing spatially balanced designs, executes faster, and achieves better spatial balance as measured by a number of quantities. The algorithm has been programmed in an R package freely available for download.
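A minimal sketch of the two BAS ingredients, Halton-sequence spreading plus acceptance sampling, might look like the following (the inclusion-probability function is a made-up example, not the paper's R package):

```python
import random

def halton(i, base):
    """i-th element of the van der Corput sequence in the given base."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def bas_sample(n, prob, seed=0):
    """Walk Halton points (bases 2 and 3) and accept each with
    probability prob(x, y) until n points are kept."""
    rng = random.Random(seed)
    sample, i = [], 1
    while len(sample) < n:
        x, y = halton(i, 2), halton(i, 3)
        if rng.random() < prob(x, y):   # acceptance sampling step
            sample.append((x, y))
        i += 1
    return sample

pts = bas_sample(5, lambda x, y: 0.9)   # near-uniform inclusion probability
print(pts[0])  # (0.5, 0.3333333333333333)
```

The Halton sequence supplies the even spatial spread; the acceptance step thins it to match the targeted inclusion probabilities.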
Parallel Computing Strategies for Irregular Algorithms
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Oliker, Leonid; Shan, Hongzhang; Biegel, Bryan (Technical Monitor)
2002-01-01
Parallel computing promises several orders of magnitude increase in our ability to solve realistic computationally-intensive problems, but relies on their efficient mapping and execution on large-scale multiprocessor architectures. Unfortunately, many important applications are irregular and dynamic in nature, making their effective parallel implementation a daunting task. Moreover, with the proliferation of parallel architectures and programming paradigms, the typical scientist is faced with a plethora of questions that must be answered in order to obtain an acceptable parallel implementation of the solution algorithm. In this paper, we consider three representative irregular applications: unstructured remeshing, sparse matrix computations, and N-body problems, and parallelize them using various popular programming paradigms on a wide spectrum of computer platforms ranging from state-of-the-art supercomputers to PC clusters. We present the underlying problems, the solution algorithms, and the parallel implementation strategies. Smart load-balancing, partitioning, and ordering techniques are used to enhance parallel performance. Overall results demonstrate the complexity of efficiently parallelizing irregular algorithms.
Ocean Tide Loading Computation
NASA Technical Reports Server (NTRS)
Agnew, Duncan Carr
2005-01-01
September 15, 2003 through May 15, 2005. This grant funds the maintenance, updating, and distribution of programs for computing ocean tide loading, to enable the corrections for such loading to be more widely applied in space-geodetic and gravity measurements. These programs, developed under funding from the CDP and DOSE programs, incorporate the most recent global tidal models developed from TOPEX/Poseidon data, and also local tide models for regions around North America; the design of the algorithm and software makes it straightforward to combine local and global models.
Strain-gage applications in wind tunnel balances
NASA Astrophysics Data System (ADS)
Mole, P. J.
1990-10-01
Six-component balances used in wind tunnels for precision measurements of air loads on scale models of aircraft and missiles are reviewed. A beam moment-type balance, two-shell balance consisting of an outer shell and inner rod, and air-flow balances used in STOL aircraft configurations are described. The design process, fabrication, gaging, single-gage procedure, and calibration of balances are outlined, and emphasis is placed on computer stress programs and data-reduction computer programs. It is pointed out that these wind-tunnel balances are used in applications for full-scale flight vehicles. Attention is given to a standard two-shell booster balance and an adaptation of a wind-tunnel balance employed to measure the simulated distributed launch loads of a payload in the Space Shuttle.
Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi
2014-01-01
The bat algorithm (BA) is a novel stochastic global optimization algorithm. The cloud model is an effective tool for transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and the cloud model's excellent characteristics for representing uncertain knowledge, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling the echolocation model based on the living and preying characteristics of bats, utilizing the transformation theory of the cloud model to depict the qualitative concept "bats approach their prey." Furthermore, a Lévy flight mode and a population information communication mechanism are introduced to balance exploration and exploitation. The simulation results show that the cloud model bat algorithm performs well on function optimization. PMID:24967425
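For readers unfamiliar with the baseline the CBA modifies, the following is a minimal sketch of the *standard* bat algorithm (Yang's formulation) on a simple test function. The cloud-model echolocation remodeling from the paper is not reproduced; all parameter values here are illustrative.

```python
import random


def sphere(x):
    """Test objective: f(x) = sum(x_i^2), global minimum 0 at the origin."""
    return sum(xi * xi for xi in x)


def bat_algorithm(f, dim=2, n_bats=20, iters=200, fmin=0.0, fmax=2.0,
                  loudness=0.9, pulse_rate=0.5, seed=1):
    """Minimal standard bat algorithm: frequency-tuned velocity updates
    toward the current best, plus an occasional local random walk."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_bats)]
    vel = [[0.0] * dim for _ in range(n_bats)]
    best = min(pos, key=f)[:]
    for _ in range(iters):
        for i in range(n_bats):
            freq = fmin + (fmax - fmin) * rng.random()
            vel[i] = [v + (x - b) * freq for v, x, b in zip(vel[i], pos[i], best)]
            cand = [x + v for x, v in zip(pos[i], vel[i])]
            if rng.random() > pulse_rate:
                # local random walk around the current best solution
                cand = [b + 0.01 * rng.gauss(0, 1) for b in best]
            # accept improving moves with probability given by loudness
            if f(cand) <= f(pos[i]) and rng.random() < loudness:
                pos[i] = cand
            if f(pos[i]) < f(best):
                best = pos[i][:]
    return best


best = bat_algorithm(sphere)
print(sphere(best))
```

The CBA replaces the fixed-width local walk with draws from a cloud model, which adapts the concentration of candidate points around the prey (the current best).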
A new distributed systems scheduling algorithm: a swarm intelligence approach
NASA Astrophysics Data System (ADS)
Haghi Kashani, Mostafa; Sarvizadeh, Raheleh; Jameii, Mahdi
2011-12-01
The scheduling problem in distributed systems is known to be NP-complete, and methods based on heuristic or metaheuristic search have been proposed to obtain optimal and suboptimal solutions. Task scheduling is a key factor in achieving good performance in distributed systems. In this paper, an efficient method based on a memetic algorithm is developed to solve the distributed systems scheduling problem. To balance load efficiently, the Artificial Bee Colony (ABC) algorithm is applied as the local search in the proposed memetic algorithm. The proposed method is compared to an existing memetic-based approach in which Learning Automata are used as the local search. The results demonstrate that the proposed method outperforms the above-mentioned method in terms of communication cost.
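The memetic pattern described above (global evolutionary search plus a per-child local search) can be sketched for a toy makespan-balancing instance. This is a simplification under stated assumptions: a greedy single-task-reassignment hill climb stands in for the paper's ABC local search, and the GA operators are minimal.

```python
import random


def makespan(assign, times, n_procs):
    """Load of the busiest processor for a task-to-processor assignment."""
    loads = [0.0] * n_procs
    for task, proc in enumerate(assign):
        loads[proc] += times[task]
    return max(loads)


def local_search(assign, times, n_procs):
    """Greedy single-task reassignment until no move reduces the
    makespan; a simple stand-in for the paper's ABC local search."""
    best = assign[:]
    improved = True
    while improved:
        improved = False
        for task in range(len(best)):
            for proc in range(n_procs):
                trial = best[:]
                trial[task] = proc
                if makespan(trial, times, n_procs) < makespan(best, times, n_procs):
                    best, improved = trial, True
    return best


def memetic_schedule(times, n_procs, pop=20, gens=30, seed=7):
    rng = random.Random(seed)
    n = len(times)
    popu = [[rng.randrange(n_procs) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popu.sort(key=lambda a: makespan(a, times, n_procs))
        # one-point crossover of the two best parents, plus one mutation
        cut = rng.randrange(1, n)
        child = popu[0][:cut] + popu[1][cut:]
        child[rng.randrange(n)] = rng.randrange(n_procs)
        child = local_search(child, times, n_procs)  # the memetic step
        popu[-1] = child  # replace the worst individual
    return min(popu, key=lambda a: makespan(a, times, n_procs))


times = [4, 3, 3, 2, 2, 2, 1, 1]   # task costs; total 18 over 3 processors
best = memetic_schedule(times, n_procs=3)
print(makespan(best, times, 3))
```

The perfectly balanced schedule here has makespan 6 (18 units of work over 3 processors); the local-search step is what lets small populations reach it quickly.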
Crane, N K; Parsons, I D; Hjelmstad, K D
2002-03-21
Adaptive mesh refinement selectively subdivides the elements of a coarse user supplied mesh to produce a fine mesh with reduced discretization error. Effective use of adaptive mesh refinement coupled with an a posteriori error estimator can produce a mesh that solves a problem to a given discretization error using far fewer elements than uniform refinement. A geometric multigrid solver uses increasingly finer discretizations of the same geometry to produce a very fast and numerically scalable solution to a set of linear equations. Adaptive mesh refinement is a natural method for creating the different meshes required by the multigrid solver. This paper describes the implementation of a scalable adaptive multigrid method on a distributed memory parallel computer. Results are presented that demonstrate the parallel performance of the methodology by solving a linear elastic rocket fuel deformation problem on an SGI Origin 3000. Two challenges must be met when implementing adaptive multigrid algorithms on massively parallel computing platforms. First, although the fine mesh for which the solution is desired may be large and scaled to the number of processors, the multigrid algorithm must also operate on much smaller fixed-size data sets on the coarse levels. Second, the mesh must be repartitioned as it is adapted to maintain good load balancing. In an adaptive multigrid algorithm, separate mesh levels may require separate partitioning, further complicating the load balance problem. This paper shows that, when the proper optimizations are made, parallel adaptive multigrid algorithms perform well on machines with several hundreds of processors.
Strain-gage balance calibration of a magnetic suspension and balance system
NASA Technical Reports Server (NTRS)
Roberts, Paul W.; Tcheng, Ping
1987-01-01
A load calibration of the NASA 13-in magnetic suspension and balance system (MSBS) is described. The calibration procedure was originally intended to establish the empirical relationship between the coil currents and the external loads (forces and moments) applied to a magnetically suspended calibrator. However, it was discovered that the performance of a strain-gage balance is not affected when subjected to the magnetic environment of the MSBS. The use of strain-gage balances greatly reduces the effort required to perform a current-vs.-load calibration as external loads can be directly inferred from the balance outputs while a calibrator is suspended in MSBS. It is conceivable that in the future such a calibration could become unnecessary, since an even more important application for the use of a strain-gage balance in MSBS environment is the acquisition of precision aerodynamic force and moment data by telemetering the balance outputs from a suspended model/core/balance during wind tunnel tests.
Automated load management for spacecraft power systems
NASA Technical Reports Server (NTRS)
Lollar, Louis F.
1987-01-01
An account is given of the results of a study undertaken by NASA's Marshall Space Flight Center to design and implement the load management techniques for autonomous spacecraft power systems, such as the Autonomously Managed Power System Test Facility. Attention is given to four load-management criteria, which encompass power bus balancing on multichannel power systems, energy balancing in such systems, power quality matching of loads to buses, and contingency load shedding/adding. Full implementation of these criteria calls for the addition of a second power channel.
Parallel global optimization with the particle swarm algorithm.
Schutte, J F; Reinbolt, J A; Fregly, B J; Haftka, R T; George, A D
2004-12-01
Present-day engineering optimization problems often impose large computational demands, resulting in long solution times even on a modern high-end processor. To obtain enhanced computational throughput and global search capability, we detail the coarse-grained parallelization of an increasingly popular global search method, the particle swarm optimization (PSO) algorithm. Parallel PSO performance was evaluated using two categories of optimization problems possessing multiple local minima: large-scale analytical test problems with computationally cheap function evaluations and medium-scale biomechanical system identification problems with computationally expensive function evaluations. For load-balanced analytical test problems formulated using 128 design variables, speedup was close to ideal and parallel efficiency above 95% for up to 32 nodes on a Beowulf cluster. In contrast, for load-imbalanced biomechanical system identification problems with 12 design variables, speedup plateaued and parallel efficiency decreased almost linearly with increasing number of nodes. The primary factor affecting parallel performance was the synchronization requirement of the parallel algorithm, which dictated that each iteration must wait for completion of the slowest fitness evaluation. When the analytical problems were solved using a fixed number of swarm iterations, a single population of 128 particles produced a better convergence rate than did multiple independent runs performed using sub-populations (8 runs with 16 particles, 4 runs with 32 particles, or 2 runs with 64 particles). These results suggest that (1) parallel PSO exhibits excellent parallel performance under load-balanced conditions, (2) an asynchronous implementation would be valuable for real-life problems subject to load imbalance, and (3) larger population sizes should be considered when multiple processors are available.
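The synchronous coarse-grained scheme the abstract describes — all of an iteration's fitness evaluations complete before any position update — can be sketched as follows. This is an illustrative sketch, not the authors' cluster implementation: a thread pool stands in for the Beowulf nodes, the Rastrigin function for their test problems, and the PSO constants are conventional textbook values.

```python
import math
import random
from concurrent.futures import ThreadPoolExecutor


def fitness(x):
    """Rastrigin function: a standard multimodal test problem."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)


def parallel_pso(dim=2, n_particles=16, iters=100, workers=4, seed=3):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for _ in range(iters):
            # synchronization barrier: every fitness evaluation of this
            # iteration finishes before any position is updated, so the
            # iteration is only as fast as its slowest evaluation
            fits = list(pool.map(fitness, pos))
            for i, fi in enumerate(fits):
                if fi < pbest_f[i]:
                    pbest[i], pbest_f[i] = pos[i][:], fi
                    if fi < gbest_f:
                        gbest, gbest_f = pos[i][:], fi
            for i in range(n_particles):
                for d in range(dim):
                    r1, r2 = rng.random(), rng.random()
                    vel[i][d] = (0.7 * vel[i][d]
                                 + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                                 + 1.5 * r2 * (gbest[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
    return gbest, gbest_f


gbest, gbest_f = parallel_pso()
print(gbest_f)
```

The `pool.map` barrier is exactly the synchronization cost the abstract identifies: with equal-cost evaluations efficiency is high, but one slow evaluation idles every other worker.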
A simple method for wind tunnel balance calibration including non-linear interaction terms
NASA Astrophysics Data System (ADS)
Ramaswamy, M. A.; Srinivas, T.; Holla, V. S.
The conventional method for calibrating wind tunnel balances to obtain the coupled linear and nonlinear interaction terms requires the application of combinations of pure components of the loads on the calibration body while compensating for the deflection of the balance. For a six-component balance, this calls for a complex loading system and an arrangement to translate and tilt the balance support about all three axes. A simple method, called the least-squares method, is illustrated for a three-component balance. The simplicity arises from the fact that neither application of the pure components of the loads nor reorientation of the balance is required. A single load is applied that has various components whose magnitudes can easily be found from the orientation of the calibration body under load and the point of application of the load. The coefficients are obtained by a least-squares fit to match the outputs obtained for various combinations of load.
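The key idea — one combined load per placement, with all linear and interaction coefficients recovered together by least squares — can be sketched numerically. This is a synthetic illustration (made-up coefficient values and load placements, a single-output three-component balance), not the paper's calibration data.

```python
import numpy as np


def design_matrix(loads):
    """Regression columns for a 3-component balance reading: the three
    linear terms plus the pairwise interaction (coupling) products."""
    F1, F2, F3 = loads.T
    return np.column_stack([F1, F2, F3, F1 * F2, F1 * F3, F2 * F3])


rng = np.random.default_rng(0)
true_coeff = np.array([2.0, -1.0, 0.5, 0.02, -0.01, 0.03])  # hypothetical

# Each row is one placement of a single combined load, decomposed into its
# three components from the body orientation -- no pure-component loading.
loads = rng.uniform(-10, 10, size=(40, 3))
outputs = design_matrix(loads) @ true_coeff + rng.normal(0, 1e-3, 40)

# One least-squares fit recovers linear and interaction terms together.
coeff, *_ = np.linalg.lstsq(design_matrix(loads), outputs, rcond=None)
print(np.round(coeff, 3))
```

Because the interaction columns are just extra regressors, no special loading rig is needed — any set of placements that makes the design matrix well conditioned suffices.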
Coupled cluster algorithms for networks of shared memory parallel processors
NASA Astrophysics Data System (ADS)
Bentz, Jonathan L.; Olson, Ryan M.; Gordon, Mark S.; Schmidt, Michael W.; Kendall, Ricky A.
2007-05-01
As the popularity of using SMP systems as the building blocks for high performance supercomputers increases, so too increases the need for applications that can utilize the multiple levels of parallelism available in clusters of SMPs. This paper presents a dual-layer distributed algorithm, using both shared-memory and distributed-memory techniques to parallelize a very important algorithm (often called the "gold standard") used in computational chemistry, the single and double excitation coupled cluster method with perturbative triples, i.e., CCSD(T). The algorithm is presented within the framework of the GAMESS (General Atomic and Molecular Electronic Structure System) program suite [M.W. Schmidt, K.K. Baldridge, J.A. Boatz, S.T. Elbert, M.S. Gordon, J.J. Jensen, S. Koseki, N. Matsunaga, K.A. Nguyen, S. Su, T.L. Windus, M. Dupuis, J.A. Montgomery, General atomic and molecular electronic structure system, J. Comput. Chem. 14 (1993) 1347-1363] and the Distributed Data Interface (DDI) [M.W. Schmidt, G.D. Fletcher, B.M. Bode, M.S. Gordon, The distributed data interface in GAMESS, Comput. Phys. Comm. 128 (2000) 190]; however, the essential features of the algorithm (data distribution, load balancing, and communication overhead) can be applied to more general computational problems. Timing and performance data for our dual-level algorithm are presented for several large-scale clusters of SMPs.
Partial storage optimization and load control strategy of cloud data centers.
Al Nuaimi, Klaithem; Mohamed, Nader; Al Nuaimi, Mariam; Al-Jaroodi, Jameela
2015-01-01
We present a novel approach to solving cloud storage issues and provide a fast load-balancing algorithm. Our approach is based on partitioning and concurrent dual-direction download of files from multiple cloud nodes. Partitions of the files, rather than the full files, are saved on the cloud, which substantially optimizes cloud storage usage. Only partial replication is used in this algorithm to ensure the reliability and availability of the data. Our focus is to improve performance and optimize storage usage while providing DaaS on the cloud. This algorithm solves the problem of having to fully replicate large data sets, which consumes a great deal of precious space on the cloud nodes. Reducing the space needed helps reduce the cost of providing such space. Moreover, performance is also increased, since multiple cloud servers collaborate to deliver the data to cloud clients faster. PMID:25973444
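The client-side merge for a dual-direction download can be sketched as follows. This is a minimal illustration of the idea only: two lists stand in for two cloud nodes holding the same ordered partitions, and network transfer is elided, so the real concurrency benefit is not modeled.

```python
def dual_direction_download(partitions_a, partitions_b):
    """Reassemble a file from two replicas of its ordered partitions:
    node A streams from the front, node B streams from the back, and the
    transfer stops when the two cursors meet in the middle."""
    n = len(partitions_a)
    result = [None] * n
    front, back = 0, n - 1
    while front <= back:
        result[front] = partitions_a[front]    # from node A, forward
        if back != front:
            result[back] = partitions_b[back]  # from node B, backward
        front += 1
        back -= 1
    return b"".join(result)


parts = [b"cloud ", b"load ", b"balancing ", b"demo"]
data = dual_direction_download(parts, parts)
print(data)  # b'cloud load balancing demo'
```

With real network transfers running in parallel, each node serves roughly half the file, which is where the speedup over a single-source download comes from.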
Split torque transmission load sharing
NASA Technical Reports Server (NTRS)
Krantz, T. L.; Rashidi, M.; Kish, J. G.
1992-01-01
Split torque transmissions are attractive alternatives to conventional planetary designs for helicopter transmissions. Split torque designs can offer lighter weight and fewer parts but have not been used extensively for lack of experience, especially with obtaining proper load sharing. Two split torque designs that use different load sharing methods have been studied. Precise indexing and alignment of the geartrain to produce acceptable load sharing has been demonstrated. An elastomeric torque splitter that has large torsional compliance and damping produces even better load sharing while reducing dynamic transmission error and noise. However, the elastomeric torque splitter as now configured cannot operate over the full range of operating conditions of a fielded system. A thrust balancing load sharing device was evaluated. Friction forces that oppose the motion of the balance mechanism are significant. A static analysis suggests increasing the helix angle of the input pinion of the thrust balancing design. Also, dynamic analysis of this design predicts good load sharing and significant torsional response to accumulative pitch errors of the gears.
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
NASA Astrophysics Data System (ADS)
Leduhovsky, G. V.; Zhukov, V. P.; Barochkin, E. V.; Zimin, A. P.; Razinkov, A. A.
2015-08-01
The problem of striking material and energy balances from the data received by thermal power plant computerized automation systems from the technical accounting systems, with the accuracy determined by the metrological characteristics of serviceable calibrated instruments, is formulated using the mathematical apparatus of the ridge regression method. A graph-theory-based matrix model of material and energy flows in systems having an intricate structure is proposed, with which the solution of a particular practical problem can be formalized at the stage of constructing the system model. The problem of striking material and energy balances is formulated taking into account the different degrees of trustworthiness with which the initial flow rates of coolants and their thermophysical parameters were determined, as well as process constraints expressed in terms of balance correlations on mass and energy for individual system nodes or for any combination thereof. Analytic and numerical solutions of the problem are proposed for different versions of its statement, differing from each other in the adopted assumptions and considered constraints. It is shown how the procedure for striking material and energy balances from the results of measuring the flows of feed water and steam in the thermal process circuit of a combined heat and power plant affects the calculation accuracy of specific fuel rates for supplying heat and electricity. It has been revealed that the nominal values of indicators and the fuel saving or overexpenditure values associated with these indicators are the most dependent parameters. In calculating these quantities using different balance striking procedures, an error may arise whose value is comparable to the power plant thermal efficiency margin stipulated by the regulatory-technical documents on using fuel. The study results were used for substantiating the choice of stating the problem of striking material and fuel balances, as well as
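The core of balance striking — adjusting raw measurements so that mass/energy constraints close, with less trusted meters absorbing more of the correction — can be sketched with classical weighted least-squares data reconciliation. This is a simplified stand-in for the paper's ridge-regression formulation, with an invented one-node example (numbers are illustrative).

```python
import numpy as np

# Measured flows (kg/s) around one node: in1 + in2 = out, but the raw
# measurements do not close the balance (10.2 + 5.1 != 14.9).
measured = np.array([10.2, 5.1, 14.9])
A = np.array([[1.0, 1.0, -1.0]])   # balance constraint: A @ x = 0

# Weights ~ 1/variance: the third meter is trusted least, so it absorbs
# most of the correction (the "different degrees of trustworthiness").
w = np.array([100.0, 100.0, 1.0])
W_inv = np.diag(1.0 / w)

# Lagrange-multiplier solution of: minimize sum_i w_i (x_i - m_i)^2
# subject to A @ x = 0
correction = W_inv @ A.T @ np.linalg.solve(A @ W_inv @ A.T, A @ measured)
reconciled = measured - correction
print(reconciled, A @ reconciled)
```

The ridge variant adds a regularization term to keep the problem well posed when constraints are nearly redundant; the structure of the correction is the same.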
ERIC Educational Resources Information Center
White, Richard
2007-01-01
The review by Black and Wiliam of national systems makes clear the complexity of assessment, and identifies important issues. One of these is "balance": balance between local and central responsibilities, balance between the weights given to various purposes of schooling, balance between weights for various functions of assessment, and balance…
Dynamic balance improvement program
NASA Technical Reports Server (NTRS)
Butner, M. F.
1983-01-01
The reduction of residual unbalance in the space shuttle main engine (SSME) high-pressure turbopump rotors was addressed. Elastic rotor response to unbalance, balancing requirements, multiplane and in-housing balancing, and balance-related rotor design considerations were assessed. Recommendations are made for near-term improvement of SSME balancing and for future study and development efforts.
ERIC Educational Resources Information Center
Claxton, David B.; Troy, Maridy; Dupree, Sarah
2006-01-01
Most authorities consider balance to be a component of skill-related physical fitness. Balance, however, is directly related to health, especially for older adults. Falls are a leading cause of injury and death among the elderly. Improved balance can help reduce falls and contribute to older people remaining physically active. Balance is a…
NASA Technical Reports Server (NTRS)
Simkovich, A.; Baumann, Robert C.
1961-01-01
The Vanguard satellites and component parts were balanced within the specified limits by using a Gisholt Type-S balancer in combination with a portable International Research and Development vibration analyzer and filter, with low-frequency pickups. Equipment and procedures used for balancing are described, and the determination of residual imbalance is accomplished by two methods: calculation and graphical interpretation. Between-the-bearings balancing is recommended for future balancing of payloads.
Boman, Erik G.; Catalyurek, Umit V.; Chevalier, Cedric; Devine, Karen D.; Gebremedhin, Assefaw H.; Hovland, Paul D.; Pothen, Alex; Rajamanickam, Sivasankaran; Safro, Ilya; Wolf, Michael M.; Zhou, Min
2015-01-16
This final progress report summarizes the work accomplished at the Combinatorial Scientific Computing and Petascale Simulations Institute. We developed Zoltan, a parallel mesh partitioning library that made use of accurate hypergraph models to provide load balancing in mesh-based computations. We developed several graph coloring algorithms for computing Jacobian and Hessian matrices and organized them into a software package called ColPack. We developed parallel algorithms for graph coloring and graph matching problems, and also designed multi-scale graph algorithms. Three PhD students graduated, six more are continuing their PhD studies, and four postdoctoral scholars were advised. Six of these students and Fellows have joined DOE Labs (Sandia, Berkeley), as staff scientists or as postdoctoral scientists. We also organized the SIAM Workshop on Combinatorial Scientific Computing (CSC) in 2007, 2009, and 2011 to continue to foster the CSC community.
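The graph coloring at the heart of Jacobian/Hessian compression can be sketched with the basic greedy strategy used by packages like ColPack: color vertices in largest-degree-first order, giving each the smallest color unused by its neighbors. The example graph below (the column-intersection graph of a tridiagonal Jacobian, where columns sharing a row must differ in color) is illustrative, not from the report.

```python
def greedy_color(adj):
    """Greedy coloring: each vertex takes the smallest color not already
    used by a colored neighbor; largest-degree vertices go first."""
    color = {}
    for v in sorted(adj, key=lambda v: -len(adj[v])):
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color


# Column-intersection graph of a 6x6 tridiagonal Jacobian: column i has
# nonzeros in rows i-1, i, i+1, so columns within distance 2 share a row
# and must receive different colors (structural orthogonality).
n = 6
adj = {i: [j for j in (i - 2, i - 1, i + 1, i + 2) if 0 <= j < n]
       for i in range(n)}
coloring = greedy_color(adj)
print(coloring)
```

Each color class groups columns that can be evaluated with a single Jacobian-vector product, so fewer colors means fewer function evaluations.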
Validation of a robotic balance system for investigations in the control of human standing balance.
Luu, Billy L; Huryn, Thomas P; Van der Loos, H F Machiel; Croft, Elizabeth A; Blouin, Jean-Sébastien
2011-08-01
Previous studies have shown that human body sway during standing approximates the mechanics of an inverted pendulum pivoted at the ankle joints. In this study, a robotic balance system incorporating a Stewart platform base was developed to provide a new technique to investigate the neural mechanisms involved in standing balance. The robotic system, programmed with the mechanics of an inverted pendulum, controlled the motion of the body in response to a change in applied ankle torque. The ability of the robotic system to replicate the load properties of standing was validated by comparing the load stiffness generated when subjects balanced their own body to the robot's mechanical load programmed with a low (concentrated-mass model) or high (distributed-mass model) inertia. The results show that static load stiffness was not significantly (p > 0.05) different for standing and the robotic system. Dynamic load stiffness for the robotic system increased with the frequency of sway, as predicted by the mechanics of an inverted pendulum, with the higher inertia being accurately matched to the load properties of the human body. This robotic balance system accurately replicated the physical model of standing and represents a useful tool to simulate the dynamics of a standing person.
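The inverted-pendulum model that the robot was programmed with can be sketched numerically: body sway obeys I·θ'' = m·g·h·sin(θ) − T_ankle, and standing is stable only when ankle stiffness exceeds the static "load stiffness" m·g·h. The parameter values and controller below are illustrative assumptions, not the study's subject data.

```python
import math

# Inverted-pendulum model of quiet standing (illustrative parameters):
m, h, g = 76.0, 0.94, 9.81   # body mass (kg), CoM height above ankle (m)
I = m * h * h                # concentrated-mass (point-mass) inertia


def simulate(torque_fn, theta0=0.02, dt=0.001, steps=2000):
    """Forward-Euler integration of I*theta'' = m*g*h*sin(theta) - T."""
    theta, omega = theta0, 0.0
    for _ in range(steps):
        alpha = (m * g * h * math.sin(theta) - torque_fn(theta, omega)) / I
        omega += alpha * dt
        theta += omega * dt
    return theta


# Static load stiffness of the pendulum: the gravitational toppling
# torque per radian of lean that the ankles must at least match.
K_load = m * g * h

# Ankle stiffness 10% above K_load plus some damping keeps sway bounded.
theta_end = simulate(lambda th, om: 1.1 * K_load * th + 20.0 * om)
print(K_load, theta_end)
```

The robotic system's validation amounts to checking that its programmed load reproduces this K_load (and its frequency-dependent dynamic counterpart) as felt through the subject's ankle torque.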
Computer Applications in Balancing Chemical Equations.
ERIC Educational Resources Information Center
Kumar, David D.
2001-01-01
Discusses computer-based approaches to balancing chemical equations. Surveys 13 methods: 6 matrix-based, 2 interactive programs, 1 stand-alone system, 1 implemented as an algorithm in BASIC, 1 based on design engineering, 1 written in HyperCard, and 1 prepared for the World Wide Web. (Contains 17 references.) (Author/YDS)
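A minimal sketch of the matrix approach the review surveys: write an element-by-species composition matrix (products with negative signs), compute a rational null-space vector by Gaussian elimination, and scale it to the smallest integer coefficients. This assumes a reaction whose null space is one-dimensional, which covers ordinary single-equation balancing.

```python
import math
from fractions import Fraction


def balance(matrix):
    """Balance a reaction from its element x species composition matrix
    (columns are species; products carry negative signs) by finding a
    rational null-space vector and clearing denominators."""
    rows = [[Fraction(v) for v in row] for row in matrix]
    n = len(rows[0])
    pivots = {}  # pivot column -> row index
    r = 0
    for c in range(n):
        pr = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if pr is None:
            continue
        rows[r], rows[pr] = rows[pr], rows[r]
        rows[r] = [v / rows[r][c] for v in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                rows[i] = [a - rows[i][c] * b for a, b in zip(rows[i], rows[r])]
        pivots[c] = r
        r += 1
    free = n - 1                  # last species is the free variable
    x = [Fraction(0)] * n
    x[free] = Fraction(1)
    for c, pr in pivots.items():  # back-substitute from the reduced rows
        x[c] = -rows[pr][free]
    lcm = 1
    for v in x:
        lcm = lcm * v.denominator // math.gcd(lcm, v.denominator)
    return [int(v * lcm) for v in x]


# Propane combustion: a C3H8 + b O2 -> c CO2 + d H2O
propane = [
    [3, 0, -1,  0],   # carbon
    [8, 0,  0, -2],   # hydrogen
    [0, 2, -2, -1],   # oxygen
]
print(balance(propane))  # [1, 5, 3, 4]
```

Exact `Fraction` arithmetic avoids the floating-point round-off that would otherwise make "clearing denominators" unreliable.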
An efficient QoS-aware routing algorithm for LEO polar constellations
NASA Astrophysics Data System (ADS)
Tian, Xin; Pham, Khanh; Blasch, Erik; Tian, Zhi; Shen, Dan; Chen, Genshe
2013-05-01
In this work, a Quality of Service (QoS)-aware routing (QAR) algorithm is developed for Low-Earth Orbit (LEO) polar constellations. LEO polar orbits are the only type of satellite constellation for which inter-plane inter-satellite links (ISLs) have been implemented in the real world. The QAR algorithm exploits features of the topology of the LEO satellite constellation, which makes it more efficient than general shortest-path routing algorithms such as Dijkstra's or extended Bellman-Ford algorithms. Traffic density, priority, and QoS requirements on communication delay and error can be easily incorporated into the QAR algorithm through satellite distances. The QAR algorithm also supports efficient load balancing in the satellite network by utilizing the multiple paths from the source satellite to the destination satellite, which effectively lowers the rate of network congestion. The QAR algorithm supports a novel robust routing scheme for LEO polar constellations that is able to significantly reduce the impact of ISL congestion on QoS in terms of communication delay and jitter.
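For contrast with QAR, the general shortest-path baseline it improves upon can be sketched on a toy constellation graph. This is an illustrative sketch only: a 3-plane x 4-satellite grid with invented ISL delays and an optional per-link congestion penalty, searched with plain Dijkstra.

```python
import heapq


def dijkstra(graph, src, dst):
    """Generic shortest-delay search; QAR avoids this general search by
    exploiting the constellation's regular grid topology."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1], dist[dst]


def build_constellation(planes=3, sats=4, congestion=None):
    """Nodes are (plane, sat); intra-plane ISL delay 1.0, inter-plane 1.5
    (illustrative units), plus an optional per-link congestion penalty."""
    g = {}
    for p in range(planes):
        for s in range(sats):
            nbrs = [((p, (s + 1) % sats), 1.0), ((p, (s - 1) % sats), 1.0)]
            if p + 1 < planes:
                nbrs.append(((p + 1, s), 1.5))
            if p - 1 >= 0:
                nbrs.append(((p - 1, s), 1.5))
            g[(p, s)] = [(v, w + (congestion or {}).get(((p, s), v), 0.0))
                         for v, w in nbrs]
    return g


g = build_constellation()
path, delay = dijkstra(g, (0, 0), (2, 2))
print(path, delay)
```

Folding congestion into the edge weights is one simple way to steer traffic onto the multiple equal-length grid paths, which is the load-balancing opportunity the abstract describes.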
O'Brien, M. J.; Brantley, P. S.
2015-01-20
In order to run Monte Carlo particle transport calculations on new supercomputers with hundreds of thousands or millions of processors, care must be taken to implement scalable algorithms. This means that the algorithms must continue to perform well as the processor count increases. In this paper, we examine the scalability of: (1) globally resolving the particle locations on the correct processor, (2) deciding that particle streaming communication has finished, and (3) efficiently coupling neighbor domains together with different replication levels. We have run domain-decomposed Monte Carlo particle transport on up to 2^21 = 2,097,152 MPI processes on the IBM BG/Q Sequoia supercomputer and observed scalable results that agree with our theoretical predictions. These calculations were carefully constructed to have the same amount of work on every processor, i.e., the calculation is already load balanced. We also examine load-imbalanced calculations where each domain's replication level is proportional to its particle workload. In this case we show how to efficiently couple together adjacent domains to maintain within-workgroup load balance and minimize memory usage.
A Parallel Ghosting Algorithm for The Flexible Distributed Mesh Database
Mubarak, Misbah; Seol, Seegyoung; Lu, Qiukai; Shephard, Mark S.
2013-01-01
Critical to the scalability of parallel adaptive simulations are parallel control functions including load balancing, reduced inter-process communication and optimal data decomposition. In distributed meshes, many mesh-based applications frequently access neighborhood information for computational purposes which must be transmitted efficiently to avoid parallel performance degradation when the neighbors are on different processors. This article presents a parallel algorithm of creating and deleting data copies, referred to as ghost copies, which localize neighborhood data for computation purposes while minimizing inter-process communication. The key characteristics of the algorithm are: (1) It can create ghost copies of any permissible topological order in a 1D, 2D or 3D mesh based on selected adjacencies. (2) It exploits neighborhood communication patterns during the ghost creation process thus eliminating all-to-all communication. (3) For applications that need neighbors of neighbors, the algorithm can create n number of ghost layers up to a point where the whole partitioned mesh can be ghosted. Strong and weak scaling results are presented for the IBM BG/P and Cray XE6 architectures up to a core count of 32,768 processors. The algorithm also leads to scalable results when used in a parallel super-convergent patch recovery error estimator, an application that frequently accesses neighborhood data to carry out computation.
NASA Technical Reports Server (NTRS)
Leonard, J. I.
1977-01-01
The water balance of the Skylab crew was analyzed. Evaporative water loss was estimated using a whole-body input/output balance equation, and water, body-tissue, and energy balances were analyzed. The approach utilizes the results of several major Skylab medical experiments. Subsystems were designed for the use of the software necessary for the analysis. A partitional water balance that graphically depicts the changes due to water intake is presented. The energy balance analysis determines the net energy available to the individual crewman during any period. The balances produce a visual description of the total change of a particular body component during the course of the mission. The information is salvaged from metabolic balance data when certain techniques are used to reduce the errors inherent in the balance method.
Polarization-balanced beamsplitter
Decker, D.E.
1998-02-17
A beamsplitter assembly is disclosed that includes several beamsplitter cubes arranged to define a plurality of polarization-balanced light paths. Each polarization-balanced light path contains one or more balanced pairs of light paths, where each balanced pair of light paths includes either two transmission light paths with orthogonal polarization effects or two reflection light paths with orthogonal polarization effects. The orthogonal pairing of said transmission and reflection light paths cancels polarization effects otherwise caused by beamsplitting. 10 figs.
Polarization-balanced beamsplitter
Decker, Derek E.
1998-01-01
A beamsplitter assembly that includes several beamsplitter cubes arranged to define a plurality of polarization-balanced light paths. Each polarization-balanced light path contains one or more balanced pairs of light paths, where each balanced pair of light paths includes either two transmission light paths with orthogonal polarization effects or two reflection light paths with orthogonal polarization effects. The orthogonal pairing of said transmission and reflection light paths cancels polarization effects otherwise caused by beamsplitting.
ERIC Educational Resources Information Center
Haddock, Rebecca Jaurigue
Today work goes on 24 hours a day, 7 days a week, and is about acceleration and access. Workers need balance more than ever. In fact, recent college graduates value work/life balance as their key factor in selecting employers. This paper, written for career counselors, defines balance as encompassing emotional, spiritual, physical, and…
1994 Pacific Northwest Loads and Resources Study.
United States. Bonneville Power Administration.
1994-12-01
The 1994 Pacific Northwest Loads and Resources Study presented herein establishes a picture of how the agency is positioned today in its loads and resources balance. It is a snapshot of expected resource operation, contractual obligations, and rights. This study does not attempt to present or analyze future conservation or generation resource scenarios. What it does provide are base case assumptions from which scenarios encompassing a wide range of uncertainties about BPA's future may be evaluated. The Loads and Resources Study is presented in two documents: (1) this summary of Federal system and Pacific Northwest region loads and resources and (2) a technical appendix detailing the loads and resources for each major Pacific Northwest generating utility. This analysis updates the 1993 Pacific Northwest Loads and Resources Study, published in December 1993. In this loads and resources study, resource availability is compared with a range of forecasted electricity consumption. The Federal system and regional analyses for the medium load forecast are presented.
Ohlinger, L.A.
1958-10-01
A device is presented for loading or charging bodies of fissionable material into a reactor. This device consists of a car, mounted on tracks, into which the fissionable materials may be placed at a remote area, transported to the reactor, and inserted without danger to the operating personnel. The car has mounted on it a heavily shielded magazine for holding a number of the radioactive bodies. The magazine is of a U-shaped configuration and is inclined to the horizontal plane, with a cap covering the elevated open end, and a remotely operated plunger at the lower, closed end. After the fissionable bodies are loaded in the magazine and transported to the reactor, the plunger inserts the body at the lower end of the magazine into the reactor, then is withdrawn, thereby allowing gravity to roll the remaining bodies into position for successive loading in a similar manner.
Reconceptualizing balance: attributes associated with balance performance.
Thomas, Julia C; Odonkor, Charles; Griffith, Laura; Holt, Nicole; Percac-Lima, Sanja; Leveille, Suzanne; Ni, Pensheng; Latham, Nancy K; Jette, Alan M; Bean, Jonathan F
2014-09-01
Balance tests are commonly used to screen for impairments that put older adults at risk for falls. The purpose of this study was to determine the attributes that were associated with balance performance as measured by the Frailty and Injuries: Cooperative Studies of Intervention Techniques (FICSIT) balance test. This study was a cross-sectional secondary analysis of baseline data from a longitudinal cohort study, the Boston Rehabilitative Impairment Study of the Elderly (Boston RISE). Boston RISE was performed in an outpatient rehabilitation research center and evaluated Boston area primary care patients aged 65 to 96 (N=364) with self-reported difficulty or task-modification climbing a flight of stairs or walking 1/2 of a mile. The outcome measure was standing balance as measured by the FICSIT-4 balance assessment. Other measures included: self-efficacy, pain, depression, executive function, vision, sensory loss, reaction time, kyphosis, leg range of motion, trunk extensor muscle endurance, leg strength and leg velocity at peak power. Participants were 67% female, had an average age of 76.5 (±7.0) years, an average of 4.1 (±2.0) chronic conditions, and an average FICSIT-4 score of 6.7 (±2.2) out of 9. After adjusting for age and gender, attributes significantly associated with balance performance were falls self-efficacy, trunk extensor muscle endurance, sensory loss, and leg velocity at peak power. FICSIT-4 balance performance is associated with a number of behavioral and physiologic attributes, many of which are amenable to rehabilitative treatment. Our findings support a consideration of balance as multidimensional activity as proposed by the current International Classification of Functioning, Disability, and Health (ICF) model. PMID:24952097
ERIC Educational Resources Information Center
Csernus, Marilyn
Carbohydrate loading is a frequently used technique to improve performance by altering an athlete's diet. The objective is to increase glycogen stored in muscles for use in prolonged strenuous exercise. For two to three days, the athlete consumes a diet that is low in carbohydrates and high in fat and protein while continuing to exercise and…
Algorithms for parallel flow solvers on message passing architectures
NASA Astrophysics Data System (ADS)
Vanderwijngaart, Rob F.
1995-01-01
The purpose of this project has been to identify and test suitable technologies for implementation of fluid flow solvers -- possibly coupled with structures and heat equation solvers -- on MIMD parallel computers. In the course of this investigation, much attention has been paid to efficient domain decomposition strategies for ADI-type algorithms. Multi-partitioning derives its efficiency from the assignment of several blocks of grid points to each processor in the parallel computer: a coarse-grain parallelism is obtained, and a near-perfect load balance results. In uni-partitioning, every processor receives responsibility for exactly one block of grid points instead of several. This necessitates fine-grain pipelined program execution in order to obtain a reasonable load balance. Although fine-grain parallelism is less desirable on many systems, especially high-latency networks of workstations, uni-partition methods are still in wide use in production codes for flow problems. Consequently, it remains important to achieve good efficiency with this technique, even though it has essentially been superseded by multi-partitioning for parallel ADI-type algorithms. Another reason for concentrating on improving the performance of pipeline methods is their applicability to other types of flow solver kernels with stronger implied data dependence. Analytical expressions can be derived for the size of the dynamic load imbalance incurred in traditional pipelines; from these, the optimal first-processor retardation that leads to the shortest total completion time for the pipeline process can be determined. Theoretical predictions of pipeline performance with and without optimization match experimental observations on the iPSC/860 very well. Analysis of pipeline performance also highlights the effect of careless grid partitioning in flow solvers that employ pipeline algorithms. If grid blocks at boundaries are not at least as large in the wall-normal direction as those
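The fill/drain imbalance of fine-grain pipelining can be illustrated with a toy cost model (our own simplification, not the paper's analytical expressions): with P processors and K chunks per sweep, every processor sits idle during the (P - 1)-step pipeline fill.

```python
def pipeline_times(P, K, t=1.0):
    """Toy model of a unidirectional software pipeline: chunk j on
    processor p may start only after processor p-1 finishes chunk j.
    Returns (completion time, parallel efficiency)."""
    total = (P - 1 + K) * t      # (P-1) fill steps + K compute steps
    busy = K * t                 # useful work done by each processor
    return total, busy / total

t4, e4 = pipeline_times(P=4, K=12)     # 15.0, 0.8
t16, e16 = pipeline_times(P=16, K=12)  # 27.0, ~0.444
```

The model makes plain why uni-partitioned pipelines lose efficiency as the processor count grows relative to the work per sweep, which is the regime where the retardation optimization studied in the project matters.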
An Efficient Distributed Algorithm for Constructing Spanning Trees in Wireless Sensor Networks
Lachowski, Rosana; Pellenz, Marcelo E.; Penna, Manoel C.; Jamhour, Edgard; Souza, Richard D.
2015-01-01
Monitoring and data collection are the two main functions in wireless sensor networks (WSNs). Collected data are generally transmitted via multihop communication to a special node, called the sink. While in a typical WSN, nodes have a sink node as the final destination for the data traffic, in an ad hoc network, nodes need to communicate with each other. For this reason, routing protocols for ad hoc networks are inefficient for WSNs. Trees, on the other hand, are classic routing structures explicitly or implicitly used in WSNs. In this work, we implement and evaluate distributed algorithms for constructing routing trees in WSNs described in the literature. After identifying the drawbacks and advantages of these algorithms, we propose a new algorithm for constructing spanning trees in WSNs. The performance of the proposed algorithm and the quality of the constructed tree were evaluated in different network scenarios. The results showed that the proposed algorithm is a more efficient solution. Furthermore, the algorithm provides multiple routes to the sensor nodes to be used as mechanisms for fault tolerance and load balancing. PMID:25594593
An efficient distributed algorithm for constructing spanning trees in wireless sensor networks.
Lachowski, Rosana; Pellenz, Marcelo E; Penna, Manoel C; Jamhour, Edgard; Souza, Richard D
2015-01-01
Monitoring and data collection are the two main functions in wireless sensor networks (WSNs). Collected data are generally transmitted via multihop communication to a special node, called the sink. While in a typical WSN, nodes have a sink node as the final destination for the data traffic, in an ad hoc network, nodes need to communicate with each other. For this reason, routing protocols for ad hoc networks are inefficient for WSNs. Trees, on the other hand, are classic routing structures explicitly or implicitly used in WSNs. In this work, we implement and evaluate distributed algorithms for constructing routing trees in WSNs described in the literature. After identifying the drawbacks and advantages of these algorithms, we propose a new algorithm for constructing spanning trees in WSNs. The performance of the proposed algorithm and the quality of the constructed tree were evaluated in different network scenarios. The results showed that the proposed algorithm is a more efficient solution. Furthermore, the algorithm provides multiple routes to the sensor nodes to be used as mechanisms for fault tolerance and load balancing.
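A minimal, centralized version of tree construction with multiple routes can be sketched as follows: build a BFS spanning tree rooted at the sink, and for each node record every neighbor one hop closer to the sink as an alternate parent. (The actual distributed algorithm exchanges messages between nodes rather than reading a shared adjacency map.)

```python
from collections import deque

def bfs_tree_with_alternates(adj, sink):
    """Build a BFS spanning tree rooted at the sink; for each node,
    list all neighbors one hop closer to the sink as candidate parents
    (multiple routes enable fault tolerance and load balancing)."""
    dist = {sink: 0}
    frontier = deque([sink])
    while frontier:
        u = frontier.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                frontier.append(v)
    parents = {v: [u for u in adj[v] if dist[u] == dist[v] - 1]
               for v in adj if v != sink}
    return dist, parents

adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
dist, parents = bfs_tree_with_alternates(adj, sink=0)
# node 3 has two equally short routes: parents[3] == [1, 2]
```

A node with several candidate parents can fail over, or alternate between them to spread traffic.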
Makarov, Yuri V.; Huang, Zhenyu; Etingov, Pavel V.; Ma, Jian; Guttromson, Ross T.; Subbarao, Krishnappa; Chakrabarti, Bhujanga B.
2010-01-01
unique features make this work a significant step forward toward the objective of incorporating of wind, solar, load, and other uncertainties into power system operations. Currently, uncertainties associated with wind and load forecasts, as well as uncertainties associated with random generator outages and unexpected disconnection of supply lines, are not taken into account in power grid operation. Thus, operators have little means to weigh the likelihood and magnitude of upcoming events of power imbalance. In this project, funded by the U.S. Department of Energy (DOE), a framework has been developed for incorporating uncertainties associated with wind and load forecast errors, unpredicted ramps, and forced generation disconnections into the energy management system (EMS) as well as generation dispatch and commitment applications. A new approach to evaluate the uncertainty ranges for the required generation performance envelope including balancing capacity, ramping capability, and ramp duration has been proposed. The approach includes three stages: forecast and actual data acquisition, statistical analysis of retrospective information, and prediction of future grid balancing requirements for specified time horizons and confidence levels. Assessment of the capacity and ramping requirements is performed using a specially developed probabilistic algorithm based on a histogram analysis, incorporating all sources of uncertainties of both continuous (wind and load forecast errors) and discrete (forced generator outages and start-up failures) nature. A new method called the “flying brick” technique has been developed to evaluate the look-ahead required generation performance envelope for the worst case scenario within a user-specified confidence level. A self-validation algorithm has been developed to validate the accuracy of the confidence intervals.
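The histogram/percentile stage of such an approach can be sketched as follows, using synthetic Gaussian forecast errors and an illustrative `capacity_envelope` helper (our own names and numbers; the project's algorithm also folds in discrete events such as forced outages).

```python
import random

def capacity_envelope(imbalance_samples, confidence=0.95):
    """Return (down, up) balancing-capacity bounds that cover the given
    confidence level of the historical net-imbalance distribution."""
    s = sorted(imbalance_samples)
    lo_i = int((1 - confidence) / 2 * (len(s) - 1))
    hi_i = int((1 + confidence) / 2 * (len(s) - 1))
    return s[lo_i], s[hi_i]

random.seed(1)
# synthetic retrospective data: net imbalance = wind error + load error (MW)
samples = [random.gauss(0, 120) + random.gauss(0, 80) for _ in range(5000)]
down, up = capacity_envelope(samples, confidence=0.95)
# down < 0 < up: required down- and up-regulation capacity at 95% confidence
```

Ramping requirements follow the same pattern applied to differences between consecutive imbalance samples.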
An improved scheduling algorithm for 3D cluster rendering with platform LSF
NASA Astrophysics Data System (ADS)
Xu, Wenli; Zhu, Yi; Zhang, Liping
2013-10-01
High-quality photorealistic rendering of 3D models requires powerful computing systems, so highly efficient management of cluster resources is developing rapidly to exploit their advantages. This paper addresses how to improve the efficiency of 3D rendering tasks in a cluster. It focuses on a dynamic feedback load balance (DFLB) algorithm, the working principles of the Load Sharing Facility (LSF), and optimization of the external scheduler plug-in. The algorithm is applied in the match and allocation phases of a scheduling cycle. Candidate hosts are prepared in sequence in the match phase, and the scheduler makes allocation decisions for each job in the allocation phase. With the dynamic mechanism, a new weight is assigned to each candidate host for rearrangement, and the most suitable host is dispatched for rendering. A new plug-in module implementing this algorithm has been designed and integrated into the internal scheduler. Simulation experiments demonstrate that the improved plug-in module is superior to the default one for rendering tasks: it helps avoid load imbalance among servers, increases system throughput, and improves system utilization.
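The reweighting step of a dynamic feedback scheme can be sketched as follows. The metrics and weights are illustrative placeholders, not the paper's DFLB formula: each candidate host reports current load indicators every cycle, and the scheduler re-ranks the candidates before dispatching.

```python
def rank_hosts(hosts, w_cpu=0.6, w_mem=0.3, w_jobs=0.1):
    """Dynamic-feedback reweighting: lower score = more suitable host.
    Each host reports (cpu_util, mem_util, running_jobs) each cycle."""
    def score(name):
        cpu, mem, jobs = hosts[name]
        return w_cpu * cpu + w_mem * mem + w_jobs * jobs
    return sorted(hosts, key=score)

# hypothetical render farm snapshot: (cpu_util, mem_util, running_jobs)
hosts = {"render01": (0.90, 0.70, 6),
         "render02": (0.35, 0.50, 2),
         "render03": (0.60, 0.20, 4)}
best = rank_hosts(hosts)[0]   # "render02": lowest weighted load
```

Because the scores are recomputed from fresh feedback each scheduling cycle, a host that falls behind stops attracting new rendering jobs automatically.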
Loading relativistic Maxwell distributions in particle simulations
Zenitani, Seiji
2015-04-15
Numerical algorithms to load relativistic Maxwell distributions in particle-in-cell (PIC) and Monte Carlo simulations are presented. For the stationary relativistic Maxwellian, the inverse transform method and the Sobol algorithm are reviewed. To boost particles to obtain a relativistic shifted Maxwellian, two rejection methods are proposed in a physically transparent manner. Their acceptance efficiencies are ≈50% for generic cases and 100% for symmetric distributions, and they can be combined with arbitrary base algorithms.
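For reference, the Sobol algorithm reviewed above can be sketched in Python (stationary Maxwell-Jüttner only; the proposed boosting/rejection steps for shifted distributions are omitted). We assume the standard form of the algorithm: draw four uniforms, form two exponential-like variates, and accept when they satisfy the relativistic constraint.

```python
import math, random

def sobol_juttner(theta, rng):
    """Draw u = |p|/mc from a stationary Maxwell-Juttner distribution
    with temperature theta = kT/mc^2, via the Sobol rejection loop."""
    while True:
        x1, x2, x3, x4 = (rng.random() for _ in range(4))
        u = -theta * math.log(x1 * x2 * x3)
        eta = -theta * math.log(x1 * x2 * x3 * x4)
        if eta * eta - u * u > 1.0:   # accept this momentum magnitude
            return u

rng = random.Random(42)
us = [sobol_juttner(1.0, rng) for _ in range(20000)]
mean_gamma = sum(math.sqrt(1.0 + u * u) for u in us) / len(us)
# for theta = 1 the sample mean Lorentz factor should land near 3.37
```

In a full loader, each accepted magnitude is then given an isotropic direction, and for a drifting plasma the particles are boosted with one of the paper's rejection corrections.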
Aging effects on the structure underlying balance abilities tests.
Urushihata, Toshiya; Kinugasa, Takashi; Soma, Yuki; Miyoshi, Hirokazu
2010-01-01
Balance impairment is one of the biggest risk factors for falls, which reduce activity and can result in a need for nursing care. Therefore, balance ability is crucial to maintaining the activities of independent daily living of older adults. Many tests to assess balance ability have been developed. However, few reports reveal the structure underlying the results of balance performance tests comparing young and older adults. Covariance structure analysis is a tool that is used to test statistically whether a factorial structure fits the data. This study examined aging effects on the factorial structure underlying balance performance tests. Participants comprised 60 healthy young women aged 22 ± 3 years (young group) and 60 community-dwelling older women aged 69 ± 5 years (older group). Six balance tests were employed: postural sway, one-leg standing, functional reach, timed up and go (TUG), gait, and the EquiTest. Exploratory factor analysis revealed three clearly interpretable factors in the young group. The first factor had high loadings on the EquiTest and was interpreted as 'Reactive'. The second factor had high loadings on the postural sway test and was interpreted as 'Static'. The third factor had high loadings on TUG and the gait test and was interpreted as 'Dynamic'. Similarly, three interpretable factors were extracted in the older group. The first factor had high loadings on the postural sway test and the EquiTest and therefore was interpreted as 'Static and Reactive'. The second factor, which had high loadings on the EquiTest, was interpreted as 'Reactive'. The third factor, which had high loadings on TUG and the gait test, was interpreted as 'Dynamic'. A covariance structure model was applied to the test data: the second-order factor was balance ability, and the first-order factors were static, dynamic and reactive factors, which were assumed to be measured based on the six balance tests. Goodness-of-fit index (GFI) of the models were acceptable (young group, GFI
Aging Effects on the Structure Underlying Balance Abilities Tests
Kinugasa, Takashi; Soma, Yuki; Miyoshi, Hirokazu
2010-01-01
Balance impairment is one of the biggest risk factors for falls, which reduce activity and can result in a need for nursing care. Therefore, balance ability is crucial to maintaining the activities of independent daily living of older adults. Many tests to assess balance ability have been developed. However, few reports reveal the structure underlying the results of balance performance tests comparing young and older adults. Covariance structure analysis is a tool that is used to test statistically whether a factorial structure fits the data. This study examined aging effects on the factorial structure underlying balance performance tests. Participants comprised 60 healthy young women aged 22 ± 3 years (young group) and 60 community-dwelling older women aged 69 ± 5 years (older group). Six balance tests were employed: postural sway, one-leg standing, functional reach, timed up and go (TUG), gait, and the EquiTest. Exploratory factor analysis revealed three clearly interpretable factors in the young group. The first factor had high loadings on the EquiTest and was interpreted as ‘Reactive’. The second factor had high loadings on the postural sway test and was interpreted as ‘Static’. The third factor had high loadings on TUG and the gait test and was interpreted as ‘Dynamic’. Similarly, three interpretable factors were extracted in the older group. The first factor had high loadings on the postural sway test and the EquiTest and therefore was interpreted as ‘Static and Reactive’. The second factor, which had high loadings on the EquiTest, was interpreted as ‘Reactive’. The third factor, which had high loadings on TUG and the gait test, was interpreted as ‘Dynamic’. A covariance structure model was applied to the test data: the second-order factor was balance ability, and the first-order factors were static, dynamic and reactive factors, which were assumed to be measured based on the six balance tests. Goodness-of-fit index (GFI) of the models were
Efficient bulk-loading of gridfiles
NASA Technical Reports Server (NTRS)
Leutenegger, Scott T.; Nicol, David M.
1994-01-01
This paper considers the problem of bulk-loading large data sets for the gridfile multiattribute indexing technique. We propose a rectilinear partitioning algorithm that heuristically seeks to minimize the size of the gridfile needed to ensure no bucket overflows. Empirical studies on both synthetic data sets and data sets drawn from computational fluid dynamics applications demonstrate that our algorithm is very efficient and is able to handle large data sets. In addition, we present an algorithm for bulk-loading data sets too large to fit in main memory. Utilizing a sort of the entire data set, it creates a gridfile without incurring any overflows.
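The spirit of rectilinear partitioning can be sketched with a simple quantile heuristic: place the grid lines for each attribute at quantiles of the data so that points spread evenly over the buckets. This is a simplified stand-in for the paper's overflow-minimizing algorithm; the names are ours.

```python
from bisect import bisect_right

def rectilinear_partition(points, splits_x, splits_y):
    """Bulk-load heuristic for a 2-D gridfile: cut each axis at data
    quantiles, then assign every point to its (x-slab, y-slab) bucket."""
    def quantile_cuts(vals, k):
        vals = sorted(vals)
        return [vals[(i * len(vals)) // k] for i in range(1, k)]
    cx = quantile_cuts([x for x, _ in points], splits_x)
    cy = quantile_cuts([y for _, y in points], splits_y)
    buckets = {}
    for x, y in points:
        key = (bisect_right(cx, x), bisect_right(cy, y))
        buckets.setdefault(key, []).append((x, y))
    return cx, cy, buckets

pts = [(i, j) for i in range(8) for j in range(8)]   # uniform 8x8 grid
cx, cy, buckets = rectilinear_partition(pts, 2, 2)   # cuts at the medians
```

On skewed data, quantile cuts keep bucket occupancy far more even than equal-width cuts would, which is exactly what keeps the directory small and buckets from overflowing.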
Mullett, L.B.; Loach, B.G.; Adams, G.L.
1958-06-24
Loaded waveguides are described for the propagation of electromagnetic waves with reduced phase velocities. A rectangular waveguide is dimensioned so as to cut off the simple H/sub 01/ mode at the operating frequency. The waveguide is capacitively loaded, so as to reduce the phase velocity of the transmitted wave, by connecting an electrical conductor between directly opposite points in the major median plane on the narrower pair of waveguide walls. This conductor may take a corrugated shape or be an apertured member, the important factor being that the electrical length of the conductor is greater than one-half wavelength at the operating frequency. Prepared for the Second U.N. International Conference.

The importance of nuclear standards is discussed. A brief review of the international collaboration in this field is given. The proposal is made to let the International Organization for Standardization (ISO) coordinate the efforts from other groups. (W.D.M.)
Implementation and performance of a domain decomposition algorithm in Sisal
DeBoni, T.; Feo, J.; Rodrigue, G.; Muller, J.
1993-09-23
Sisal is a general-purpose functional language that hides the complexity of parallel processing, expedites parallel program development, and guarantees determinacy. Parallelism and management of concurrent tasks are realized automatically by the compiler and runtime system. Spatial domain decomposition is a widely-used method that focuses computational resources on the most active, or important, areas of a domain. Many complex programming issues are introduced in parallelizing this method, including dynamic spatial refinement, dynamic grid partitioning and fusion, task distribution, data distribution, and load balancing. In this paper, we describe a spatial domain decomposition algorithm programmed in Sisal. We explain the compilation process and present the execution performance of the resultant code on two different multiprocessor systems: a multiprocessor vector supercomputer and a cache-coherent scalar multiprocessor.
Maximizing TDRS Command Load Lifetime
NASA Technical Reports Server (NTRS)
Brown, Aaron J.
2002-01-01
The GNC software onboard ISS utilizes TDRS command loads and a simplistic model of TDRS orbital motion to generate onboard TDRS state vectors. Each TDRS command load contains five "invariant" orbital elements which serve as inputs to the onboard propagation algorithm: semi-major axis, inclination, time of last ascending node crossing, right ascension of ascending node, and mean motion. Running parallel to the onboard software is the TDRS Command Builder Tool application, located in the JSC Mission Control Center. The TDRS Command Builder Tool is responsible for building the TDRS command loads using a ground TDRS state vector, mirroring the onboard propagation algorithm, and assessing the fidelity of current TDRS command loads onboard ISS. The tool works by extracting a ground state vector at a given time from a current TDRS ephemeris, and then calculating the corresponding "onboard" TDRS state vector at the same time using the current onboard TDRS command load. The tool then performs a comparison between these two vectors and displays the relative differences in the command builder tool GUI. If the RSS position difference between these two vectors exceeds the tolerable limits, a new command load is built using the ground state vector and uplinked to ISS. A command load's lifetime is therefore defined as the time from when a command load is built to the time the RSS position difference exceeds the tolerable limit. From the outset of TDRS command load operations (STS-98), command load lifetime was limited to approximately one week due to the simplicity of both the onboard propagation algorithm and the algorithm used by the command builder tool to generate the invariant orbital elements. It was soon desired to extend command load lifetime in order to minimize potential risk due to frequent ISS commanding. Initial studies indicated that command load lifetime was most sensitive to changes in mean motion. Finding a suitable value for mean motion
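The builder-tool check described above reduces to a root-sum-square comparison of two position vectors. A sketch (the tolerance value here is a made-up placeholder, not the operational limit):

```python
import math

def rss(a, b):
    """Root-sum-square difference between two 3-D position vectors (km)."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def needs_new_load(ground_pos, onboard_pos, tol_km=3.0):
    """Rebuild and uplink a new command load when the RSS position
    difference exceeds the tolerance (tol_km is illustrative only)."""
    return rss(ground_pos, onboard_pos) > tol_km

ok = needs_new_load((7000.0, 0.0, 0.0), (7000.0, 2.0, 2.0))     # rss ≈ 2.83 km
stale = needs_new_load((7000.0, 0.0, 0.0), (7000.0, 3.0, 3.0))  # rss ≈ 4.24 km
```

Command-load lifetime is then the elapsed time from load build until this predicate first returns true.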
Study and Analyses on the Structural Performance of a Balance
NASA Technical Reports Server (NTRS)
Karkehabadi, R.; Rhew, R. D.; Hope, D. J.
2004-01-01
Strain-gauge balances for use in wind tunnels have been designed at Langley Research Center (LaRC) since its inception. Currently Langley has more than 300 balances available for its researchers. A force balance is inherently a critically stressed component due to the requirements of measurement sensitivity. The strain-gauge balances have been used in Langley's wind tunnels for a wide variety of aerodynamic tests, and the designs encompass a large array of sizes, loads, and environmental effects. There are six degrees of freedom that a balance has to measure. The balance's task of measuring these six degrees of freedom has introduced challenging work in transducer development technology areas. As the emphasis increases on improving aerodynamic performance of all types of aircraft and spacecraft, the demand for improved balances is at the forefront. Force balance stress analysis and acceptance criteria are under review due to LaRC wind tunnel operational safety requirements. This paper presents some of the analyses and research done at LaRC that influence the structural integrity of the balances. The analyses are helpful in understanding the overall behavior of existing balances and can be used in the design of new balances to enhance performance. Initially, a maximum load combination was used for a linear structural analysis. When nonlinear effects were encountered, the analysis was extended to include nonlinearities using MSC.Nastran. Because most of the balances are designed using Pro/Mechanica, it is desirable and efficient to use Pro/Mechanica for stress analysis. However, Pro/Mechanica is limited to linear analysis. Both Pro/Mechanica and MSC.Nastran are used for analyses in the present work. The structural integrity of balances and the possibility of modifying existing balances to enhance structural integrity are investigated.
A balanced view of balanced solutions.
Guidet, Bertrand; Soni, Neil; Della Rocca, Giorgio; Kozek, Sibylle; Vallet, Benoît; Annane, Djillali; James, Mike
2010-01-01
The present review of fluid therapy studies using balanced solutions versus isotonic saline fluids (both crystalloids and colloids) aims to address recent controversy in this topic. The change to the acid-base equilibrium based on fluid selection is described. Key terms such as dilutional-hyperchloraemic acidosis (correctly used instead of dilutional acidosis or hyperchloraemic metabolic acidosis to account for both the Henderson-Hasselbalch and Stewart equations), isotonic saline and balanced solutions are defined. The review concludes that dilutional-hyperchloraemic acidosis is a side effect, mainly observed after the administration of large volumes of isotonic saline as a crystalloid. Its effect is moderate and relatively transient, and is minimised by limiting crystalloid administration through the use of colloids (in any carrier). Convincing evidence for clinically relevant adverse effects of dilutional-hyperchloraemic acidosis on renal function, coagulation, blood loss, the need for transfusion, gastrointestinal function or mortality cannot be found. In view of the long-term use of isotonic saline either as a crystalloid or as a colloid carrier, the paucity of data documenting detrimental effects of dilutional-hyperchloraemic acidosis and the limited published information on the effects of balanced solutions on outcome, we cannot currently recommend changing fluid therapy to the use of a balanced colloid preparation.
Identifying Balance in a Balanced Scorecard System
ERIC Educational Resources Information Center
Aravamudhan, Suhanya; Kamalanabhan, T. J.
2007-01-01
In recent years, strategic management concepts seem to be gaining greater attention from academicians and practitioners alike. The Balanced Scorecard (BSC) concept is one such management concept that has spread in worldwide business and consulting communities. The BSC translates mission and vision statements into a comprehensive set of…
Shared memory, cache, and frontwidth considerations in multifrontal algorithm development
Benner, R.E.
1986-01-23
A concurrent, multifrontal algorithm (Benner and Weigand 1986) for the solution of finite element equations was modified to better use the cache and shared memories on the ELXSI 6400, and to achieve better load balancing between 'child' processes via frontwidth reduction. The changes were also tailored to use distributed memory machines efficiently by making most data local to individual processors. The test code initially used 8 Mbytes of uncached shared memory and 155 cp (concurrent processor) sec (a speedup of 1.4) when run on 4 processors. The changes left only 50 Kbytes of uncached, and 470 Kbytes of cached, shared memory, plus 530 Kbytes of data local to each 'child' process. Total cp time was reduced to 57 sec, and speedup increased to 2.8 on 4 processors. Based on those results, an addition to the ELXSI multitasking software, asynchronous I/O between processes, is proposed that would further decrease the shared memory requirements of the algorithm and make the ELXSI look like a distributed memory machine as far as algorithm development is concerned. This would make the ELXSI an extremely useful tool for further development of special-purpose, finite element computations. 16 refs., 8 tabs.
Parallelized FVM algorithm for three-dimensional viscoelastic flows
NASA Astrophysics Data System (ADS)
Dou, H.-S.; Phan-Thien, N.
A parallel implementation of the finite volume method (FVM) for three-dimensional (3D) viscoelastic flows is developed in a distributed computing environment through the Parallel Virtual Machine (PVM). The numerical procedure is based on the SIMPLEST algorithm using a staggered FVM discretization in Cartesian coordinates. The final discretized algebraic equations are solved with the TDMA method. The parallelisation of the program is implemented by a domain decomposition strategy, with a master/slave style programming paradigm and message passing through PVM. A load balancing strategy is proposed to reduce the communication between processors. The three-dimensional viscoelastic flow in a rectangular duct is computed with this program. The modified Phan-Thien-Tanner (MPTT) constitutive model is employed for the equation system closure. Computing results are validated on the secondary flow problem due to a non-zero second normal stress difference N2. Three sets of meshes are used, and the effect of domain decomposition strategies on the performance is discussed. It is found that parallel efficiency is strongly dependent on the grid size and the number of processors for a given block number. The convergence rate, as well as the total efficiency of domain decomposition, depends upon the flow problem and the boundary conditions. The parallel efficiency increases with increasing problem size for a given block number. Compared with two-dimensional flow problems, the 3D parallelized algorithm has a lower efficiency owing to largely overlapped block interfaces, but the parallel algorithm is indeed a powerful means for large-scale flow simulations.
ERIC Educational Resources Information Center
Blakley, G. R.
1982-01-01
Reviews mathematical techniques for solving systems of homogeneous linear equations and demonstrates that the algebraic method of balancing chemical equations is a matter of solving a system of homogeneous linear equations. FORTRAN programs applying this matrix method to chemical equation balancing are available from the author. (JN)
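The matrix method is straightforward to reproduce: write one row per element and one column per species (with product counts negated), then find an integer vector in the nullspace of that matrix. A self-contained Python sketch using exact rational arithmetic (our own code, not the author's FORTRAN), applied to C3H8 + O2 → CO2 + H2O:

```python
from fractions import Fraction
from functools import reduce
from math import gcd

def balance(matrix):
    """Balance a reaction from its element-by-species matrix (products
    entered with negative counts). Returns the smallest positive integer
    coefficient vector; assumes the nullspace is one-dimensional."""
    rows = [[Fraction(v) for v in row] for row in matrix]
    n = len(rows[0])
    pivots, r = [], 0
    for c in range(n):                     # Gauss-Jordan elimination
        piv = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        rows[r] = [v / rows[r][c] for v in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                f = rows[i][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    free = [c for c in range(n) if c not in pivots]
    x = [Fraction(0)] * n
    x[free[0]] = Fraction(1)               # set the free variable to 1
    for row, c in reversed(list(zip(rows, pivots))):
        x[c] = -sum(row[j] * x[j] for j in range(c + 1, n))
    lcm = reduce(lambda a, b: a * b // gcd(a, b), (v.denominator for v in x))
    coeffs = [int(v * lcm) for v in x]
    return coeffs if coeffs[0] > 0 else [-v for v in coeffs]

# columns: C3H8, O2, CO2, H2O; rows: C, H, O balance
print(balance([[3, 0, -1, 0],
               [8, 0, 0, -2],
               [0, 2, -2, -1]]))           # [1, 5, 3, 4]
```

The result [1, 5, 3, 4] reads as C3H8 + 5 O2 → 3 CO2 + 4 H2O.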
ERIC Educational Resources Information Center
Hines, Thomas E.
2011-01-01
Maintaining balance in leadership can be difficult because balance is affected by the personality, strengths, and attitudes of the leader as well as the complicated environment within and outside the community college itself. This article explores what being a leader at the community college means, what the threats are to effective leadership, and…
ERIC Educational Resources Information Center
La Porta, Rafael; Lopez-de-Silanes, Florencio; Pop-Eleches, Cristian; Shleifer, Andrei
2004-01-01
In the Anglo-American constitutional tradition, judicial checks and balances are often seen as crucial guarantees of freedom. Hayek distinguishes two ways in which the judiciary provides such checks and balances: judicial independence and constitutional review. We create a new database of constitutional rules in 71 countries that reflect these…
ERIC Educational Resources Information Center
Coulson, Eddie K.
2006-01-01
"The Technology Balance Beam" is designed to question the role of technology within school districts. This case study chronicles a typical school district in relation to the school district's implementation of technology beginning in the 1995-1996 school year. The fundamental question that this scenario raises is, What is the balance between…
ERIC Educational Resources Information Center
Mosey, Edward
1991-01-01
The booming economy of the Pacific Northwest region promotes the dilemma of balancing the need for increased electrical power with the desire to maintain that region's unspoiled natural environment. Pertinent factors discussed within the balance equation are population trends, economic considerations, industrial power requirements, and…
Attaway, S.W.; Hendrickson, B.A.; Plimpton, S.J.; Swegle, J.W.; Gardner, D.R.; Vaughan, C.T.
1997-05-01
An efficient, scalable, parallel algorithm for treating contacts in solid mechanics has been applied to interactions between particles in smooth particle hydrodynamics (SPH). The algorithm uses three different decompositions within a single timestep: (1) a static FE-decomposition of mesh elements; (2) a dynamic SPH-decomposition of SPH particles; and (3) a dynamic contact-decomposition of contact nodes and SPH particles. The overhead cost of such a scheme is the cost of moving mesh and particle data between the decompositions. This cost turns out to be small in practice, leading to a highly load-balanced decomposition in which to perform each of the three major computational stages within a timestep.
Optimum stacking sequence design of composite sandwich panel using genetic algorithms
NASA Astrophysics Data System (ADS)
Bir, Amarpreet Singh
Composite sandwich structures have recently gained preference over conventional metals and simple composite laminates for various structural components in the aerospace industry. For the most widely used composite sandwich structures, the optimization problem only requires determining the best stacking sequence and the number of laminae with different fiber orientations. The genetic algorithm, an optimization technique based on Darwin's principles of evolution and survival of the fittest, is well suited to solving such problems. The present research focuses on the stacking sequence optimization of composite sandwich panels with laminated face-sheets for both critical buckling load maximization and thickness minimization, subject to bi-axial compressive loading. Previous studies investigated only balanced, even-numbered simple composite laminate panels, ignoring the effects of bending-twisting coupling terms. The current work broadens the application of genetic algorithms to more complex composite sandwich panels with balanced, unbalanced, even- and odd-numbered face-sheet laminates, including the effects of bending-twisting coupling terms.
Development of a six component flexured two shell internal strain gage balance
NASA Astrophysics Data System (ADS)
Mole, P. J.
1993-01-01
The paper describes the development of a new wind tunnel balance designed to meet the load requirements of the new advanced aircraft. Based on the floating frame or two-shell concept, the Flexured Balance incorporates a separate axial element, thus allowing for higher load per unit diameter, reduced primary load interaction, and greater flexibility in load range selection. Described is the design process, fabrication, gaging, calibration results, and performance during tunnel testing of the first prototype balance. Supporting data and accuracies are provided.
Wallace, B.
1991-01-01
This book discusses the radiation effects on Drosophila. It was originally thought that irradiating Drosophila would decrease the average fitness of the population, thereby leading to information about the detrimental effects of mutations. Surprisingly, the fitness of the irradiated population turned out to be higher than that of the control population. The original motivation for the experiment was as a test of genetic load theory. The average fitness of a population is depressed by deleterious alleles held in the population by the balance between mutation and natural selection. The depression is called the genetic load of the population. The load does not depend on the magnitude of the deleterious effect of alleles, but only on the mutation rate.
Comparison of Building Energy Modeling Programs: Building Loads
Zhu, Dandan; Hong, Tianzhen; Yan, Da; Wang, Chuang
2012-06-01
identify the differences in solution algorithms, modeling assumptions and simplifications. Identifying inputs of each program and their default values or algorithms for load simulation was a critical step. These tend to be overlooked by users, but can lead to large discrepancies in simulation results. As weather data was an important input, weather file formats and weather variables used by each program were summarized. Some common mistakes in the weather data conversion process were discussed. ASHRAE Standard 140-2007 tests were carried out to test the fundamental modeling capabilities of the load calculations of the three BEMPs, where inputs for each test case were strictly defined and specified. The tests indicated that the cooling and heating load results of the three BEMPs fell mostly within the range of spread of results from other programs. Based on ASHRAE 140-2007 test results, the finer differences between DeST and EnergyPlus were further analyzed by designing and conducting additional tests. Potential key influencing factors (such as internal gains, air infiltration, convection coefficients of windows and opaque surfaces) were added one at a time to a simple base case with an analytical solution, to compare their relative impacts on load calculation results. Finally, special tests were designed and conducted aiming to ascertain the potential limitations of each program to perform accurate load calculations. The heat balance module was tested for both single and double zone cases. Furthermore, cooling and heating load calculations were compared between the three programs by varying the heat transfer between adjacent zones, the occupancy of the building, and the air-conditioning schedule.
Balanced Multiwavelets Based Digital Image Watermarking
NASA Astrophysics Data System (ADS)
Zhang, Na; Huang, Hua; Zhou, Quan; Qi, Chun
In this paper, an adaptive blind watermarking algorithm based on the balanced multiwavelet transform is proposed. According to the properties of balanced multiwavelets and the human visual system, a modified version of the well-established Lewis perceptual model is given, so that the strength of the embedded watermark is controlled by the local properties of the host image. The subbands of the balanced multiwavelet transform are similar to each other at the same scale, so the most similar subbands are chosen to embed the watermark by adaptively modifying the relation of the two subbands under the model; watermark extraction can then be performed without the original image. Experimental results show that the watermarked images look visually identical to the originals, and the watermark successfully survives image processing operations such as cropping, scaling, filtering and JPEG compression.
Load Leveling Battery System Costs
1994-10-12
SYSPLAN evaluates capital investment in customer side of the meter load leveling battery systems. Such systems reduce the customer's monthly electrical demand charge by reducing the maximum power load supplied by the utility during the customer's peak demand. System equipment consists of a large array of batteries, a current converter, and balance of plant equipment and facilities required to support the battery and converter system. The system is installed on the customer's side of the meter and controlled and operated by the customer. Its economic feasibility depends largely on the customer's load profile. Load shape requirements, utility rate structures, and battery equipment cost and performance data serve as bases for determining whether a load leveling battery system is economically feasible for a particular installation. Life-cycle costs for system hardware include all costs associated with the purchase, installation, and operation of battery, converter, and balance of plant facilities and equipment. The SYSPLAN spreadsheet software is specifically designed to evaluate these costs and the reduced demand charge benefits; it completes a 20 year period life cycle cost analysis based on the battery system description and cost data. A built-in sensitivity analysis routine is also included for key battery cost parameters. The life cycle cost analysis spreadsheet is augmented by a system sizing routine to help users identify load leveling system size requirements for their facilities. The optional XSIZE system sizing spreadsheet which is included can be used to identify a range of battery system sizes that might be economically attractive. XSIZE output consisting of system operating requirements can then be passed by the temporary file SIZE to the main SYSPLAN spreadsheet.
NASA Astrophysics Data System (ADS)
Mozdgir, A.; Mahdavi, Iraj; Seyyedi, I.; Shiraqei, M. E.
2011-06-01
An assembly line is a flow-oriented production system where the productive units performing the operations, referred to as stations, are aligned in a serial manner. The assembly line balancing problem arises and has to be solved when an assembly line has to be configured or redesigned. The so-called simple assembly line balancing problem (SALBP), a basic version of the general problem, has attracted the attention of researchers and practitioners of operations research for almost half a century. Four types of objective functions are considered for this kind of problem, and the versions of SALBP may be complemented by a secondary objective consisting of smoothing station loads. Because of the problem's computational complexity and the difficulty of identifying an optimal solution, many heuristics have been proposed for it. In this paper a differential evolution algorithm is developed to minimize the workload smoothness index in SALBP-2, and the algorithm parameters are optimized using the Taguchi method.
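As a rough illustration of the approach, and not the authors' algorithm, the sketch below runs a standard DE/rand/1/bin differential evolution over random-key vectors that decode to station assignments and minimizes a workload smoothness index. The precedence constraints of the real SALBP-2 are deliberately omitted, and all names are ours.

```python
import random

def smoothness(times, assign, m):
    """Workload smoothness index: sqrt of sum over stations of (Tmax - Ti)^2."""
    loads = [0.0] * m
    for t, s in zip(times, assign):
        loads[s] += t
    mx = max(loads)
    return sum((mx - L) ** 2 for L in loads) ** 0.5

def decode(keys, m):
    """Map continuous random keys in [0, 1) to station indices."""
    return [min(int(k * m), m - 1) for k in keys]

def de_salbp2(times, m, pop=30, gens=200, F=0.5, CR=0.9, seed=1):
    """Minimize the smoothness index with DE over random-key encodings."""
    rng = random.Random(seed)
    n = len(times)
    P = [[rng.random() for _ in range(n)] for _ in range(pop)]
    fit = [smoothness(times, decode(x, m), m) for x in P]
    for _ in range(gens):
        for i in range(pop):
            a, b, c = rng.sample([j for j in range(pop) if j != i], 3)
            jr = rng.randrange(n)          # guaranteed crossover position
            trial = [min(max(P[a][j] + F * (P[b][j] - P[c][j]), 0.0), 1 - 1e-9)
                     if rng.random() < CR or j == jr else P[i][j]
                     for j in range(n)]
            f = smoothness(times, decode(trial, m), m)
            if f <= fit[i]:                # greedy selection
                P[i], fit[i] = trial, f
    best = min(range(pop), key=fit.__getitem__)
    return decode(P[best], m), fit[best]
```

With task times [4, 3, 3, 2, 2, 2] and two stations, a perfectly smooth 8/8 split exists and the search settles on or near it.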
Magnetic suspension and balance systems (MSBSs)
NASA Technical Reports Server (NTRS)
Britcher, Colin P.; Kilgore, Robert A.
1987-01-01
The problems of wind tunnel testing are outlined, with attention given to the problems caused by mechanical support systems, such as support interference, dynamic-testing restrictions, and low productivity. The basic principles of magnetic suspension are highlighted, along with the history of magnetic suspension and balance systems. Roll control, size limitations, high angle of attack, reliability, position sensing, and calibration are discussed among the problems and limitations of the existing magnetic suspension and balance systems. Examples of the existing systems are presented, and design studies for future systems are outlined. Problems specific to large-scale magnetic suspension and balance systems, such as high model loads, requirements for high-power electromagnets, high-capacity power supplies, highly sophisticated control systems and position sensors, and high costs are assessed.
Multisensory integration in balance control.
Bronstein, A M
2016-01-01
This chapter provides an introduction to the topic of multisensory integration in balance control, in both health and disease. One of the best-studied examples is visuo-vestibular interaction, the ability of the visual system to enhance or suppress the vestibulo-ocular reflex (VOR suppression). Examination of VOR suppression is clinically useful because only central, not peripheral, lesions impair it. Visual, somatosensory (proprioceptive), and vestibular inputs interact strongly and continuously in the control of upright balance. Experiments with visual motion stimuli show that the visual system generates visually-evoked postural responses that, at least initially, can override vestibular and proprioceptive signals. This paradigm has been useful for the study of the syndrome of visual vertigo or vision-induced dizziness, which can appear after vestibular disease. These patients typically report dizziness when exposed to optokinetic stimuli or visually charged environments, such as supermarkets. The principles of the rehabilitation treatment of these patients, which use repeated exposure to visual motion, are presented. Finally, we offer a diagnostic algorithm for approaching the patient reporting oscillopsia, the illusion of oscillation of the visual environment, which should not be confused with the aforementioned syndrome of visual vertigo. PMID:27638062
Active balance system and vibration balanced machine
NASA Technical Reports Server (NTRS)
Qiu, Songgang (Inventor); Augenblick, John E. (Inventor); Peterson, Allen A. (Inventor); White, Maurice A. (Inventor)
2005-01-01
An active balance system is provided for counterbalancing vibrations of an axially reciprocating machine. The balance system includes a support member, a flexure assembly, a counterbalance mass, and a linear motor or an actuator. The support member is configured for attachment to the machine. The flexure assembly includes at least one flat spring having connections along a central portion and an outer peripheral portion. One of the central portion and the outer peripheral portion is fixedly mounted to the support member. The counterbalance mass is fixedly carried by the flexure assembly along another of the central portion and the outer peripheral portion. The linear motor has one of a stator and a mover fixedly mounted to the support member and another of the stator and the mover fixedly mounted to the counterbalance mass. The linear motor is operative to axially reciprocate the counterbalance mass.
Makarov, Yuri V.; Huang, Zhenyu; Etingov, Pavel V.; Ma, Jian; Guttromson, Ross T.; Subbarao, Krishnappa; Chakrabarti, Bhujanga B.
2010-09-01
features make this work a significant step forward toward the objective of incorporating wind, solar, load, and other uncertainties into power system operations. In this report, a new methodology to predict the uncertainty ranges for the required balancing capacity, ramping capability and ramp duration is presented. Uncertainties created by system load forecast errors, wind and solar forecast errors, and generation forced outages are taken into account. The uncertainty ranges are evaluated for different confidence levels of having the actual generation requirements within the corresponding limits. The methodology helps to identify the system balancing reserve requirement based on desired system performance levels, identify system “breaking points”, where the generation system becomes unable to follow the generation requirement curve with the user-specified probability level, and determine the time remaining to these potential events. The approach includes three stages: statistical and actual data acquisition, statistical analysis of retrospective information, and prediction of future grid balancing requirements for specified time horizons and confidence intervals. Assessment of the capacity and ramping requirements is performed using a specially developed probabilistic algorithm based on a histogram analysis incorporating all sources of uncertainty and parameters of a continuous (wind forecast and load forecast errors) and discrete (forced generator outages and failures to start up) nature. Preliminary simulations using California Independent System Operator (California ISO) real life data have shown the effectiveness of the proposed approach. A tool developed based on the new methodology described in this report will be integrated with the California ISO systems. Contractual work is currently in place to integrate the tool with the AREVA EMS system.
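The histogram-based evaluation of uncertainty ranges can be illustrated, in greatly simplified form, by taking empirical percentiles of historical net forecast-error samples at a chosen confidence level. This is only a sketch of the statistical idea with hypothetical function names, not the methodology integrated with the California ISO systems.

```python
def percentile(sorted_vals, q):
    """Linearly interpolated empirical percentile; q in [0, 1],
    input must already be sorted ascending."""
    idx = q * (len(sorted_vals) - 1)
    i = int(idx)
    frac = idx - i
    if i + 1 < len(sorted_vals):
        return sorted_vals[i] * (1 - frac) + sorted_vals[i + 1] * frac
    return sorted_vals[i]

def balancing_range(errors, confidence=0.95):
    """Symmetric-tail interval covering `confidence` of the historical
    net forecast-error samples (load + wind + solar errors combined)."""
    s = sorted(errors)
    tail = (1 - confidence) / 2
    return percentile(s, tail), percentile(s, 1 - tail)
```

Given error samples in MW, `balancing_range(errors, 0.95)` returns the band the balancing reserve would need to cover to stay within limits 95% of the time under this simplified model.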
Fault-Tolerant Algorithms for Connectivity Restoration in Wireless Sensor Networks
Zeng, Yali; Xu, Li; Chen, Zhide
2015-01-01
As wireless sensor networks (WSNs) are often deployed in hostile environments, nodes are prone to large-scale failures that can leave the network unable to operate normally. In this case, an effective restoration scheme is needed to restore the faulty network in a timely manner. Most existing restoration schemes focus on the number of deployed nodes or on fault tolerance alone, but fail to take into account that network coverage and topology quality are also important. To address this issue, we present two algorithms, the Full 2-Connectivity Restoration Algorithm (F2CRA) and the Partial 3-Connectivity Restoration Algorithm (P3CRA), which restore a faulty WSN in different aspects. F2CRA constructs a fan-shaped topology structure to reduce the number of deployed nodes, while P3CRA constructs a dual-ring topology structure to improve the fault tolerance of the network. F2CRA is suitable when restoration cost is given priority, and P3CRA is suitable when network quality is considered first. Compared with other algorithms, these two algorithms ensure that the network has stronger fault tolerance, larger coverage area and better balanced load after restoration. PMID:26703616
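Fault tolerance in the 2-connectivity sense targeted by F2CRA can be checked naively by verifying that the topology stays connected after removal of any single node. The pure-Python BFS sketch below (hypothetical function names) illustrates that check; it is not the paper's restoration algorithm.

```python
from collections import deque

def connected(adj, removed=frozenset()):
    """BFS reachability over an adjacency dict, ignoring `removed` nodes."""
    nodes = [n for n in adj if n not in removed]
    if not nodes:
        return True
    seen = {nodes[0]}
    queue = deque(seen)
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in removed and v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == len(nodes)

def survives_any_single_failure(adj):
    """True iff the topology remains connected after removing any one node,
    i.e. it has no cut vertex (2-connected in that sense)."""
    return connected(adj) and all(connected(adj, {n}) for n in adj)
```

A ring topology passes this check, while a simple path fails it at its middle node, which is why ring-like structures such as P3CRA's dual ring improve fault tolerance.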
Lober, R.R.; Tautges, T.J.; Vaughan, C.T.
1997-03-01
Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm and demonstrate its capabilities on both two dimensional and three dimensional surface geometries and compare the resulting parallel produced meshes to conventionally paved meshes for mesh quality and algorithm performance. Sandia's "tiling" dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.
Consideration of Dynamical Balances
NASA Technical Reports Server (NTRS)
Errico, Ronald M.
2015-01-01
The quasi-balance of extra-tropical tropospheric dynamics is a fundamental aspect of nature. If an atmospheric analysis does not reflect such balance sufficiently well, the subsequent forecast will exhibit unrealistic behavior associated with spurious fast-propagating gravity waves. Even if these eventually damp, they can create poor background fields for a subsequent analysis or interact with moist physics to create spurious precipitation. The nature of this problem will be described along with the reasons for atmospheric balance and techniques for mitigating imbalances. Attention will be focused on fundamental issues rather than on recipes for various techniques.
NASA Technical Reports Server (NTRS)
1996-01-01
NeuroCom's Balance Master is a system to assess and then retrain patients with balance and mobility problems and is used in several medical centers. NeuroCom received assistance in research and funding from NASA, and incorporated technology from testing mechanisms for astronauts after shuttle flights. The EquiTest and Balance Master Systems are computerized posturography machines that measure patient responses to movement of a platform on which the subject is standing or sitting, then provide assessments of the patient's postural alignment and stability.
Forbes, G.B.; Lantigua, R.; Amatruda, J.M.; Lockwood, D.H.
1981-01-01
Six overweight adult subjects given a low-calorie diet containing adequate amounts of nitrogen but subnormal amounts of potassium (K) were observed in the Clinical Research Center for periods of 29 to 40 days. Metabolic balance of potassium was measured together with frequent assays of total body K by ⁴⁰K counting. Metabolic K balance underestimated body K losses by 11 to 87% (average 43%); the intersubject variability is such as to preclude the use of a single correction value for unmeasured losses in K balance studies.
Cook, G.; Brown, H.; Strawn, N.
1996-12-31
Nature seeks a balance. The global carbon cycle, in which carbon is exchanged between the atmosphere, biosphere, and oceans through natural processes such as absorption, photosynthesis, and respiration, is one of those balances. This constant exchange promotes an equilibrium in which atmospheric carbon dioxide is kept relatively steady over long periods of time. For the last 10,000 years, up to the 19th century, the global carbon cycle has maintained atmospheric concentrations of carbon dioxide between 260 and 290 ppm. This article discusses the disturbance of the balance, how ethanol fuels address the carbon dioxide imbalance, and a bioethanol strategy.
The Challenge is to develop ideas for how NASA can turn available entry, descent, and landing balance mass on a future Mars mission into a scientific or technological payload. Proposed concepts sho...
Fowler, Kimberly M.
2008-05-01
This essay is being proposed as part of a book titled: "Motherhood: The Elephant in the Laboratory." It offers professional and personal advice on how to balance working in the research field with a family life.
Posttraumatic balance disorders.
Hoffer, Michael E; Balough, Ben J; Gottshall, Kim R
2007-01-01
Head trauma is being more frequently recognized as a causative agent in balance disorders. Most of the published literature examining traumatic brain injury (TBI) after head trauma has focused on short-term prognostic indicators and neurocognitive disorders. Few data are available to guide those individuals who see patients with balance disorders secondary to TBI. Our group has previously examined balance disorders after mild head trauma. In this study, we study all classes of head trauma. We provide a classification system that is useful in the diagnosis and management of balance disorders after head trauma and we examine treatment outcomes. As dizziness is one of the most common outcomes of TBI, it is essential that those who study and treat dizziness be familiar with this subject. PMID:17691667
NASA Technical Reports Server (NTRS)
1991-01-01
Researchers at the Balance Function Laboratory and Clinic at the Minneapolis (MN) Neuroscience Institute on the Abbot Northwestern Hospital Campus are using a rotational chair (technically a "sinusoidal harmonic acceleration system") originally developed by NASA to investigate vestibular (inner ear) function in weightlessness to diagnose and treat patients with balance function disorders. Manufactured by ICS Medical Corporation, Schaumberg, IL, the chair system turns a patient and monitors his or her responses to rotational stimulation.
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Lomax, Harvard
1987-01-01
The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.
Load Control System Reliability
Trudnowski, Daniel
2015-04-03
This report summarizes the results of the Load Control System Reliability project (DOE Award DE-FC26-06NT42750). The original grant was awarded to Montana Tech April 2006. Follow-on DOE awards and expansions to the project scope occurred August 2007, January 2009, April 2011, and April 2013. In addition to the DOE monies, the project also consisted of matching funds from the states of Montana and Wyoming. Project participants included Montana Tech; the University of Wyoming; Montana State University; NorthWestern Energy, Inc.; and MSE. Research focused on two areas: real-time power-system load control methodologies; and power-system measurement-based stability-assessment operation and control tools. The majority of effort was focused on area 2. Results from the research include: development of fundamental power-system dynamic concepts, control schemes, and signal-processing algorithms; many papers (including two prize papers) in leading journals and conferences and leadership of IEEE activities; one patent; participation in major actual-system testing in the western North American power system; prototype power-system operation and control software installed and tested at three major North American control centers; and the incubation of a new commercial-grade operation and control software tool. Work under this grant certainly supported the DOE-OE goals in the area of “Real Time Grid Reliability Management.”
NASA Astrophysics Data System (ADS)
Robinson, Ian A.
2014-04-01
The time is fast approaching when the SI unit of mass will cease to be based on a single material artefact and will instead be based upon the defined value of a fundamental constant, the Planck constant h. This change requires that techniques exist both to determine the appropriate value to be assigned to the constant, and to measure mass in terms of the redefined unit. It is important to ensure that these techniques are accurate and reliable to allow full advantage to be taken of the stability and universality provided by the new definition and to guarantee the continuity of the world's mass measurements, which can affect the measurement of many other quantities such as energy and force. Up to now, efforts to provide the basis for such a redefinition of the kilogram were mainly concerned with resolving the discrepancies between individual implementations of the two principal techniques: the x-ray crystal density (XRCD) method [1] and the watt and joule balance methods which are the subject of this special issue. The first three papers report results from the NRC and NIST watt balance groups and the NIM joule balance group. The result from the NRC (formerly the NPL Mk II) watt balance is the first to be reported with a relative standard uncertainty below 2 × 10⁻⁸ and the NIST result has a relative standard uncertainty below 5 × 10⁻⁸. Both results are shown in figure 1 along with some previous results; the result from the NIM group is not shown on the plot but has a relative uncertainty of 8.9 × 10⁻⁶ and is consistent with all the results shown. The Consultative Committee for Mass and Related Quantities (CCM) in its meeting in 2013 produced a resolution [2] which set out the requirements for the number, type and quality of results intended to support the redefinition of the kilogram and required that there should be agreement between them. These results from NRC, NIST and the IAC may be considered to meet these requirements and are likely to be widely debated.
14 CFR 23.427 - Unsymmetrical loads.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Balancing Surfaces § 23.427 Unsymmetrical loads. (a) Horizontal surfaces other than main wing and their..., wings, horizontal surfaces other than main wing, and fuselage shape: (1) 100 percent of the maximum... surfaces other than main wing having appreciable dihedral or supported by the vertical tail surfaces)...
14 CFR 23.427 - Unsymmetrical loads.
Code of Federal Regulations, 2012 CFR
2012-01-01
... Balancing Surfaces § 23.427 Unsymmetrical loads. (a) Horizontal surfaces other than main wing and their..., wings, horizontal surfaces other than main wing, and fuselage shape: (1) 100 percent of the maximum... surfaces other than main wing having appreciable dihedral or supported by the vertical tail surfaces)...
NASA Technical Reports Server (NTRS)
Thompson, Bryan
2000-01-01
This is the final report for a project carried out to modify a manual commercial Cavendish Balance for automated use in a cryostat. The scope of this project was to modify an off-the-shelf manually operated Cavendish Balance to allow for automated operation for periods of hours or days in a cryostat. The purpose of this modification was to allow the balance to be used in the study of effects of superconducting materials on the local gravitational field strength, to determine whether the strength of gravitational fields can be reduced. A Cavendish Balance was chosen because it is a fairly simple piece of equipment for measuring the gravitational constant, one of the least accurately known and least understood physical constants. The principal activities that occurred under this purchase order were: (1) All the components necessary to hold and automate the Cavendish Balance in a cryostat were designed. Engineering drawings were made of custom parts to be fabricated; other off-the-shelf parts were procured. (2) Software was written in LabView to control the automation process via a stepper motor controller and stepper motor, and to collect data from the balance during testing. (3) Software was written to take the data collected from the Cavendish Balance and reduce it to give a value for the gravitational constant. (4) The components of the system were assembled and fitted to a cryostat. Also the LabView hardware, including the control computer, stepper motor driver, data collection boards, and necessary cabling, was assembled. (5) The system was operated for a number of periods, and the data were collected and reduced to give an average value for the gravitational constant.
Hidden Connections between Regression Models of Strain-Gage Balance Calibration Data
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert
2013-01-01
Hidden connections between regression models of wind tunnel strain-gage balance calibration data are investigated. These connections become visible whenever balance calibration data is supplied in its design format and both the Iterative and Non-Iterative Method are used to process the data. First, it is shown how the regression coefficients of the fitted balance loads of a force balance can be approximated by using the corresponding regression coefficients of the fitted strain-gage outputs. Then, data from the manual calibration of the Ames MK40 six-component force balance is chosen to illustrate how estimates of the regression coefficients of the fitted balance loads can be obtained from the regression coefficients of the fitted strain-gage outputs. The study illustrates that load predictions obtained by applying the Iterative or the Non-Iterative Method originate from two related regression solutions of the balance calibration data as long as balance loads are given in the design format of the balance, gage outputs behave highly linearly, strict statistical quality metrics are used to assess regression models of the data, and regression model term combinations of the fitted loads and gage outputs can be obtained by a simple variable exchange.
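The coefficient relationship described in this abstract can be illustrated numerically. The sketch below is a hypothetical toy example (the two-component balance, sensitivity matrix, and noise level are invented for illustration, not taken from the MK40 data): when gage outputs are nearly linear in the applied loads, the loads-from-outputs regression is approximately the matrix inverse of the outputs-from-loads regression.

```python
import numpy as np

# Toy illustration (invented numbers, not MK40 data): with nearly linear
# gage outputs R = F C, the fitted loads-from-outputs coefficients are
# approximately the matrix inverse of the outputs-from-loads coefficients.
rng = np.random.default_rng(1)
C = np.array([[2.0, 0.1],
              [0.05, 1.5]])                      # assumed sensitivity matrix
F = rng.uniform(-1.0, 1.0, size=(50, 2))         # applied calibration loads
R = F @ C + rng.normal(0.0, 1e-6, size=(50, 2))  # nearly linear gage outputs

C_hat, *_ = np.linalg.lstsq(F, R, rcond=None)    # fit outputs from loads
D_hat, *_ = np.linalg.lstsq(R, F, rcond=None)    # fit loads from outputs

print(np.allclose(D_hat, np.linalg.inv(C_hat), atol=1e-4))
```

The "simple variable exchange" in the abstract corresponds here to swapping which matrix plays the role of regressor and which the role of response.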
Accelerating Dust Storm Simulation by Balancing Task Allocation in Parallel Computing Environment
NASA Astrophysics Data System (ADS)
Gui, Z.; Yang, C.; XIA, J.; Huang, Q.; YU, M.
2013-12-01
A quadratic-programming-based modeling method is proposed. This algorithm performs well with a small number of computing tasks. However, its efficiency decreases significantly as the subdomain number and computing node number increase. 2) To compensate for this performance decrease on large-scale tasks, a K-Means-clustering-based algorithm is introduced. Instead of striving for optimal solutions, this method obtains relatively good feasible solutions within acceptable time. However, it may introduce communication imbalance between nodes or node-isolated subdomains. This research shows that both algorithms have their own strengths and weaknesses for task allocation. A combination of the two algorithms is under study to obtain better performance. Keywords: Scheduling; Parallel Computing; Load Balance; Optimization; Cost Model
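As a rough illustration of the clustering idea only (a generic k-means sketch with invented grid coordinates, not the authors' implementation), subdomains can be grouped by spatial proximity so that each computing node receives a contiguous set of subdomains and communication stays mostly local:

```python
import numpy as np

def kmeans_allocate(coords, k, iters=20, seed=0):
    """Group subdomain centroids into k clusters by spatial proximity,
    so each computing node gets a contiguous set of subdomains."""
    rng = np.random.default_rng(seed)
    centers = coords[rng.choice(len(coords), size=k, replace=False)]
    for _ in range(iters):
        # distance from every subdomain to every cluster centre
        dist = np.linalg.norm(coords[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        # move each centre to the mean of its assigned subdomains
        for j in range(k):
            if (labels == j).any():
                centers[j] = coords[labels == j].mean(axis=0)
    return labels

# 40 subdomain centroids on an 8 x 5 grid, allocated to 4 nodes
coords = np.array([[x, y] for x in range(8) for y in range(5)], dtype=float)
labels = kmeans_allocate(coords, k=4)
print(np.bincount(labels, minlength=4))   # subdomains per node
```

Note that, as the abstract warns, nothing in plain k-means guarantees equal cluster sizes, which is exactly the load-imbalance weakness the authors observe.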
A Root Zone Water Balance Algorithm for Educational Settings.
ERIC Educational Resources Information Center
Cahoon, Joel E.; Ferguson, Richard B.
1995-01-01
Describes a simple technique for monitoring root zone water status on demonstration project fields and incorporating the demonstration site results into workshop-type educational settings. Surveys indicate the presentation was well received by demonstration project cooperators and educators. (LZ)
Luminance and contrast ideal balancing based tone mapping algorithm
NASA Astrophysics Data System (ADS)
Besrour, Amine; Abdelkefi, Fatma; Siala, Mohamed; Snoussi, Hichem
2015-09-01
The tone mapping field represents a challenge for all HDR researchers. Indeed, this field is very important since it offers better display quality for the end user. This paper details the design of a recent tone mapping operator used in high dynamic range imaging systems. The proposed operator is a local method that uses an adaptable factor combining both the average neighbouring contrast and the brightness difference. Thanks to that, this solution provides good results with better brightness, contrast, and visibility, without producing undesired artifacts or shadow effects.
An efficient parallel termination detection algorithm
Baker, A. H.; Crivelli, S.; Jessup, E. R.
2004-05-27
Information local to any one processor is insufficient to monitor the overall progress of most distributed computations. Typically, a second distributed computation for detecting termination of the main computation is necessary. In order to be a useful computational tool, the termination detection routine must operate concurrently with the main computation, adding minimal overhead, and it must promptly and correctly detect termination when it occurs. In this paper, we present a new algorithm for detecting the termination of a parallel computation on distributed-memory MIMD computers that satisfies all of those criteria. A variety of termination detection algorithms have been devised. Of these, the algorithm presented by Sinha, Kale, and Ramkumar (henceforth, the SKR algorithm) is unique in its ability to adapt to the load conditions of the system on which it runs, thereby minimizing the impact of termination detection on performance. Because their algorithm also detects termination quickly, we consider it to be the most efficient practical algorithm presently available. The termination detection algorithm presented here was developed for use in the PMESC programming library for distributed-memory MIMD computers. Like the SKR algorithm, our algorithm adapts to system loads and imposes little overhead. Also like the SKR algorithm, ours is tree-based, and it does not depend on any assumptions about the physical interconnection topology of the processors or the specifics of the distributed computation. In addition, our algorithm is easier to implement and requires only half as many tree traversals as does the SKR algorithm. This paper is organized as follows. In section 2, we define our computational model. In section 3, we review the SKR algorithm. We introduce our new algorithm in section 4, and prove its correctness in section 5. We discuss its efficiency and present experimental results in section 6.
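The core safety condition behind tree-based detectors of this kind can be shown in toy form. The fragment below is a deliberately simplified, sequential illustration of the generic idle-plus-message-count test (it is neither the SKR algorithm nor the paper's new one): termination may be declared only when every processor reports idle and no application message is still in flight.

```python
def wave(procs, in_flight):
    """One control wave over the processor tree: combine idle flags with
    the count of application messages still in transit."""
    return all(p["idle"] for p in procs) and in_flight == 0

procs = [{"idle": False}, {"idle": True}, {"idle": True}]
in_flight = 1

print(wave(procs, in_flight))   # False: one processor is still busy
procs[0]["idle"] = True
print(wave(procs, in_flight))   # False: a message is still in flight
in_flight = 0
print(wave(procs, in_flight))   # True: safe to declare termination
```

Real algorithms must gather these flags and counts concurrently with the main computation, which is where the tree traversals counted in the abstract come from.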
NASA Technical Reports Server (NTRS)
Holliday, Ezekiel S. (Inventor)
2014-01-01
Vibrations of a principal machine are reduced at the fundamental and harmonic frequencies by driving the drive motor of an active balancer with balancing signals at the fundamental and selected harmonics. Vibrations are sensed to provide a signal representing the mechanical vibrations. A balancing signal generator for the fundamental and for each selected harmonic processes the sensed vibration signal with adaptive filter algorithms of adaptive filters for each frequency to generate a balancing signal for each frequency. Reference inputs for each frequency are applied to the adaptive filter algorithms of each balancing signal generator at the frequency assigned to the generator. The harmonic balancing signals for all of the frequencies are summed and applied to drive the drive motor. The harmonic balancing signals drive the drive motor with a drive voltage component in opposition to the vibration at each frequency.
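The adaptive-filter scheme above can be sketched for a single frequency (the step size, signal model, and function names below are illustrative assumptions, not the patented implementation): two LMS weights on a sine/cosine reference at the target frequency adapt until the generated balancing signal opposes the sensed vibration.

```python
import math

def lms_cancel(vib, freq, fs, mu=0.05, steps=4000):
    """Single-frequency LMS canceller: adapt sine/cosine weights so the
    output y opposes the vibration vib at the given frequency."""
    w_s = w_c = 0.0
    err = 0.0
    for n in range(steps):
        rs = math.sin(2 * math.pi * freq * n / fs)   # reference inputs
        rc = math.cos(2 * math.pi * freq * n / fs)
        y = w_s * rs + w_c * rc                      # balancing signal
        err = vib(n) - y                             # residual vibration
        w_s += mu * err * rs                         # LMS weight updates
        w_c += mu * err * rc
    return abs(err)

fs, f0 = 1000.0, 50.0
vibration = lambda n: 2.0 * math.sin(2 * math.pi * f0 * n / fs + 0.3)
residual = lms_cancel(vibration, f0, fs)
print(residual)   # residual vibration after adaptation is near zero
```

In the patent's scheme one such generator runs per selected harmonic, and the per-frequency balancing signals are summed before driving the balancer motor.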
Ascent guidance algorithm using lidar wind measurements
NASA Technical Reports Server (NTRS)
Cramer, Evin J.; Bradt, Jerre E.; Hardtla, John W.
1990-01-01
The formulation of a general nonlinear programming guidance algorithm that incorporates wind measurements in the computation of ascent guidance steering commands is discussed. A nonlinear programming (NLP) algorithm that is designed to solve a very general problem has the potential to address the diversity demanded by future launch systems. Using B-splines for the command functional form allows the NLP algorithm to adjust the shape of the command profile to achieve optimal performance. The algorithm flexibility is demonstrated by simulation of ascent with dynamic loading constraints through a set of random wind profiles with and without wind sensing capability.
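To make the B-spline command parameterization concrete, here is a generic textbook de Boor evaluation with an invented knot vector and control values (not the guidance code itself): the NLP optimizer would adjust the control points, and the spline turns them into a smooth steering-command profile.

```python
def deboor(k, x, t, c, p):
    """De Boor's algorithm: evaluate a degree-p B-spline with knot vector t
    and control points c at x, where x lies in the knot span [t[k], t[k+1])."""
    d = [c[j + k - p] for j in range(p + 1)]
    for r in range(1, p + 1):
        for j in range(p, r - 1, -1):
            alpha = (x - t[j + k - p]) / (t[j + 1 + k - r] - t[j + k - p])
            d[j] = (1 - alpha) * d[j - 1] + alpha * d[j]
    return d[p]

# Clamped cubic spline: 6 control values of a hypothetical pitch command
t = [0, 0, 0, 0, 1, 2, 3, 3, 3, 3]      # knot vector (seconds, invented)
c = [0.0, 2.0, 5.0, 4.0, 1.5, 0.5]      # control points (degrees, invented)
print(deboor(3, 0.0, t, c, 3))          # profile start: first control point
print(deboor(5, 3.0, t, c, 3))          # profile end: last control point
```

The clamped knot vector makes the command profile interpolate its first and last control points, a convenient property for matching boundary conditions on the steering command.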
NASA Astrophysics Data System (ADS)
Slattery, Stuart R.
2016-02-01
In this paper we analyze and extend mesh-free algorithms for three-dimensional data transfer problems in partitioned multiphysics simulations. We first provide a direct comparison between a mesh-based weighted residual method using the common-refinement scheme and two mesh-free algorithms leveraging compactly supported radial basis functions: one using a spline interpolation and one using a moving least square reconstruction. Through the comparison we assess both the conservation and accuracy of the data transfer obtained from each of the methods. We do so for a varying set of geometries with and without curvature and sharp features and for functions with and without smoothness and with varying gradients. Our results show that the mesh-based and mesh-free algorithms are complementary with cases where each was demonstrated to perform better than the other. We then focus on the mesh-free methods by developing a set of algorithms to parallelize them based on sparse linear algebra techniques. This includes a discussion of fast parallel radius searching in point clouds and restructuring the interpolation algorithms to leverage data structures and linear algebra services designed for large distributed computing environments. The scalability of our new algorithms is demonstrated on a leadership class computing facility using a set of basic scaling studies. These scaling studies show that for problems with reasonable load balance, our new algorithms for both spline interpolation and moving least square reconstruction demonstrate both strong and weak scalability using more than 100,000 MPI processes with billions of degrees of freedom in the data transfer operation.
Using Process Load Cell Information for IAEA Safeguards at Enrichment Plants
Laughter, Mark D; Whitaker, J Michael; Howell, John
2010-01-01
Uranium enrichment service providers are expanding existing enrichment plants and constructing new facilities to meet demands resulting from the shutdown of gaseous diffusion plants, the completion of the U.S.-Russia highly enriched uranium downblending program, and the projected global renaissance in nuclear power. The International Atomic Energy Agency (IAEA) conducts verification inspections at safeguarded facilities to provide assurance that signatory States comply with their treaty obligations to use nuclear materials only for peaceful purposes. Continuous, unattended monitoring of load cells in UF{sub 6} feed/withdrawal stations can provide safeguards-relevant process information to make existing safeguards approaches more efficient and effective and enable novel safeguards concepts such as information-driven inspections. The IAEA has indicated that process load cell monitoring will play a central role in future safeguards approaches for large-scale gas centrifuge enrichment plants. This presentation will discuss previous work and future plans related to continuous load cell monitoring, including: (1) algorithms for automated analysis of load cell data, including filtering methods to determine significant weights and eliminate irrelevant impulses; (2) development of metrics for declaration verification and off-normal operation detection ('cylinder counting,' near-real-time mass balancing, F/P/T ratios, etc.); (3) requirements to specify what potentially sensitive data is safeguards relevant, at what point the IAEA gains on-site custody of the data, and what portion of that data can be transmitted off-site; (4) authentication, secure on-site storage, and secure transmission of load cell data; (5) data processing and remote monitoring schemes to control access to sensitive and proprietary information; (6) integration of process load cell data in a layered safeguards approach with cross-check verification; (7) process mock-ups constructed to provide simulated
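Item (1) above, filtering load cell data to determine significant weights, can be illustrated with a toy pass that keeps only readings stable over a short window and discards transient impulses. The window size, tolerance, and trace below are invented assumptions, not the deployed algorithm:

```python
def significant_weights(samples, window=5, tol=0.5):
    """Keep only weight levels that are stable over a sliding window
    (small spread), filtering out transient impulses such as bumps."""
    weights = []
    for i in range(len(samples) - window + 1):
        w = samples[i:i + window]
        if max(w) - min(w) <= tol:                 # stable plateau
            level = sum(w) / window
            if not weights or abs(weights[-1] - level) > tol:
                weights.append(round(level, 2))    # new significant weight
    return weights

# cylinder attach (0 -> ~1470 kg), impulse spike, then detach back to 0
trace = ([0] * 6 + [500, 1200]
         + [1470.1, 1470.0, 1469.9, 1470.2, 1470.0] * 2
         + [800] + [0] * 6)
print(significant_weights(trace))
```

A pass like this yields the sequence of significant weights (empty, full, empty) from which "cylinder counting" and near-real-time mass balancing metrics could be derived.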
The cryogenic balance design and balance calibration methods
NASA Astrophysics Data System (ADS)
Ewald, B.; Polanski, L.; Graewe, E.
1992-07-01
The current status of a program aimed at the development of a cryogenic balance for the European Transonic Wind Tunnel is reviewed. In particular, attention is given to the cryogenic balance design philosophy, mechanical balance design, reliability and accuracy, the cryogenic balance calibration concept, and the concept of an automatic calibration machine. It is shown that the use of the automatic calibration machine will improve the accuracy of calibration while reducing the manpower and time required for balance calibration.
NASA Technical Reports Server (NTRS)
Malcolm, G. N.
1981-01-01
Two wind tunnel techniques for determining part of the aerodynamic information required to describe the dynamic behavior of various types of vehicles in flight are described. Force and moment measurements are determined with a rotary-balance apparatus in a coning motion and with a Magnus balance in a high-speed spinning motion. Coning motion is pertinent to both aircraft and missiles, and spinning is important for spin-stabilized missiles. Basic principles of both techniques are described, and specific examples of each type of apparatus are presented. Typical experimental results are also discussed.
Stochastic solution of population balance equations for reactor networks
Menz, William J.; Akroyd, Jethro; Kraft, Markus
2014-01-01
This work presents a sequential modular approach to solve a generic network of reactors with a population balance model using a stochastic numerical method. Full coupling to the gas phase is achieved through operator splitting. The convergence of the stochastic particle algorithm in test networks is evaluated as a function of network size, recycle fraction and numerical parameters. These test cases are used to identify methods through which systematic and statistical error may be reduced, including by use of stochastic weighted algorithms. The optimal algorithm was subsequently used to solve a one-dimensional example of silicon nanoparticle synthesis using a multivariate particle model. This example demonstrated the power of stochastic methods in resolving particle structure by investigating the transient and spatial evolution of primary polydispersity, degree of sintering and TEM-style images. Highlights: •An algorithm is presented to solve reactor networks with a population balance model. •A stochastic method is used to solve the population balance equations. •The convergence and efficiency of the reported algorithms are evaluated. •The algorithm is applied to simulate silicon nanoparticle synthesis in a 1D reactor. •Particle structure is reported as a function of reactor length and time.
Offshore tanker loading system
Baan, J. de; Heijst, W.J. van.
1994-01-04
The present invention relates to an improved flexible loading system which provides fluid communication between a subsea pipeline and a surface vessel including a hose extending from the subsea pipeline to a first buoyancy tank, a second hose extending from the first buoyancy tank to a central buoyancy tank, a second buoyancy tank, means connecting said second buoyancy tank to the sea floor and to the central buoyancy tank whereby the forces exerted on said central buoyancy tank by said second hose and said connecting means are balanced to cause said central buoyancy tank to maintain a preselected position, a riser section extending upwardly from said central buoyancy tank and means on the upper termination for engagement by a vessel on the surface to raise said upper termination onto the vessel to complete the communication for moving fluids between the subsea pipeline and the vessel. In one form the means for connecting between the sea floor and the second buoyancy tank includes an anchor on the sea floor and lines extending from the anchor to the second buoyancy tank and from the second buoyancy tank to the central buoyancy tank. In another form of the invention the means for connecting is a third hose extending from a second subsea pipeline to the second buoyancy tank and a fourth hose extending from the second buoyancy tank to the central buoyancy tank. The central buoyancy tank is preferably maintained at a level below the water surface which allows full movement of the vessel while connected to the riser section. A swivel may be positioned in the riser section and a pressure relief system may be included in the loading system to protect it from sudden excess pressures. 17 figs.
The Balanced Billing Cycle Vehicle Routing Problem
Groer, Christopher S; Golden, Bruce; Edward, Wasil
2009-01-01
Utility companies typically send their meter readers out each day of the billing cycle in order to determine each customer's usage for the period. Customer churn requires the utility company to periodically remove some customer locations from its meter-reading routes. On the other hand, the addition of new customers and locations requires the utility company to add new stops to the existing routes. A utility that does not adjust its meter-reading routes over time can find itself with inefficient routes and, subsequently, higher meter-reading costs. Furthermore, the utility can end up with certain billing days that require substantially larger meter-reading resources than others. However, remedying this problem is not as simple as it may initially seem. Certain regulatory and customer service considerations can prevent the utility from shifting a customer's billing day by more than a few days in either direction. Thus, the problem of reducing the meter-reading costs and balancing the workload can become quite difficult. We describe this Balanced Billing Cycle Vehicle Routing Problem in more detail and develop an algorithm for providing solutions to a slightly simplified version of the problem. Our algorithm uses a combination of heuristics and integer programming via a three-stage algorithm. We discuss the performance of our procedure on a real-world data set.
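The workload-balancing constraint described above can be sketched with a deliberately simplified greedy heuristic on invented data (the paper's actual method is a three-stage combination of heuristics and integer programming): customers move from a heavy billing day to a lighter one, but never more than a fixed number of days from their original day.

```python
def balance_days(assignments, shift=2):
    """Greedy sketch: repeatedly move one customer to a lighter billing
    day, never shifting any customer more than `shift` days from the day
    originally assigned to them."""
    orig = dict(assignments)
    days = sorted(set(assignments.values()))
    load = {d: sum(1 for v in assignments.values() if v == d) for d in days}
    moved = True
    while moved:
        moved = False
        for a in days:
            for b in days:
                if a == b or load[a] - load[b] <= 1:
                    continue
                # find a customer on day a allowed to move to day b
                cust = next((c for c, d in assignments.items()
                             if d == a and abs(orig[c] - b) <= shift), None)
                if cust is not None:
                    assignments[cust] = b
                    load[a] -= 1
                    load[b] += 1
                    moved = True
    return load

# 16 customers crowded onto day 1 of a 4-day cycle
assignments = {c: 1 for c in range(10)}
assignments.update({10: 2, 11: 2, 12: 3, 13: 3, 14: 4, 15: 4})
load = balance_days(assignments)
print(load)
```

The shift window can leave residual imbalance that no legal sequence of moves removes, which is precisely what makes the full problem hard enough to warrant integer programming.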
NASA LaRC Strain Gage Balance Design Concepts
NASA Technical Reports Server (NTRS)
Rhew, Ray D.
1999-01-01
The NASA Langley Research Center (LaRC) has been designing strain-gage balances for more than fifty years. These balances have been utilized in Langley's wind tunnels, which span a wide variety of aerodynamic test regimes, as well as in other ground-based test facilities and in space flight applications. As a result, the designs encompass a large array of sizes, loads, and environmental effects. Currently Langley has more than 300 balances available for its researchers. This paper will focus on the design concepts for internal sting-mounted strain-gage balances. However, these techniques can be applied to all force measurement design applications. Strain-gage balance concepts that have been developed over the years, including material selection, sting and model interfaces, measuring sections, fabrication, strain-gaging, and calibration, will be discussed.
Calibration Variable Selection and Natural Zero Determination for Semispan and Canard Balances
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert M.
2013-01-01
Independent calibration variables for the characterization of semispan and canard wind tunnel balances are discussed. It is shown that the variable selection for a semispan balance is determined by the location of the resultant normal and axial forces that act on the balance. These two forces are the first and second calibration variable. The pitching moment becomes the third calibration variable after the normal and axial forces are shifted to the pitch axis of the balance. Two geometric distances, i.e., the rolling and yawing moment arms, are the fourth and fifth calibration variable. They are traditionally substituted by corresponding moments to simplify the use of calibration data during a wind tunnel test. A canard balance is related to a semispan balance. It also only measures loads on one half of a lifting surface. However, the axial force and yawing moment are of no interest to users of a canard balance. Therefore, its calibration variable set is reduced to the normal force, pitching moment, and rolling moment. The combined load diagrams of the rolling and yawing moment for a semispan balance are discussed. They may be used to illustrate connections between the wind tunnel model geometry, the test section size, and the calibration load schedule. Then, methods are reviewed that may be used to obtain the natural zeros of a semispan or canard balance. In addition, characteristics of three semispan balance calibration rigs are discussed. Finally, basic requirements for a full characterization of a semispan balance are reviewed.
NASA Technical Reports Server (NTRS)
Parker, Peter A. (Inventor)
2003-01-01
A single vector calibration system is provided which facilitates the calibration of multi-axis load cells, including wind tunnel force balances. The single vector system provides the capability to calibrate a multi-axis load cell using a single directional load, for example loading solely in the gravitational direction. The system manipulates the load cell in three-dimensional space, while keeping the uni-directional calibration load aligned. The use of a single vector calibration load reduces the set-up time for the multi-axis load combinations needed to generate a complete calibration mathematical model. The system also reduces load application inaccuracies caused by the conventional requirement to generate multiple force vectors. The simplicity of the system reduces calibration time and cost, while simultaneously increasing calibration accuracy.
Development of the NTF-117S Semi-Span Balance
NASA Technical Reports Server (NTRS)
Lynn, Keith C.
2010-01-01
A new high-capacity semi-span force and moment balance has recently been developed for use at the National Transonic Facility at the NASA Langley Research Center. This new semi-span balance provides the NTF a new measurement capability that will support testing of semi-span test models at transonic high-lift testing regimes. Future testing utilizing this new balance capability will include active circulation control and propulsion simulation testing of semi-span transonic wing models. The NTF has recently implemented a new high-pressure air delivery station that will provide both high and low mass flow pressure lines that are routed out to the semi-span models via a set of high/low pressure bellows that are indirectly linked to the metric end of the NTF-117S balance. A new check-load stand is currently being developed to provide the NTF with an in-house capability that will allow for performing check-loads on the NTF-117S balance in order to determine the pressure tare effects on the overall performance of the balance. An experimental design is being developed that will allow for experimentally assessing the static pressure tare effects on the balance performance.
Para-GMRF: parallel algorithm for anomaly detection of hyperspectral image
NASA Astrophysics Data System (ADS)
Dong, Chao; Zhao, Huijie; Li, Na; Wang, Wei
2007-12-01
The hyperspectral imager is capable of collecting hundreds of images corresponding to different wavelength channels for the observed area simultaneously, which makes it possible to discriminate man-made objects from natural background. However, the price paid for this wealth of information is the enormous amount of data, usually hundreds of gigabytes per day. Turning the huge volume of data into useful information and knowledge in real time is critical for geoscientists. In this paper, the proposed parallel Gaussian-Markov random field (Para-GMRF) anomaly detection algorithm is an attempt to apply parallel computing technology to solve the problem. Based on the locality of the GMRF algorithm, we partition the 3-D hyperspectral image cube in the spatial domain and distribute data blocks to multiple computers for concurrent detection. Meanwhile, to achieve load balance, a work pool scheduler is designed for task assignment. The Para-GMRF algorithm is organized in a master-slave architecture, coded in the C programming language using the message passing interface (MPI) library, and tested on a Beowulf cluster. Experimental results show that the Para-GMRF algorithm successfully conquers the challenge and can be used in time-sensitive areas, such as environmental monitoring and battlefield reconnaissance.
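The work-pool pattern behind this scheduler can be sketched without MPI (a minimal thread-based stand-in with a placeholder per-block function; the actual system is C/MPI in master-slave form): blocks sit in a shared queue and each worker pulls the next one as soon as it finishes, which is what yields the load balance.

```python
from concurrent.futures import ThreadPoolExecutor

def detect(block):
    """Placeholder for running the GMRF anomaly test on one spatial block."""
    return block * block

blocks = list(range(32))   # image cube partitioned into 32 spatial blocks
with ThreadPoolExecutor(max_workers=4) as pool:
    # workers pull blocks from the shared pool as they become free,
    # so faster workers naturally process more blocks
    results = list(pool.map(detect, blocks))

print(len(results))
```

Unlike a static partition of 8 blocks per worker, this dynamic assignment keeps all workers busy even when some blocks take much longer to process than others.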
Maintaining an Environmental Balance
ERIC Educational Resources Information Center
Environmental Science and Technology, 1976
1976-01-01
A recent conference of the National Environmental Development Association focused on the concepts of environment, energy and economy and underscored the necessity for balancing the critical needs embodied in these issues. Topics discussed included: nuclear energy and wastes, water pollution control, federal regulations, environmental technology…
ERIC Educational Resources Information Center
Yahnke, Sally; And Others
The purpose of this monograph is to present a series of activities designed to teach strategies needed for effectively managing the multiple responsibilities of family and work. The guide contains 11 lesson plans dealing with balancing family and work that can be used in any home economics class, from middle school through college. The lesson…
ERIC Educational Resources Information Center
Lewis, Tamika; Mobley, Mary; Huttenlock, Daniel
2013-01-01
It's the season for the job hunt, whether one is looking for their first job or taking the next step along their career path. This article presents first-person accounts to see how teachers balance the rewards and challenges of working in different types of schools. Tamica Lewis, a third-grade teacher, states that faculty at her school is…
ERIC Educational Resources Information Center
Gordon, Milton A.; Gordon, Margaret F.
1996-01-01
New college presidents are inundated with requests for their time, and their private life is often sacrificed. Each administrator must decide what is the appropriate balance among various aspects of his/her position. Physical separation of public and private lives is essential, and the role of the spouse, who may have other professional…
Single-Vector Calibration of Wind-Tunnel Force Balances
NASA Technical Reports Server (NTRS)
Parker, P. A.; DeLoach, R.
2003-01-01
An improved method of calibrating a wind-tunnel force balance involves the use of a unique load application system integrated with formal experimental design methodology. The Single-Vector Force Balance Calibration System (SVS) overcomes the productivity and accuracy limitations of prior calibration methods. A force balance is a complex structural spring element instrumented with strain gauges for measuring three orthogonal components of aerodynamic force (normal, axial, and side force) and three orthogonal components of aerodynamic torque (rolling, pitching, and yawing moments). Force balances remain the state-of-the-art instruments that provide these measurements on a scale model of an aircraft during wind tunnel testing. Ideally, each electrical channel of the balance would respond only to its respective component of load, and it would have no response to other components of load. This is not entirely possible even though balance designs are optimized to minimize these undesirable interaction effects. Ultimately, a calibration experiment is performed to obtain the necessary data to generate a mathematical model and determine the force measurement accuracy. In order to set the independent variables of applied load for the calibration experiment, a high-precision mechanical system is required. Manual deadweight systems have been in use at Langley Research Center (LaRC) since the 1940s. These simple methodologies produce high-confidence results, but the process is mechanically complex and labor-intensive, requiring three to four weeks to complete. Over the past decade, automated balance calibration systems have been developed. In general, these systems were designed to automate the tedious manual calibration process, resulting in an even more complex system that degrades load application quality. The current calibration approach relies on a one-factor-at-a-time (OFAT) methodology, where each independent variable is
Power system very short-term load prediction
Trudnowski, D.J.; Johnson, J.M.; Whitney, P.
1997-02-01
A fundamental objective of a power-system operating and control scheme is to maintain a match between the system's overall real-power load and generation. To accurately maintain this match, modern energy management systems require estimates of the future total system load. Several strategies and tools are available for estimating system load. Nearly all of these estimate the future load in 1-hour steps over several hours (or time frames very close to this). While hourly load estimates are very useful for many operation and control decisions, more accurate estimates at closer intervals would also be valuable. This is especially true for emerging Area Generation Control (AGC) strategies such as look-ahead AGC. For these short-term estimation applications, future load estimates out to several minutes at intervals of 1 to 5 minutes are required. The currently emerging operation and control strategies being developed by the BPA are dependent on accurate very short-term load estimates. To meet this need, the BPA commissioned the Pacific Northwest National Laboratory (PNNL) and Montana Tech (an affiliate of the University of Montana) to develop an accurate load prediction algorithm and computer codes that automatically update and can reliably perform in a closed-loop controller for the BPA system. The requirements include accurate load estimation in 5-minute steps out to 2 hours. This report presents the results of this effort and includes: a methodology and algorithms for short-term load prediction that incorporates information from a general hourly forecaster; specific algorithm parameters for implementing the predictor in the BPA system; performance and sensitivity studies of the algorithms on BPA-supplied data; an algorithm for filtering power system load samples as a precursor to inputting into the predictor; and FORTRAN 77 subroutines for implementing the algorithms.
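A minimal illustration of very short-term load prediction in 5-minute steps (Holt's linear exponential smoothing on an invented ramping-load series; this is a generic textbook method, not the BPA/PNNL algorithm, which also incorporates hourly-forecaster information):

```python
def holt_forecast(series, alpha=0.5, beta=0.3, steps=4):
    """Holt's linear exponential smoothing over 5-minute load samples,
    extrapolated `steps` intervals (here 20 minutes) ahead."""
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        prev = level
        level = alpha * x + (1 - alpha) * (level + trend)   # smoothed level
        trend = beta * (level - prev) + (1 - beta) * trend  # smoothed trend
    return [level + (k + 1) * trend for k in range(steps)]

# two hours of 5-minute samples of a steadily ramping load, in MW
load = [1000 + 5 * t for t in range(24)]
print(holt_forecast(load))   # next four 5-minute intervals
```

A closed-loop predictor of the kind described in the report would refit on every new 5-minute sample and extend the horizon to the required 2 hours (steps=24).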
On the relationship between wind profiles and the STS ascent structural loads
NASA Technical Reports Server (NTRS)
Smith, Orvel E.; Adelfang, Stanley I.; Whitehead, Douglas S.
1989-01-01
The response of STS ascent structural load indicators to the wind profile is analyzed. The load indicator values versus Mach numbers are calculated with algorithms using trajectory information. The ascent load minimum margin concept is used to show that the detailed wind profile structure measured by the Jimsphere wind system is not needed to assess the STS rigid body structural wind loads.
NASA Astrophysics Data System (ADS)
Langer, Nitin; Bhat, Abdul Hamid; Agarwal, Pramod
2014-01-01
This paper presents a modulation strategy for self-balancing of the capacitor voltages of a three-phase neutral-point-clamped bi-directional rectifier (without a feedback controller or sensors). It is identified that regions within a sector fall into two categories: (a) one small vector among the three selected vectors and (b) two small vectors among the three selected vectors. For category (a), the positive and negative commutation states of the small vector are implemented with equal duty cycles, but for category (b) the positive and negative commutation states of the small vectors are implemented with unequal duty cycles. Based on this observation, the switching sequence is modified to remove this discrepancy: in the optimized space-vector switching sequences, the negative and positive commutation states of both small vectors are implemented with equal duty cycles during each sampling period, resulting in self-balancing of the DC-bus capacitors with much-reduced ripple under steady-state and dynamic load conditions, in both rectification and inversion modes of operation. The converter exhibits excellent performance in terms of other critical parameters, such as unity input power factor, low input current THD, minimum possible switching losses, and a reduced-ripple, well-regulated DC voltage. The proposed control algorithm is tested through exhaustive simulation of the converter using MATLAB Simulink software.
Pedometer and Human Energy Balance Applications for Science Instruction
ERIC Educational Resources Information Center
Rye, James A.; Smolski, Stefan
2007-01-01
Teachers can use pedometers to facilitate inquiry learning and show students the need for mathematics in scientific investigation. The authors conducted activities with secondary students that investigated intake and expenditure components of the energy balance algorithm, which led to inquiries about pedometers and related data. By investigating…
Ultra-fast fluence optimization for beam angle selection algorithms
NASA Astrophysics Data System (ADS)
Bangert, M.; Ziegenhein, P.; Oelfke, U.
2014-03-01
Beam angle selection (BAS) including fluence optimization (FO) is among the most extensive computational tasks in radiotherapy. Precomputed dose influence data (DID) of all considered beam orientations (up to 100 GB for complex cases) has to be handled in the main memory and repeated FOs are required for different beam ensembles. In this paper, the authors describe concepts accelerating FO for BAS algorithms using off-the-shelf multiprocessor workstations. The FO runtime is not dominated by the arithmetic load of the CPUs but by the transportation of DID from the RAM to the CPUs. On multiprocessor workstations, however, the speed of data transportation from the main memory to the CPUs is non-uniform across the RAM; every CPU has a dedicated memory location (node) with minimum access time. We apply a thread node binding strategy to ensure that CPUs only access DID from their preferred node. Ideal load balancing for arbitrary beam ensembles is guaranteed by distributing the DID of every candidate beam equally to all nodes. Furthermore we use a custom sorting scheme of the DID to minimize the overall data transportation. The framework is implemented on an AMD Opteron workstation. One FO iteration comprising dose, objective function, and gradient calculation takes between 0.010 s (9 beams, skull, 0.23 GB DID) and 0.070 s (9 beams, abdomen, 1.50 GB DID). Our overall FO time is < 1 s for small cases, larger cases take ~ 4 s. BAS runs including FOs for 1000 different beam ensembles take ~ 15-70 min, depending on the treatment site. This enables an efficient clinical evaluation of different BAS algorithms.
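The core placement idea — distribute the dose influence data (DID) of every candidate beam equally across all memory nodes so that any beam ensemble chosen later loads every node evenly — can be sketched as follows. The data layout and function name are illustrative assumptions, not the authors' implementation, which also binds threads to their preferred NUMA node.

```python
def distribute_did(beam_chunks, n_nodes):
    """Split the DID chunks of *every* candidate beam round-robin across all
    memory nodes. Because each beam's data is spread evenly, any later beam
    ensemble produces the same per-node transport load."""
    placement = {node: [] for node in range(n_nodes)}
    for beam_id, chunks in beam_chunks.items():
        for i, chunk in enumerate(chunks):
            placement[i % n_nodes].append((beam_id, chunk))
    return placement
```

Each node then serves only its local share of the DID to its bound threads, which is the paper's remedy for non-uniform RAM-to-CPU transport speed.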
Image segmentation using an improved differential algorithm
NASA Astrophysics Data System (ADS)
Gao, Hao; Shi, Yujiao; Wu, Dongmei
2014-10-01
Among all the existing segmentation techniques, the thresholding technique is one of the most popular due to its simplicity, robustness, and accuracy (e.g. the maximum entropy method, Otsu's method, and K-means clustering). However, the computation time of these algorithms grows exponentially with the number of thresholds due to their exhaustive searching strategy. As a population-based optimization algorithm, the differential algorithm (DE) uses a population of potential solutions and decision-making processes. It has shown considerable success in solving complex optimization problems within a reasonable time limit. Thus, applying this method to the segmentation problem is a good choice due to its fast computational ability. In this paper, we first propose a new differential algorithm with a balance strategy, which seeks a balance between the exploration of new regions and the exploitation of the already sampled regions. Then, we apply the new DE to the traditional Otsu's method to shorten the computation time. Experimental results of the new algorithm on a variety of images show that, compared with the EA-based thresholding methods, the proposed DE algorithm achieves more effective and efficient results. It also shortens the computation time of the traditional Otsu method.
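For reference, a minimal classic DE/rand/1/bin optimizer is sketched below. It does not reproduce the paper's balance strategy; in the standard scheme, the mutation factor F and crossover rate CR are the knobs that trade exploration against exploitation, which is the tension the authors' strategy addresses.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.5, CR=0.9,
                           generations=100, seed=0):
    """Minimal DE/rand/1/bin: mutate with the scaled difference of two random
    members, crossover with the target, keep the trial if it is no worse."""
    rng = random.Random(seed)
    lo, hi = bounds
    dim = len(lo)
    pop = [[rng.uniform(lo[d], hi[d]) for d in range(dim)]
           for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # ensure at least one mutated gene
            trial = []
            for d in range(dim):
                if rng.random() < CR or d == jrand:
                    v = pop[a][d] + F * (pop[b][d] - pop[c][d])
                    trial.append(min(max(v, lo[d]), hi[d]))
                else:
                    trial.append(pop[i][d])
            fc = f(trial)
            if fc <= cost[i]:
                pop[i], cost[i] = trial, fc
    best = min(range(pop_size), key=lambda i: cost[i])
    return pop[best], cost[best]
```

Applied to Otsu's criterion, `f` would score a vector of candidate thresholds by between-class variance, replacing the exhaustive search whose cost grows exponentially with the number of thresholds.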
Kral, Ulrich; Lin, Chih-Yi; Kellner, Katharina; Ma, Hwong-wen; Brunner, Paul H
2014-01-01
Material management faces a dual challenge: on the one hand satisfying large and increasing demands for goods and on the other hand accommodating wastes and emissions in sinks. Hence, the characterization of material flows and stocks is relevant for both improving resource efficiency and environmental protection. This article focuses on the urban scale, a dimension rarely investigated in past metal flow studies. We compare the copper (Cu) metabolism of two cities in different economic states, namely, Vienna (Europe) and Taipei (Asia). Substance flow analysis is used to calculate urban Cu balances in a comprehensive and transparent form. The main difference between Cu in the two cities appears to be the stock: Vienna seems close to saturation with 180 kilograms per capita (kg/cap) and a growth rate of 2% per year. In contrast, the Taipei stock of 30 kg/cap grows rapidly by 26% per year. Even though most Cu is recycled in both cities, bottom ash from municipal solid waste incineration represents an unused Cu potential accounting for 1% to 5% of annual demand. Nonpoint emissions are predominant; up to 50% of the loadings into the sewer system are from nonpoint sources. The results of this research are instrumental for the design of the Cu metabolism in each city. The outcomes serve as a base for identification and recovery of recyclables as well as for directing nonrecyclables to appropriate sinks, avoiding sensitive environmental pathways. The methodology applied is well suited for city benchmarking if sufficient data are available. PMID:25866460
A force transducer from a junk electronic balance
NASA Astrophysics Data System (ADS)
Munguía Aguilar, Horacio; Armenta Aguilar, Francisco
2009-11-01
It is shown how the load cell from a junk electronic balance can be used as a force transducer for physics experiments. Recovering this device is not only an inexpensive way of getting a valuable laboratory tool but also very useful didactic work on electronic instrumentation. Some experiments on mechanics with this transducer are possible after a careful calibration and proper conditioning.
A Force Transducer from a Junk Electronic Balance
ERIC Educational Resources Information Center
Aguilar, Horacio Munguia; Aguilar, Francisco Armenta
2009-01-01
It is shown how the load cell from a junk electronic balance can be used as a force transducer for physics experiments. Recovering this device is not only an inexpensive way of getting a valuable laboratory tool but also very useful didactic work on electronic instrumentation. Some experiments on mechanics with this transducer are possible after a…
Automated Loads Analysis System (ATLAS)
NASA Technical Reports Server (NTRS)
Gardner, Stephen; Frere, Scot; O’Reilly, Patrick
2013-01-01
ATLAS is a generalized solution that can be used for launch vehicles. ATLAS is used to produce modal transient analysis and quasi-static analysis results (i.e., accelerations, displacements, and forces) for the payload math models on a specific Space Transportation System (STS) flight using the shuttle math model and associated forcing functions. This innovation solves the problem of coupling payload math models into a shuttle math model. It performs a transient loads analysis simulating liftoff, landing, and all flight events between liftoff and landing. ATLAS utilizes efficient and numerically stable algorithms available in MSC/NASTRAN.
Balancing innovation and evidence.
Pilcher, Jobeth W
2015-01-01
Nurse educators are encouraged to use evidence to guide their teaching strategies. However, evidence is not always available. How can educators make decisions regarding strategies when data are limited or absent? Where do innovation and creativity fit? How can innovation be balanced with evidence? This article provides a discussion regarding other sources of evidence, such as extrapolations, theories and principles, and collective expertise. Readers are encouraged to review the options and then analyze how they might be applied to innovation in education.
Heat Load Estimator for Smoothing Pulsed Heat Loads on Supercritical Helium Loops
NASA Astrophysics Data System (ADS)
Hoa, C.; Lagier, B.; Rousset, B.; Bonnay, P.; Michel, F.
Superconducting magnets for fusion are subjected to large variations in heat load due to the cyclic operation of tokamaks. The cryogenic system shall operate smoothly to extract the pulsed heat loads by circulating supercritical helium into the coils and structures. However, neither the total heat load nor its temporal variation is known before the plasma scenario starts. A real-time heat load estimator is of interest for the process control of the cryogenic system, in order to anticipate the arrival of pulsed heat loads at the refrigerator and, finally, to optimize the operation of the cryogenic system. The large variation of the thermal loads affects the physical parameters of the supercritical helium loop (pressure, temperature, mass flow), so those signals can be used to calculate instantaneously the loads deposited into the loop. This article addresses the methodology and algorithm for estimating the heat load deposition before it reaches the refrigerator. The CEA-patented process control has been implemented in a Programmable Logic Controller (PLC) and has been successfully validated on the HELIOS test facility at CEA Grenoble. This heat load estimator is complementary to pulsed-load smoothing strategies, providing an estimate of the optimized refrigeration power. It can also effectively improve process control during transients between different operating modes by adjusting the refrigeration power to the need. In this way, the heat load estimator contributes to the safe operation of the cryogenic system.
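The simplest version of such an estimator is an energy balance over the loop from the measured signals. The sketch below uses only a steady enthalpy balance with a constant specific heat; the actual CEA estimator also handles pressure transients and stored energy, and the cp value here is a rough illustrative figure for supercritical helium, not a property-table value.

```python
def heat_load_estimate(m_dot_kg_s, t_in_k, t_out_k, cp_j_kg_k=5.2e3):
    """Crude steady-state energy balance for a supercritical-helium loop:
    deposited heat ~ mass flow times enthalpy rise, approximated here as
    cp * (T_out - T_in). Returns an estimated heat load in watts."""
    return m_dot_kg_s * cp_j_kg_k * (t_out_k - t_in_k)
```

Fed with the loop's live mass-flow and temperature signals, this kind of estimate is what lets the refrigerator anticipate a pulse before the warm helium actually arrives.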
Improving Large-scale Storage System Performance via Topology-aware and Balanced Data Placement
Wang, Feiyi; Oral, H Sarp; Vazhkudai, Sudharshan S
2014-01-01
With the advent of big data, the I/O subsystems of large-scale compute clusters are becoming a center of focus, with more applications putting greater demands on end-to-end I/O performance. These subsystems are often complex in design. They comprise multiple hardware and software layers to cope with the increasing capacity, capability and scalability requirements of data-intensive applications. The shared nature of storage resources and the intrinsic interactions across these layers make realizing user-level, end-to-end performance gains a great challenge. We propose a topology-aware resource load balancing strategy to improve per-application I/O performance. We demonstrate the effectiveness of our algorithm on an extreme-scale compute cluster, Titan, at the Oak Ridge Leadership Computing Facility (OLCF). Our experiments with both synthetic benchmarks and a real-world application show that, even under congestion, our proposed algorithm can improve large-scale application I/O performance significantly, resulting in both reduced application run times and higher-resolution simulation runs.
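A toy version of topology-aware balanced placement is sketched below: storage targets are chosen by load, but no topology group (think of a shared router or switch path) is reused until every group has been tried once. The data model and tie-breaking rule are assumptions for illustration, not the paper's algorithm.

```python
def place_stripes(targets, n_stripes):
    """Pick n_stripes storage targets from targets = {target_id: (group, load)}.
    Prefer lightly loaded targets, but penalize topology groups already used,
    so stripes spread across independent paths. Mutates loads as it places."""
    chosen, used_groups = [], set()
    for _ in range(n_stripes):
        ranked = sorted(targets.items(),
                        key=lambda kv: (kv[1][0] in used_groups, kv[1][1]))
        tid, (group, load) = next((t, gl) for t, gl in ranked
                                  if t not in chosen)
        chosen.append(tid)
        used_groups.add(group)
        targets[tid] = (group, load + 1)  # account for the new stripe
    return chosen
```

Balancing on load alone would happily put two stripes behind the same congested router; the group penalty is the "topology-aware" part.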
NASA Technical Reports Server (NTRS)
Johnson, Steven D.; Byers, Jerry W.; Martin, James A.
2012-01-01
A method has been developed for continuous cell voltage balancing for rechargeable batteries (e.g. lithium ion batteries). A resistor divider chain is provided that generates a set of voltages representing the ideal cell voltage (the voltage of each cell should be as if the cells were perfectly balanced). An operational amplifier circuit with an added current buffer stage generates the ideal voltage with a very high degree of accuracy, using the concept of negative feedback. The ideal voltages are each connected to the corresponding cell through a current- limiting resistance. Over time, having the cell connected to the ideal voltage provides a balancing current that moves the cell voltage very close to that ideal level. In effect, it adjusts the current of each cell during charging, discharging, and standby periods to force the cell voltages to be equal to the ideal voltages generated by the resistor divider. The device also includes solid-state switches that disconnect the circuit from the battery so that it will not discharge the battery during storage. This solution requires relatively few parts and is, therefore, of lower cost and of increased reliability due to the fewer failure modes. Additionally, this design uses very little power. A preliminary model predicts a power usage of 0.18 W for an 8-cell battery. This approach is applicable to a wide range of battery capacities and voltages.
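The balancing action described above is just Ohm's law across the current-limiting resistor between each cell and its ideal reference. The sketch below computes those currents, taking the ideal voltage as the pack average (what each cell would sit at if perfectly balanced); the function name and resistance value are illustrative assumptions.

```python
def balancing_currents(cell_voltages, r_limit_ohm=100.0):
    """Per-cell balancing current for the passive reference-voltage scheme:
    each cell is tied through r_limit_ohm to an 'ideal' voltage equal to the
    pack average. Positive current charges a low cell; negative current
    discharges a high one. Returns amps, one entry per cell."""
    v_ideal = sum(cell_voltages) / len(cell_voltages)
    return [(v_ideal - v) / r_limit_ohm for v in cell_voltages]
```

Note that the currents sum to zero: the circuit only redistributes charge between cells, nudging all of them toward the same voltage over time.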
Ross, C.P.; Beale, P.L.
1994-01-01
The ability to successfully predict lithology and fluid content from reflection seismic records using AVO techniques is contingent upon accurate pre-analysis conditioning of the seismic data. However, all too often, residual amplitude effects remain after the many offset-dependent processing steps are completed. Residual amplitude effects often represent a significant error when compared to the amplitude variation with offset (AVO) response that the authors are attempting to quantify. They propose a model-based, offset-dependent amplitude balancing method that attempts to correct for these residuals and other errors due to suboptimal processing. Seismic offset balancing attempts to quantify the relationship between the offset response of background seismic reflections and corresponding theoretical predictions for average lithologic interfaces thought to cause these background reflections. It is assumed that any deviation from the theoretical response is a result of residual processing phenomena and/or suboptimal processing, and a simple offset-dependent scaling function is designed to correct for these differences. This function can then be applied to seismic data over both prospective and nonprospective zones within an area where the theoretical values are appropriate and the seismic characteristics are consistent. A conservative application of the above procedure results in an AVO response over both gas sands and wet sands that is much closer to theoretically expected values. A case history from the Gulf of Mexico Flexure Trend is presented as an example to demonstrate the offset balancing technique.
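The essence of such an offset-dependent scaling function can be sketched in a few lines: one scalar per offset, equal to the theoretical background response divided by the mean observed background amplitude at that offset. This simple ratio is an assumption standing in for the authors' fitted function, and the flat-list data layout is likewise illustrative.

```python
def offset_balance(gather, model_response):
    """Model-based offset balancing sketch. `gather` is a list of traces,
    one amplitude per offset; `model_response` gives the theoretical
    background amplitude at each offset. Returns (balanced gather, scalars),
    where scalars[j] rescales offset j so the background matches theory."""
    n_off = len(model_response)
    observed = [sum(abs(tr[j]) for tr in gather) / len(gather)
                for j in range(n_off)]
    scalars = [m / o for m, o in zip(model_response, observed)]
    balanced = [[tr[j] * scalars[j] for j in range(n_off)] for tr in gather]
    return balanced, scalars
```

Because the scalars are derived from background reflections only, the same correction can then be applied over prospective zones without biasing the anomaly being measured.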
Masdeu, Joseph C
2016-01-01
This chapter focuses on one of the most common types of neurologic disorders: altered walking. Walking impairment often reflects disease of the neurologic structures mediating gait, balance or, most often, both. These structures are distributed along the neuraxis. For this reason, this chapter is introduced by a brief description of the neurobiologic underpinning of walking, stressing information that is critical for imaging, namely, the anatomic representation of gait and balance mechanisms. This background is essential not only in order to direct the relevant imaging tools to the regions more likely to be affected but also to interpret correctly imaging findings that may not be related to the walking deficit object of clinical study. The chapter closes with a discussion on how to image some of the most frequent etiologies causing gait or balance impairment. However, it focuses on syndromes not already discussed in other chapters of this volume, such as Parkinson's disease and other movement disorders, already discussed in Chapter 48, or cerebellar ataxia, in Chapter 23, in the previous volume. As regards vascular disease, the spastic hemiplegia most characteristic of brain disease needs little discussion, while the less well-understood effects of microvascular disease are extensively reviewed here, together with the imaging approach. PMID:27430451
Experimental performance evaluation of human balance control models.
Huryn, Thomas P; Blouin, Jean-Sébastien; Croft, Elizabeth A; Koehle, Michael S; Van der Loos, H F Machiel
2014-11-01
Two factors commonly differentiate proposed balance control models for quiet human standing: 1) intermittent muscle activation and 2) prediction that overcomes sensorimotor time delays. In this experiment we assessed the viability and performance of intermittent activation and prediction in a balance control loop that included the neuromuscular dynamics of human calf muscles. Muscles were driven by functional electrical stimulation (FES). The performance of the different controllers was compared based on sway patterns and mechanical effort required to balance a human body load on a robotic balance simulator. All evaluated controllers balanced subjects with and without a neural block applied to their common peroneal and tibial nerves, showing that the models can produce stable balance in the absence of natural activation. Intermittent activation required less stimulation energy than continuous control but predisposed the system to increased sway. Relative to intermittent control, continuous control reproduced the sway size of natural standing better. Prediction was not necessary for stable balance control but did improve stability when control was intermittent, suggesting a possible benefit of a predictor for intermittent activation. Further application of intermittent activation and predictive control models may drive prolonged, stable FES-controlled standing that improves quality of life for people with balance impairments. PMID:24771586
Carson, N.J. Jr.; Ostrander, H.W.; Munter, C.N.
1964-03-01
A weighing device having a load-supporting vertical shaft buoyed up by mutually repellant magnets is described. The shaft is aligned by an air bearing and has an air gage to sense vertical displacement caused by weights placed on the top end of the shaft. (AEC)
Wind Tunnel Force Balance Calibration Study - Interim Results
NASA Technical Reports Server (NTRS)
Rhew, Ray D.
2012-01-01
Wind tunnel force balance calibration is performed utilizing a variety of different methods and does not have a directly traceable standard such as those used for most calibration practices (weights and voltmeters). These different calibration methods and practices include, but are not limited to, the loading schedule, the load application hardware, manual and automatic systems, and re-leveling and non-re-leveling. A study of the balance calibration techniques used by NASA was undertaken to develop metrics for reviewing and comparing results using sample calibrations. The study also includes balances of different designs, single- and multi-piece. The calibration systems include manual and automatic systems provided by NASA and its vendors. The results to date will be presented along with the techniques for comparing the results. In addition, future planned calibrations and investigations based on the results will be provided.
Balanced Sparse Model for Tight Frames in Compressed Sensing Magnetic Resonance Imaging
Liu, Yunsong; Cai, Jian-Feng; Zhan, Zhifang; Guo, Di; Ye, Jing; Chen, Zhong; Qu, Xiaobo
2015-01-01
Compressed sensing has been shown to be promising for accelerating magnetic resonance imaging. In this new technology, magnetic resonance images are usually reconstructed by enforcing their sparsity in sparse image reconstruction models, including both synthesis and analysis models. The synthesis model assumes that an image is a sparse combination of atom signals, while the analysis model assumes that an image is sparse after the application of an analysis operator. The balanced model is a new sparse model that bridges the analysis and synthesis models by introducing a penalty term on the distance of the frame coefficients to the range of the analysis operator. In this paper, we study the performance of the balanced model in tight-frame-based compressed sensing magnetic resonance imaging and propose a new efficient numerical algorithm to solve the optimization problem. By tuning the balancing parameter, the new model achieves the solutions of all three models. It is found that the balanced model has a performance comparable to the analysis model. Besides, both of them achieve better results than the synthesis model no matter what value the balancing parameter takes. Experiments show that our proposed numerical algorithm, the constrained split augmented Lagrangian shrinkage algorithm for the balanced model (C-SALSA-B), converges faster than the previously proposed algorithms: the accelerated proximal gradient algorithm (APG) and the alternating direction method of multipliers for the balanced model (ADMM-B). PMID:25849209
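In the notation commonly used for tight frames (a sketch in generic symbols, not copied from the paper: $A$ the undersampled measurement operator, $y$ the data, $W$ the analysis operator of a tight frame with $W^{*}W = I$), the three models can be written as

```latex
\begin{aligned}
\text{synthesis:} \quad & \min_{\alpha}\ \tfrac{1}{2}\|AW^{*}\alpha - y\|_2^2
    + \lambda\|\alpha\|_1, \\
\text{analysis:}  \quad & \min_{x}\ \tfrac{1}{2}\|Ax - y\|_2^2
    + \lambda\|Wx\|_1, \\
\text{balanced:}  \quad & \min_{\alpha}\ \tfrac{1}{2}\|AW^{*}\alpha - y\|_2^2
    + \tfrac{\beta}{2}\,\|(I - WW^{*})\alpha\|_2^2
    + \lambda\|\alpha\|_1,
\end{aligned}
```

so that $\beta = 0$ recovers the synthesis model, while $\beta \to \infty$ forces $\alpha$ into the range of $W$ and recovers the analysis model; this is the sense in which tuning the balancing parameter spans all three models.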
Novel biomedical tetrahedral mesh methods: algorithms and applications
NASA Astrophysics Data System (ADS)
Yu, Xiao; Jin, Yanfeng; Chen, Weitao; Huang, Pengfei; Gu, Lixu
2007-12-01
Tetrahedral mesh generation algorithms, as a prerequisite of many soft-tissue simulation methods, are very important in virtual surgery programs because of their real-time requirements. Aiming to speed up the computation in the simulation, we propose a revised Delaunay algorithm which strikes a good balance between the quality of the tetrahedra, boundary preservation, and time complexity, through several improved methods. Another mesh algorithm, named Space-Disassembling, is also presented in this paper, and a comparison of Space-Disassembling, the traditional Delaunay algorithm, and the revised Delaunay algorithm is performed on clinical soft-tissue simulation projects, including craniofacial plastic surgery and breast reconstruction plastic surgery.
Selection of hydronic balancing valves
Ahlgren, R.C.E.
1998-10-01
This paper describes the selection and setting of balance valves which, when properly applied in the design of a hydronic system, will result in a balanced system, thus preventing overpumping without excessive energy costs.
Rotating Balances Used for Fluid Pump Testing
NASA Technical Reports Server (NTRS)
Skelley, Stephen; Mulder, Andrew
2014-01-01
Marshall Space Flight Center has developed and demonstrated two direct read force and moment balances for sensing and resolving the hydrodynamic loads on rotating fluid machinery. These rotating balances consist of a series of stainless steel flexures instrumented with semiconductor type, unidirectional strain gauges arranged into six bridges, then sealed and waterproofed, for use fully submerged in degassed water at rotational speeds up to six thousand revolutions per minute. The balances are used to measure the forces and moments due to the onset and presence of cavitation or other hydrodynamic phenomena on subscale replicas of rocket engine turbomachinery, principally axial pumps (inducers) designed specifically to operate in a cavitating environment. The balances are inserted into the drive assembly with power to and signal from the sensors routed through the drive shaft and out through an air-cooled twenty-channel slip ring. High frequency data - balance forces and moments as well as extensive, flush-mounted pressures around the rotating component periphery - are acquired via a high-speed analog to digital data acquisition system while the test rig conditions are varied continuously. The data acquisition and correction process is described, including the in-situ verifications that are performed to quantify and correct for known system effects such as mechanical imbalance, "added mass," buoyancy, mechanical resonance, and electrical bias. Examples of four types of cavitation oscillations for two typical inducers are described in the laboratory (pressure) and rotating (force) frames: 1) attached, symmetric cavitation, 2) rotating cavitation, 3) attached, asymmetric cavitation, and 4) cavitation surge. Rotating and asymmetric cavitation generate a corresponding unbalanced radial force on the rotating assembly while cavitation surge generates an axial force. Attached, symmetric cavitation induces no measurable force. The frequency of the forces can be determined a
Selenium mass balance in the Great Salt Lake, Utah
Diaz, X.; Johnson, W.P.; Naftz, D.L.
2009-01-01
A mass balance for Se in the south arm of the Great Salt Lake was developed from September 2006 to August 2007 of monitoring for Se loads and removal flows. The combined removal flows (sedimentation and volatilization) totaled to a geometric mean value of 2079 kg Se/yr, with the estimated low value being 1255 kg Se/yr, and an estimated high value of 3143 kg Se/yr at the 68% confidence level. The total (particulates + dissolved) loads (via runoff) were about 1560 kg Se/yr, for which the error is expected to be ±15% for the measured loads. Comparison of volatilization to sedimentation flux demonstrates that volatilization rather than sedimentation is likely the major mechanism of selenium removal from the Great Salt Lake. The measured loss flows balance (within the range of uncertainties), and possibly surpass, the measured annual loads. Concentration histories were modeled using a simple mass balance, which indicated that no significant change in Se concentration was expected during the period of study. Surprisingly, the measured total Se concentration increased during the period of the study, indicating that the removal processes operate at their low estimated rates, and/or there are unmeasured selenium loads entering the lake. The selenium concentration trajectories were compared to those of other trace metals to assess the significance of selenium concentration trends. © 2008 Elsevier B.V.
Assessment of postural balance function.
Kostiukow, Anna; Rostkowska, Elzbieta; Samborski, Włodzimierz
2009-01-01
Postural balance is defined as the ability to stand unassisted without falling. Examination of the patient's postural balance function is a difficult diagnostic task. Most of the balance tests used in medicine provide incomplete information on this coordination ability of the human body. The aim of this study was to review methods of assessment of the patient's postural balance function, including various tests used in medical diagnostics centers. PMID:20698188
More on Chemical Reaction Balancing.
ERIC Educational Resources Information Center
Swinehart, D. F.
1985-01-01
A previous article stated that only the matrix method was powerful enough to balance a particular chemical equation. Shows how this equation can be balanced without using the matrix method. The approach taken involves writing partial mathematical reactions and redox half-reactions, and combining them to yield the final balanced reaction. (JN)
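The matrix method referred to above can be sketched concretely: write each species as a column of element counts (products negated), and read the balancing coefficients off the nullspace of that matrix. The sketch below assumes a single independent reaction (one-dimensional nullspace) and uses exact rational arithmetic.

```python
from fractions import Fraction
from functools import reduce
from math import gcd

def balance(reaction):
    """Balance a chemical equation by the matrix method. `reaction` is a
    list of species, each a dict of element counts, with products negated,
    e.g. H2 + O2 -> H2O is [{"H": 2}, {"O": 2}, {"H": -2, "O": -1}].
    Returns the smallest positive integer coefficients."""
    elements = sorted({e for sp in reaction for e in sp})
    n = len(reaction)
    m = [[Fraction(sp.get(e, 0)) for sp in reaction] for e in elements]
    # Gauss-Jordan elimination to reduced row-echelon form.
    pivot_cols, r = [], 0
    for c in range(n):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        pivot_cols.append(c)
        r += 1
    # One free column spans the nullspace; set its coefficient to 1.
    free = [c for c in range(n) if c not in pivot_cols][0]
    coeffs = [Fraction(0)] * n
    coeffs[free] = Fraction(1)
    for row, c in zip(m, pivot_cols):
        coeffs[c] = -row[free]
    # Clear denominators to get the smallest integer coefficients.
    lcm = reduce(lambda a, b: a * b // gcd(a, b),
                 [f.denominator for f in coeffs])
    ints = [int(f * lcm) for f in coeffs]
    return ints if ints[free] > 0 else [-i for i in ints]
```

For 2H2 + O2 → 2H2O this returns `[2, 1, 2]`; the article's point is that many equations also yield to simpler half-reaction bookkeeping without this machinery.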
A MASS BALANCE OF SURFACE WATER GENOTOXICITY IN PROVIDENCE RIVER (RHODE ISLAND USA)
White and Rasmussen (Mutation Res. 410:223-236) used a mass balance approach to demonstrate that over 85% of the total genotoxic loading to the St. Lawrence River at Montreal is non-industrial. To validate the mass balance approach and investigate the sources of genotoxins in sur...
Fontana, W.
1990-12-13
In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
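The iterated map on a fixed-size ensemble can be sketched in miniature. This toy stands in for the lambda-calculus model: objects are plain strings, and a trivial "apply" rule (prepend the actor's first character) replaces genuine function composition, so only the fixed-population dynamics of the gas is illustrated, not its semantics.

```python
import random

def turing_gas(population, steps=100, max_len=8, seed=0):
    """Toy 'function gas': repeatedly pick two members at random, let one
    act on the other to produce a new object, and overwrite a random member
    with the product so the ensemble size stays fixed."""
    rng = random.Random(seed)
    pop = list(population)
    for _ in range(steps):
        f, g = rng.sample(range(len(pop)), 2)
        product = (pop[f][:1] + pop[g])[:max_len]  # bounded object size
        pop[rng.randrange(len(pop))] = product
    return pop
```

Even this caricature shows the key structural feature: the products of interactions re-enter the pool and shape which interactions happen next, which is the self-referential loop the paper studies.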
NASA Astrophysics Data System (ADS)
Chapanova, V.
2012-04-01
This simulation game-lesson, "Balance in Nature", gives an opportunity for the students to show creativity, work independently, and create models and ideas. It creates future-oriented thought connected to their experience, allowing them to propose solutions for global problems and take personal responsibility for their activities. The class is divided into two teams. Each team chooses questions. 1. Question: Pollution in the environment. 2. Question: Care for nature and climate. The teams work on the chosen tasks. They make drafts and notes and formulate their solutions on small pieces of paper, explaining the impact on nature and society. They express their points of view using many different opinions. This generates alternative thoughts and results in creative solutions. With the new knowledge and positive behaviour defined, everybody realizes that they can do something positive for nature and climate problems, and the importance of individuals for solving global problems becomes evident. Our main goal is to recover the ecological balance, and everybody explains his or her own well-grounded opinions. In this work process the students obtain knowledge, skills and more responsible behaviour. This process, based on their own experience, dialogue and teamwork, helps the participants' self-development. Making the model "human ↔ nature" expresses how human activities impact the natural Earth and how these impacts in turn affect society. By taking personal responsibility, we can reduce global warming and help the Earth. By helping nature we help ourselves. Teacher: Veselina Boycheva-Chapanova, "Saint Patriarch Evtimii" School, Str. "Ivan Vazov"-19, Plovdiv, Bulgaria
Balancing Computation and Experiment
Farber, Rob
2007-04-01
How do you know when the science being performed on a supercomputer reflects what actually occurs inside a test tube, or in real life? The answer can be stated simply enough: “run the model on the computer, get the result, and then perform an experiment to test the result”. The adage “easier said than done” truly applies – especially when the focus is on innovative science to benefit government and industry. The trick in this case is to find the right balance of science-driven computing integrated with experiment.
Micromechanical Oscillating Mass Balance
NASA Technical Reports Server (NTRS)
Altemir, David A. (Inventor)
1997-01-01
A micromechanical oscillating mass balance and method adapted for measuring minute quantities of material deposited at a selected location, such as during a vapor deposition process. The invention comprises a vibratory composite beam which includes a dielectric layer sandwiched between two conductive layers. The beam is positioned in a magnetic field. When an alternating current passes through one of the conductive layers, the beam oscillates, inducing an output current in the second conductive layer, which is analyzed to determine the resonant frequency of the beam. As material is deposited on the beam, the mass of the beam increases and its resonant frequency shifts, from which the added mass is determined.
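The frequency-to-mass inference at the end of the abstract can be illustrated with the ideal harmonic-oscillator relation f = (1/2π)·√(k_eff/m). The stiffness and frequencies below are illustrative numbers, not values from the patent:

```python
import math

# Added mass inferred from a downward resonant-frequency shift, assuming
# an ideal harmonic oscillator with effective stiffness k_eff:
#   f = (1 / (2*pi)) * sqrt(k_eff / m)   =>   m = k_eff / (2*pi*f)**2

def added_mass(k_eff, f_before, f_after):
    m = lambda f: k_eff / (2 * math.pi * f) ** 2
    return m(f_after) - m(f_before)      # deposition lowers the frequency

# Illustrative: a beam with k_eff = 50 N/m resonating at 10 kHz shifts
# down to 9.99 kHz after deposition.
dm_kg = added_mass(50.0, 10_000.0, 9_990.0)
print(f"added mass ~ {dm_kg * 1e12:.1f} ng")
```

A 0.1% frequency drop here corresponds to roughly 25 nanograms, which is why resonant microbalances are sensitive enough for vapor-deposition monitoring.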
NASA Astrophysics Data System (ADS)
Robinson, Ian A.
2014-04-01
The time is fast approaching when the SI unit of mass will cease to be based on a single material artefact and will instead be based upon the defined value of a fundamental constant, the Planck constant h. This change requires that techniques exist both to determine the appropriate value to be assigned to the constant and to measure mass in terms of the redefined unit. It is important to ensure that these techniques are accurate and reliable, to allow full advantage to be taken of the stability and universality provided by the new definition and to guarantee the continuity of the world's mass measurements, which can affect the measurement of many other quantities such as energy and force. Up to now, efforts to provide the basis for such a redefinition of the kilogram were mainly concerned with resolving the discrepancies between individual implementations of the two principal techniques: the x-ray crystal density (XRCD) method [1] and the watt and joule balance methods, which are the subject of this special issue. The first three papers report results from the NRC and NIST watt balance groups and the NIM joule balance group. The result from the NRC (formerly the NPL Mk II) watt balance is the first to be reported with a relative standard uncertainty below 2 × 10⁻⁸, and the NIST result has a relative standard uncertainty below 5 × 10⁻⁸. Both results are shown in figure 1 along with some previous results; the result from the NIM group is not shown on the plot but has a relative uncertainty of 8.9 × 10⁻⁶ and is consistent with all the results shown. The Consultative Committee for Mass and Related Quantities (CCM) in its meeting in 2013 produced a resolution [2] which set out the requirements for the number, type and quality of results intended to support the redefinition of the kilogram and required that there should be agreement between them. These results from NRC, NIST and the IAC may be considered to meet these requirements and are likely to be widely debated
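The watt (Kibble) balance principle behind these results is a virtual power balance: the electrical power U·I measured in the moving phase equals the mechanical power m·g·v of the weighing phase, and realizing U and I via the Josephson and quantum-Hall effects ties the mass to h. A sketch with purely illustrative numbers:

```python
# Kibble/watt balance principle: U*I = m*g*v, so m = U*I / (g*v).
# U (induced voltage at coil velocity v) and I (weighing current) are
# realized via the Josephson and quantum-Hall effects, linking m to the
# Planck constant h. All numbers below are illustrative, not NRC/NIST data.

def watt_balance_mass(U, I, g, v):
    return U * I / (g * v)

m = watt_balance_mass(U=1.0,      # V, induced in the moving phase
                      I=9.6e-3,   # A, weighing-phase current
                      g=9.81,     # m/s^2, local gravitational acceleration
                      v=2e-3)     # m/s, coil velocity
print(f"{m:.3f} kg")
```

In a real instrument each of the four quantities must be measured to parts in 10⁸ or better, which is what makes the sub-2 × 10⁻⁸ NRC uncertainty so notable.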
A constraint consensus memetic algorithm for solving constrained optimization problems
NASA Astrophysics Data System (ADS)
Hamza, Noha M.; Sarker, Ruhul A.; Essam, Daryl L.; Deb, Kalyanmoy; Elsayed, Saber M.
2014-11-01
Constraint handling is an important aspect of evolutionary constrained optimization. Currently, the mechanism used for constraint handling with evolutionary algorithms mainly assists the selection process, but not the actual search process. In this article, first a genetic algorithm is combined with a class of search methods, known as constraint consensus methods, that assist infeasible individuals to move towards the feasible region. This approach is also integrated with a memetic algorithm. The proposed algorithm is tested and analysed by solving two sets of standard benchmark problems, and the results are compared with other state-of-the-art algorithms. The comparisons show that the proposed algorithm outperforms other similar algorithms. The algorithm has also been applied to solve a practical economic load dispatch problem, where it also shows superior performance over other algorithms.
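The key ingredient named above, a constraint consensus move that steers infeasible points toward the feasible region, can be sketched as follows. This follows the basic constraint-consensus idea (each violated constraint proposes a projection-like feasibility vector; the consensus averages them), not the article's exact GA/memetic integration:

```python
# Basic constraint-consensus step for constraints g_i(x) <= 0: each
# violated constraint proposes the feasibility vector
#   t_i = -g_i(x) * grad_g_i / ||grad_g_i||^2,
# and the consensus move averages the proposals componentwise.

def consensus_step(x, constraints, grads):
    n = len(x)
    proposals = []
    for g, grad in zip(constraints, grads):
        v = g(x)
        if v <= 0:                        # constraint already satisfied
            continue
        dg = grad(x)
        norm2 = sum(d * d for d in dg)
        proposals.append([-v * d / norm2 for d in dg])
    if not proposals:
        return x                          # feasible: nothing to do
    step = [sum(p[j] for p in proposals) / len(proposals) for j in range(n)]
    return [xi + si for xi, si in zip(x, step)]

# Example: move an infeasible point toward x0 + x1 - 1 <= 0.
g = lambda x: x[0] + x[1] - 1.0
grad = lambda x: [1.0, 1.0]
x = [2.0, 2.0]
for _ in range(20):
    x = consensus_step(x, [g], [grad])
print([round(v, 3) for v in x])
```

For a single linear constraint the feasibility vector lands exactly on the boundary in one step; the averaging only matters when several violated constraints pull in different directions.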
Design and implementation of parallel multigrid algorithms
NASA Technical Reports Server (NTRS)
Chan, Tony F.; Tuminaro, Ray S.
1988-01-01
Techniques for mapping multigrid algorithms to solve elliptic PDEs on hypercube parallel computers are described and demonstrated. The need for proper data mapping to minimize communication distances is stressed, and an execution-time model is developed to show how algorithm efficiency is affected by changes in the machine and algorithm parameters. Particular attention is then given to the case of coarse computational grids, which can lead to idle processors, load imbalances, and inefficient performance. It is shown that convergence can be improved by using idle processors to solve a new problem concurrently on the fine grid defined by a splitting.
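The cycle being distributed across the hypercube can be shown serially. This is an illustrative two-grid sketch for the 1D model problem only; the paper's data mapping and concurrent coarse-grid splitting are not shown:

```python
# Serial two-grid sketch of the multigrid cycle: smooth on the fine grid,
# restrict the residual, solve on the coarse grid, interpolate the
# correction back, and smooth again.
# Model problem: 1D Poisson, -u'' = f on (0,1), u(0) = u(1) = 0.

def smooth(u, f, h, sweeps):
    for _ in range(sweeps):                       # Gauss-Seidel relaxation
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def residual(u, f, h):
    r = [0.0] * len(u)
    for i in range(1, len(u) - 1):
        r[i] = f[i] - (2 * u[i] - u[i - 1] - u[i + 1]) / (h * h)
    return r

def two_grid(u, f, h):
    smooth(u, f, h, 3)                            # pre-smoothing
    r = residual(u, f, h)
    rc = [r[2 * i] for i in range((len(u) + 1) // 2)]   # restrict (injection)
    ec = smooth([0.0] * len(rc), rc, 2 * h, 200)  # ~exact coarse solve
    for i in range(len(ec) - 1):                  # linear interpolation
        u[2 * i] += ec[i]
        u[2 * i + 1] += 0.5 * (ec[i] + ec[i + 1])
    return smooth(u, f, h, 3)                     # post-smoothing

n, h = 17, 1.0 / 16.0                             # fine grid: 17 points
f = [1.0] * n
u = [0.0] * n
for _ in range(10):
    u = two_grid(u, f, h)
print(round(u[8], 6))                             # exact midpoint value: 0.125
```

The parallel difficulty the abstract highlights is visible here: the 9-point coarse problem has far less work than the 17-point fine one, so on many processors the coarse levels leave most of the machine idle unless, as the paper proposes, idle processors are given a concurrent problem to solve.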
Geochemical mole-balance modeling with uncertain data
Parkhurst, D.L.
1997-01-01
Geochemical mole-balance models are sets of chemical reactions that quantitatively account for changes in the chemical and isotopic composition of water along a flow path. A revised mole-balance formulation that includes an uncertainty term for each chemical and isotopic datum is derived. The revised formulation comprises mole-balance equations for each element or element redox state, alkalinity, electrons, solvent water, and each isotope; a charge-balance equation and an equation that relates the uncertainty terms for pH, alkalinity, and total dissolved inorganic carbon for each aqueous solution; inequality constraints on the size of the uncertainty terms; and inequality constraints on the sign of the mole transfer of reactants. The equations and inequality constraints are solved by a modification of the simplex algorithm combined with an exhaustive search for unique combinations of aqueous solutions and reactants for which the equations and inequality constraints can be solved and the uncertainty terms minimized. Additional algorithms find only the simplest mole-balance models and determine the ranges of mixing fractions for each solution and mole transfers for each reactant that are consistent with specified limits on the uncertainty terms. The revised formulation produces simpler and more robust mole-balance models and allows the significance of mixing fractions and mole transfers to be evaluated. In an example from the central Oklahoma aquifer, inclusion of up to 5% uncertainty in the chemical data can reduce the number of reactants in mole-balance models from seven or more to as few as three, these being cation exchange, dolomite dissolution, and silica precipitation. In another example from the Madison aquifer, inclusion of the charge-balance constraint requires significant increases in the mole transfers of calcite, dolomite, and organic matter, which reduce the estimated maximum carbon 14 age of the sample by about 10,000 years, from 22,700 years to
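At its core a mole-balance model is a linear system: stoichiometric coefficients times unknown mole transfers equal the observed composition changes. A toy example (hypothetical water-chemistry changes, and far simpler than the uncertainty-aware formulation above):

```python
# Toy mole balance: find mole transfers of calcite (CaCO3), dolomite
# (CaMg(CO3)2) and CO2(g) that account for observed changes in Ca, Mg and
# total dissolved C along a flow path. The deltas are hypothetical.

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    A = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, 3):
            m = A[r][col] / A[col][col]
            A[r] = [a - m * p for a, p in zip(A[r], A[col])]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (A[r][3] - sum(A[r][c] * x[c] for c in range(r + 1, 3))) / A[r][r]
    return x

# Rows: Ca, Mg, C balances; columns: calcite, dolomite, CO2(g).
stoich = [[1.0, 1.0, 0.0],
          [0.0, 1.0, 0.0],
          [1.0, 2.0, 1.0]]
deltas = [3.0, 1.0, 6.0]                 # hypothetical mmol/kgw changes
calcite, dolomite, co2 = solve3(stoich, deltas)
print(calcite, dolomite, co2)            # 2.0 1.0 2.0
```

The paper's contribution is what happens when each datum carries an uncertainty term: the system becomes a constrained optimization solved by a modified simplex method, and many reactant combinations must be searched rather than one fixed 3×3 system.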
Feed-forward volume rendering algorithm for moderately parallel MIMD machines
NASA Technical Reports Server (NTRS)
Yagel, Roni
1993-01-01
Algorithms for direct volume rendering on parallel and vector processors are investigated. Volumes are transformed efficiently on parallel processors by dividing the data into slices and beams of voxels. Equal-sized sets of slices along one axis are distributed to processors. Parallelism is achieved at two levels. Because each slice can be transformed independently of others, processors transform their assigned slices with no communication, thus providing maximum possible parallelism at the first level. Within each slice, consecutive beams are incrementally transformed using coherency in the transformation computation. Also, coherency across slices can be exploited to further enhance performance. This coherency yields the second level of parallelism through the use of vector processing or pipelining. Other ongoing efforts include investigations into image reconstruction techniques, load balancing strategies, and improving performance.
Ghosal, Dipak; Mueller, Stephen Ng
2005-04-01
With multipath routing in mobile ad hoc networks (MANETs), a source can establish multiple routes to a destination for routing data. In MANETs, multipath routing can be used to provide route resilience, smaller end-to-end delay, and better load balancing. However, when the multiple paths are close together, transmissions of different paths may interfere with each other, causing degradation in performance. Besides reducing interference, the physical diversity of paths also improves fault tolerance. We present a purely distributed multipath protocol based on the AODV-Multipath (AODVM) protocol called AODVM with Path Diversity (AODVM/PD) that finds multiple paths with a desired degree of correlation between paths specified as an input parameter to the algorithm. We demonstrate through detailed simulation analysis that multiple paths with a low degree of correlation determined by AODVM/PD provide both smaller end-to-end delay than AODVM in networks with low mobility and better route resilience in the presence of correlated node failures.
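One simple way to quantify the "degree of correlation" between two routes is the overlap of their intermediate nodes; this is an illustrative Jaccard-style metric, not necessarily AODVM/PD's own definition:

```python
# Illustrative path-correlation metric: the Jaccard overlap of the
# intermediate (non-endpoint) nodes of two routes. 0.0 = node-disjoint
# paths, 1.0 = identical intermediate nodes.

def path_correlation(p1, p2):
    mid1, mid2 = set(p1[1:-1]), set(p2[1:-1])
    if not mid1 and not mid2:
        return 0.0
    return len(mid1 & mid2) / len(mid1 | mid2)

a = ["S", 1, 2, 3, "D"]
b = ["S", 4, 2, 5, "D"]
print(path_correlation(a, b))   # 0.2  (node 2 shared out of 5 distinct)
```

Node-disjoint paths score 0.0 and fail independently under node crashes, which is exactly the route-resilience benefit the abstract reports for low-correlation paths.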
Citraturic response to oral citric acid load
NASA Technical Reports Server (NTRS)
Sakhaee, K.; Alpern, R.; Poindexter, J.; Pak, C. Y.
1992-01-01
It is possible that some orally administered citrate may appear in urine by escaping oxidation in vivo. To determine whether this mechanism contributes to the citraturic response to potassium citrate, we measured serum and urinary citrate for 4 hours after a single oral load of citric acid (40 mEq.) in 6 normal subjects. Since citric acid does not alter acid-base balance, the effect of absorbed citrate could be isolated from that of alkali load. Serum citrate concentration increased significantly (p less than 0.05) 30 minutes after a single oral dose of citric acid and remained significantly elevated for 3 hours after citric acid load. Commensurate with this change, urinary citrate excretion peaked at 2 hours and gradually decreased during the next 2 hours after citric acid load. In contrast, serum and urinary citrate remained unaltered following the control load (no drug). Differences of the citratemic and citraturic effects between phases were significant (p less than 0.05) at 2 and 3 hours. Urinary pH, carbon dioxide pressure, bicarbonate, total carbon dioxide and ammonium did not change at any time after citric acid load, and did not differ between the 2 phases. No significant difference was noted in serum electrolytes, arterialized venous pH and carbon dioxide pressure at any time after citric acid load and between the 2 phases. Thus, the citraturic and citratemic effects of oral citric acid are largely accountable by provision of absorbed citrate, which has escaped in vivo degradation.
Balance ability and athletic performance.
Hrysomallis, Con
2011-03-01
The relationship between balance ability and sport injury risk has been established in many cases, but the relationship between balance ability and athletic performance is less clear. This review compares the balance ability of athletes from different sports, determines if there is a difference in balance ability of athletes at different levels of competition within the same sport, determines the relationship of balance ability with performance measures and examines the influence of balance training on sport performance or motor skills. Based on the available data from cross-sectional studies, gymnasts tended to have the best balance ability, followed by soccer players, swimmers, active control subjects and then basketball players. Surprisingly, no studies were found that compared the balance ability of rifle shooters with other athletes. There were some sports, such as rifle shooting, soccer and golf, where elite athletes were found to have superior balance ability compared with their less proficient counterparts, but this was not found to be the case for alpine skiing, surfing and judo. Balance ability was shown to be significantly related to rifle shooting accuracy, archery shooting accuracy, ice hockey maximum skating speed and simulated luge start speed, but not for baseball pitching accuracy or snowboarding ranking points. Prospective studies have shown that the addition of a balance training component to the activities of recreationally active subjects or physical education students has resulted in improvements in vertical jump, agility, shuttle run and downhill slalom skiing. A proposed mechanism for the enhancement in motor skills from balance training is an increase in the rate of force development. There are limited data on the influence of balance training on motor skills of elite athletes. When the effectiveness of balance training was compared with resistance training, it was found that resistance training produced superior performance results for
An Energy Efficient Stable Election-Based Routing Algorithm for Wireless Sensor Networks
Wang, Jin; Zhang, Zhongqi; Xia, Feng; Yuan, Weiwei; Lee, Sungyoung
2013-01-01
Sensor nodes usually have a limited energy supply, and recharging them is impractical. How to balance traffic load among sensors in order to increase network lifetime is a very challenging research issue. Many clustering algorithms have been proposed recently for wireless sensor networks (WSNs). However, sensor networks with one fixed sink node often suffer from a hot-spots problem, since nodes near the sink bear a heavier traffic burden during multi-hop transmission. The use of mobile sinks has been shown to be an effective technique to enhance network performance features such as latency, energy efficiency, network lifetime, etc. In this paper, a modified Stable Election Protocol (SEP), which employs a mobile sink, is proposed for WSNs with non-uniform node distribution. The sink's selection of cluster heads is based on minimizing the associated additional energy and on the residual energy at each node. In addition, each cluster head selects the shorter of two paths to reach the sink: the direct approach or the indirect approach via the nearest cluster head. Simulation results demonstrate that our algorithm has better performance than traditional routing algorithms, such as LEACH and SEP. PMID:24284767
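Two of the ideas above, electing energy-rich cluster heads and choosing between direct and relayed forwarding, can be sketched in a few lines. This is a simplified toy (residual-energy ranking plus a free-space d² transmission-energy model), not the paper's exact protocol:

```python
import math, random

# Simplified sketch: (1) the sink elects the k nodes with the most residual
# energy as cluster heads; (2) each head forwards to the mobile sink either
# directly or via its nearest neighboring head, whichever costs less
# transmission energy under a free-space d**2 attenuation model.

random.seed(1)

def dist2(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def elect_heads(nodes, energy, k):
    return sorted(range(len(nodes)), key=lambda i: -energy[i])[:k]

def route(ch, heads, nodes, sink):
    direct = dist2(nodes[ch], sink)
    relay = min((h for h in heads if h != ch),
                key=lambda h: dist2(nodes[ch], nodes[h]))
    via = dist2(nodes[ch], nodes[relay]) + dist2(nodes[relay], sink)
    return ("direct", None) if direct <= via else ("indirect", relay)

nodes = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(30)]
energy = [random.uniform(0.2, 1.0) for _ in nodes]
sink = (50.0, 150.0)                      # current mobile-sink position
heads = elect_heads(nodes, energy, k=5)
print([route(h, heads, nodes, sink)[0] for h in heads])
```

Under a d² (or harsher d⁴) energy model, relaying through a nearer head can cost less total energy even though the geometric path is longer, which is why the indirect option matters for nodes far from the sink.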
Performance evaluation of a routing algorithm based on Hopfield Neural Network for network-on-chip
NASA Astrophysics Data System (ADS)
Esmaelpoor, Jamal; Ghafouri, Abdollah
2015-12-01
Network on chip (NoC) has emerged as a solution to overcome the growing complexity and design challenges of the system on chip. A proper routing algorithm is a key issue of an NoC design. An appropriate routing method balances load across the network channels and keeps path length as short as possible. This survey investigates the performance of a routing algorithm based on a Hopfield Neural Network. It uses dynamic programming to provide optimal paths and network monitoring in real time. The aim of this article is to analyse the possibility of using a neural network as a router. The algorithm takes into account the path with the lowest delay (cost) from source to destination. In other words, the path a message takes from source to destination depends on the network traffic situation at the time, and it is the fastest one. The simulation results show that the proposed approach improves average delay, throughput and network congestion efficiently. At the same time, the increase in power consumption is almost negligible.
Micropollutant loads in the urban water cycle.
Musolff, Andreas; Leschik, Sebastian; Reinstorf, Frido; Strauch, Gerhard; Schirmer, Mario
2010-07-01
The assessment of micropollutants in the urban aquatic environment is a challenging task since both the water balance and the contaminant concentrations are characterized by a pronounced variability in time and space. In this study the water balance of a central European urban drainage catchment is quantified for a period of one year. On the basis of a concentration monitoring of several micropollutants, a contaminant mass balance for the study area's wastewater, surface water, and groundwater is derived. The release of micropollutants from the catchment was mainly driven by the discharge of the wastewater treatment plant. However, combined sewer overflows (CSO) released significant loads of caffeine, bisphenol A, and technical 4-nonylphenol. Since an estimated fraction of 9.9-13.0% of the wastewater's dry weather flow was lost as sewer leakages to the groundwater, considerable loads of bisphenol A and technical 4-nonylphenol were also released by the groundwater pathway. The different temporal dynamics of release loads by CSO as an intermittent source and groundwater as well as treated wastewater as continuous pathways may induce acute as well as chronic effects on the receiving aquatic ecosystem. This study points out the importance of the pollution pathway CSO and groundwater for the contamination assessments of urban water resources.
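The pathway comparison at the heart of such a mass balance reduces to load = concentration × water volume, aggregated per release pathway. A back-of-envelope sketch with hypothetical numbers, not the study's data:

```python
# Back-of-envelope pathway mass balance for one micropollutant:
# annual load per release pathway = mean concentration x annual volume.
# All concentrations and volumes below are hypothetical.

pathways = {  # mean concentration (ug/L), annual water volume (m^3)
    "treated wastewater": (0.8, 4.0e6),
    "combined sewer overflow": (2.5, 2.0e5),
    "sewer leakage to groundwater": (1.2, 3.0e5),
}

# ug/L * m^3 * (1000 L/m^3) / (1e6 ug/g) = grams, i.e. a factor of 1e-3
loads_g = {name: c * vol * 1e-3 for name, (c, vol) in pathways.items()}
total_g = sum(loads_g.values())
for name, load in sorted(loads_g.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {load:.0f} g/a ({100 * load / total_g:.0f}%)")
```

Note how a pathway with a small volume but a high concentration (here the CSO entry) can still contribute a significant share, and, as the study stresses, it delivers that load in short pulses rather than continuously.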
Adaptive Numerical Algorithms in Space Weather Modeling
NASA Technical Reports Server (NTRS)
Toth, Gabor; vanderHolst, Bart; Sokolov, Igor V.; DeZeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Nakib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav
2010-01-01
Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical
Adaptive numerical algorithms in space weather modeling
NASA Astrophysics Data System (ADS)
Tóth, Gábor; van der Holst, Bart; Sokolov, Igor V.; De Zeeuw, Darren L.; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Najib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav
2012-02-01
Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different relevant physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solarwind Roe-type Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamic (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit
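A common way to load-balance a block-adaptive grid, and the general idea behind toolkits like BATL rather than its actual algorithm, is to order the blocks along a space-filling curve and cut the curve into contiguous, equal-count chunks, one per processor:

```python
# Space-filling-curve load balancing sketch for a 2D block-adaptive grid
# (the general technique, not BATL's actual implementation): order blocks
# along a Morton (Z-order) curve, then cut the curve into contiguous
# chunks of nearly equal size. Neighboring blocks mostly stay on the same
# processor, which keeps message-passing volume low.

def morton(ix, iy, bits=16):
    key = 0
    for b in range(bits):                 # interleave the coordinate bits
        key |= ((ix >> b) & 1) << (2 * b)
        key |= ((iy >> b) & 1) << (2 * b + 1)
    return key

def balance(blocks, nproc):
    ordered = sorted(blocks, key=lambda b: morton(*b))
    quota = len(blocks) / nproc
    return {p: ordered[round(p * quota):round((p + 1) * quota)]
            for p in range(nproc)}

blocks = [(ix, iy) for ix in range(8) for iy in range(8)]   # 64 blocks
parts = balance(blocks, nproc=4)
print([len(parts[p]) for p in range(4)])
```

When refinement adds or removes blocks, the curve is simply re-cut, so rebalancing moves only the blocks near the chunk boundaries.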
Optimization of line configuration and balancing for flexible machining lines
NASA Astrophysics Data System (ADS)
Liu, Xuemei; Li, Aiping; Chen, Zurui
2016-05-01
Line configuration and balancing is to select the type of line and allot a given set of operations as well as machines to a sequence of workstations to realize high-efficiency production. Most current research on machining line configuration and balancing problems relates to dedicated transfer lines with dedicated machine workstations. With growing trends towards great product variety and fluctuations in market demand, dedicated transfer lines are being replaced with flexible machining lines composed of identical CNC machines. This paper deals with the line configuration and balancing problem for flexible machining lines. The objective is to assign operations to workstations and find the sequence of execution, specifying the number of machines in each workstation while minimizing the line cycle time and total number of machines. This problem is subject to precedence, clustering, accessibility and capacity constraints among the features, operations, setups and workstations. A mathematical model and a heuristic algorithm based on a feature group strategy and polychromatic sets theory are presented to find an optimal solution. The feature group strategy and polychromatic sets theory are used to establish the constraint model. A heuristic operations sequencing and assignment algorithm is given. An industrial case study is carried out, and multiple optimal solutions in different line configurations are obtained. The case study results show that the solutions with shorter cycle time and higher line balancing rate demonstrate the feasibility and effectiveness of the proposed algorithm. This research proposes a heuristic line configuration and balancing algorithm based on a feature group strategy and polychromatic sets theory which is able to provide better solutions while achieving an improvement in computing time.
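The basic assignment problem underneath line balancing can be illustrated with a plain precedence-respecting greedy heuristic (largest candidate first). This is a textbook-style sketch with made-up operation times, not the article's polychromatic-sets method:

```python
# Simplified line-balancing sketch: assign operations to workstations in
# sequence, always picking the longest ready operation that still fits,
# and opening a new station when the cycle time would be exceeded.
# Assumes every operation time is <= cycle_time.

def balance_line(times, preds, cycle_time):
    stations, assigned = [], set()
    while len(assigned) < len(times):
        station, load = [], 0.0
        while True:
            ready = [op for op in times
                     if op not in assigned
                     and preds.get(op, set()) <= assigned
                     and load + times[op] <= cycle_time]
            if not ready:
                break
            op = max(ready, key=lambda o: times[o])   # largest candidate first
            station.append(op); assigned.add(op); load += times[op]
        if not station:
            raise ValueError("an operation exceeds the cycle time")
        stations.append(station)
    return stations

# Hypothetical 5-operation job with precedence constraints.
times = {"A": 4, "B": 3, "C": 5, "D": 2, "E": 6}
preds = {"C": {"A"}, "D": {"A", "B"}, "E": {"C", "D"}}
print(balance_line(times, preds, cycle_time=8))
```

A real flexible-line formulation additionally decides the number of identical CNC machines per station and must respect the clustering and accessibility constraints named in the abstract, which is what pushes the paper beyond a simple greedy rule.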
Development of a 5-Component Balance for Water Tunnel Applications
NASA Technical Reports Server (NTRS)
Suarez, Carlos J.; Kramer, Brian R.; Smith, Brooke C.
1999-01-01
The principal objective of this research/development effort was to develop a multi-component strain gage balance to measure both static and dynamic forces and moments on models tested in flow visualization water tunnels. A balance was designed that allows measuring normal and side forces, and pitching, yawing and rolling moments (no axial force). The balance mounts internally in the model and is used in a manner typical of wind tunnel balances. The key differences between a water tunnel balance and a wind tunnel balance are the requirement for very high sensitivity, since the loads are very low (a typical normal force is 90 grams, or 0.2 lb), the need for waterproofing the gage elements, and the small size required to fit into typical water tunnel models. The five-component balance was calibrated and demonstrated linearity in the responses of the primary components to applied loads, very low interactions between the sections, and no hysteresis. Static experiments were conducted in the Eidetics water tunnel with delta wings and F/A-18 models. The data were compared to forces and moments from wind tunnel tests of the same or similar configurations. The comparison showed very good agreement, providing confidence that loads can be measured accurately in the water tunnel with a relatively simple multi-component internal balance. The success of the static experiments encouraged the use of the balance for dynamic experiments. Among the advantages of conducting dynamic tests in a water tunnel are less demanding motion and data acquisition rates than in a wind tunnel test (because of the low-speed flow) and the capability of performing flow visualization and force/moment (F/M) measurements simultaneously with relative simplicity. This capability of simultaneous flow visualization and F/M measurements proved extremely useful to explain the results obtained during these dynamic tests. In general, the development of this balance should encourage the use of water tunnels for a
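The linearity and low-interaction findings above are what make applying such a calibration a single matrix multiply: loads = C · bridge outputs, where C comes from applying known loads during calibration. The matrix below is entirely made up for illustration, not the Eidetics calibration:

```python
# Applying a linear multi-component balance calibration: with linear
# responses and negligible interactions, loads = C @ bridge_outputs.
# The diagonal-dominant calibration matrix C is illustrative only.

def apply_calibration(C, volts):
    return [sum(c * v for c, v in zip(row, volts)) for row in C]

# 5 components: normal force, side force, pitch, yaw, roll (no axial).
C = [[50.0, 0.1, 0.0, 0.0, 0.0],      # hypothetical sensitivities
     [0.2, 48.0, 0.0, 0.0, 0.0],      # (e.g. grams or gram-cm per volt)
     [0.0, 0.0, 12.0, 0.1, 0.0],
     [0.0, 0.0, 0.0, 11.5, 0.0],
     [0.0, 0.0, 0.1, 0.0, 9.0]]

volts = [1.80, 0.30, 0.50, 0.20, 0.10]    # bridge outputs for one sample
loads = apply_calibration(C, volts)
print([round(x, 2) for x in loads])
```

The small off-diagonal terms model the "very low interactions" the calibration demonstrated; a balance with strong interactions would need the full matrix (or higher-order terms) to be identified carefully.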