Science.gov

Sample records for large-scale nonlinear optimization

  1. Large scale nonlinear programming for the optimization of spacecraft trajectories

    NASA Astrophysics Data System (ADS)

    Arrieta-Camacho, Juan Jose

    Future research directions are identified, involving the automatic scheduling and optimization of trajectory correction maneuvers. The sensitivity information provided by the methodology is expected to be invaluable in such research pursuits. The collocation scheme and nonlinear programming algorithm presented in this work complement other existing methodologies by providing reliable and efficient numerical methods able to handle large scale, nonlinear dynamic models.

  2. Large scale nonlinear numerical optimal control for finite element models of flexible structures

    NASA Technical Reports Server (NTRS)

    Shoemaker, Christine A.; Liao, Li-Zhi

    1990-01-01

    This paper discusses the development of large scale numerical optimal control algorithms for nonlinear systems and their application to finite element models of structures. This work is based on our expansion of the differential dynamic programming (DDP) optimal control algorithm in the following steps: improvement of convergence for initial policies in non-convex regions, development of a numerically accurate penalty function approach for constrained DDP problems, and parallel processing on supercomputers. The expanded constrained DDP algorithm was applied to the control of a four-bay, two-dimensional truss with 12 soft members, which generates geometric nonlinearities. Using an explicit finite element model to describe the structural system requires 32 state variables and 10,000 time steps. Our numerical results indicate that for constrained or unconstrained structural problems with nonlinear dynamics, the results obtained by our expanded constrained DDP are significantly better than those obtained using linear-quadratic feedback control.
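
    The constrained, parallel DDP solver itself is not reproduced here; as a rough illustration of the underlying idea, the following is a minimal sketch of one DDP/iLQR sweep (a backward pass of value-function expansions followed by a forward rollout) on a toy discrete-time linear-quadratic problem. The dynamics, cost matrices, horizon, and initial state are illustrative assumptions, not values from the paper.

        import numpy as np

        # Toy discrete-time double integrator with quadratic cost (illustrative values)
        A = np.array([[1.0, 0.1], [0.0, 1.0]])
        B = np.array([[0.0], [0.1]])
        Q = np.diag([1.0, 0.1]); R = np.array([[0.01]]); Qf = np.diag([10.0, 1.0])
        N, x0 = 50, np.array([1.0, 0.0])

        def rollout(x0, us):
            xs = [x0]
            for u in us:
                xs.append(A @ xs[-1] + B @ u)
            return np.array(xs)

        def ddp_sweep(xs, us):
            # Backward pass: quadratic expansion of the value function along (xs, us)
            Vx, Vxx = Qf @ xs[-1], Qf.copy()
            ks, Ks = [], []
            for k in reversed(range(N)):
                Qx  = Q @ xs[k] + A.T @ Vx
                Qu  = R @ us[k] + B.T @ Vx
                Qxx = Q + A.T @ Vxx @ A
                Quu = R + B.T @ Vxx @ B
                Qux = B.T @ Vxx @ A
                kff = -np.linalg.solve(Quu, Qu)       # feedforward term
                Kfb = -np.linalg.solve(Quu, Qux)      # feedback gain
                Vx  = Qx + Kfb.T @ Quu @ kff + Kfb.T @ Qu + Qux.T @ kff
                Vxx = Qxx + Kfb.T @ Quu @ Kfb + Kfb.T @ Qux + Qux.T @ Kfb
                ks.append(kff); Ks.append(Kfb)
            ks.reverse(); Ks.reverse()
            # Forward pass: roll out the updated policy
            xn, un = [xs[0]], []
            for k in range(N):
                un.append(us[k] + ks[k] + Ks[k] @ (xn[k] - xs[k]))
                xn.append(A @ xn[-1] + B @ un[-1])
            return np.array(xn), np.array(un)

        us = np.zeros((N, 1))
        xs = rollout(x0, us)
        for _ in range(5):          # converges in a single sweep for this linear-quadratic case
            xs, us = ddp_sweep(xs, us)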

  3. On large-scale nonlinear programming techniques for solving optimal control problems

    SciTech Connect

    Faco, J.L.D.

    1994-12-31

    The formulation of decision problems by Optimal Control Theory allows the consideration of their dynamic structure and parameter estimation. This paper deals with techniques for choosing directions in the iterative solution of discrete-time optimal control problems. A unified formulation incorporates nonlinear performance criteria and dynamic equations, time delays, bounded state and control variables, free planning horizon and variable initial state vector. In general they are characterized by a large number of variables, especially when arising from discretization of continuous-time optimal control or calculus of variations problems. In a GRG context the staircase structure of the Jacobian matrix of the dynamic equations is exploited in the choice of basic and superbasic variables and when changes of basis occur along the process. The search directions of the bound-constrained nonlinear programming problem in the reduced space of the superbasic variables are computed by large-scale NLP techniques. A modified Polak-Ribiere conjugate gradient method and a limited storage quasi-Newton BFGS method are analyzed, and modifications to deal with the bounds on the variables are suggested based on projected gradient devices with specific linesearches. Some practical models are presented for electric generation planning and fishery management, and the application of the code GRECO - Gradient REduit pour la Commande Optimale - is discussed.
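
    As a rough illustration of the projected-gradient device mentioned above for bound-constrained subproblems, the following is a minimal sketch of projected gradient descent on a toy box-constrained quadratic. The objective, bounds, and step size are illustrative assumptions; this is not the GRECO implementation.

        import numpy as np

        def projected_gradient(grad, x0, lo, hi, step=1e-2, iters=1000, tol=1e-10):
            """Gradient step followed by projection onto the box [lo, hi]."""
            x = np.clip(x0, lo, hi)
            for _ in range(iters):
                x_new = np.clip(x - step * grad(x), lo, hi)
                if np.linalg.norm(x_new - x) < tol:
                    break
                x = x_new
            return x

        # Toy bound-constrained quadratic: minimize 0.5*x'Qx + b'x subject to 0 <= x <= 1
        Qm = np.array([[3.0, 1.0], [1.0, 2.0]])
        b = np.array([-4.0, -3.0])
        x_star = projected_gradient(lambda x: Qm @ x + b, np.zeros(2), lo=0.0, hi=1.0)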

  4. Very Large Scale Optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)

    2002-01-01

    The purpose of this research under the NASA Small Business Innovative Research program was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s external penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem in excess of 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.
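
    The BIGDOT software itself is not shown here; as a minimal sketch of the exterior penalty idea described above, the loop below minimizes a sequence of unconstrained penalized problems with an increasing penalty parameter. The toy objective, constraint, starting point, and penalty update factor are illustrative assumptions.

        import numpy as np
        from scipy.optimize import minimize

        def f(x):                          # toy objective
            return (x[0] - 2.0)**2 + (x[1] - 1.0)**2

        def g(x):                          # toy inequality constraint: g(x) <= 0
            return np.array([x[0] + x[1] - 2.0])

        def penalized(x, r):
            # Exterior quadratic penalty: only violated constraints contribute
            return f(x) + r * np.sum(np.maximum(0.0, g(x))**2)

        x, r = np.zeros(2), 1.0
        for _ in range(8):
            x = minimize(lambda z: penalized(z, r), x, method="L-BFGS-B").x
            r *= 10.0                      # tighten the penalty each outer iteration
        # x approaches the constrained optimum near (1.5, 0.5)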

  5. Breaking Computational Barriers: Real-time Analysis and Optimization with Large-scale Nonlinear Models via Model Reduction

    SciTech Connect

    Carlberg, Kevin Thomas; Drohmann, Martin; Tuminaro, Raymond S.; Boggs, Paul T.; Ray, Jaideep; van Bloemen Waanders, Bart Gustaaf

    2014-10-01

    Model reduction for dynamical systems is a promising approach for reducing the computational cost of large-scale physics-based simulations to enable high-fidelity models to be used in many-query (e.g., Bayesian inference) and near-real-time (e.g., fast-turnaround simulation) contexts. While model reduction works well for specialized problems such as linear time-invariant systems, it is much more difficult to obtain accurate, stable, and efficient reduced-order models (ROMs) for systems with general nonlinearities. This report describes several advances that enable nonlinear reduced-order models (ROMs) to be deployed in a variety of time-critical settings. First, we present an error bound for the Gauss-Newton with Approximated Tensors (GNAT) nonlinear model reduction technique. This bound allows the state-space error for the GNAT method to be quantified when applied with the backward Euler time-integration scheme. Second, we present a methodology for preserving classical Lagrangian structure in nonlinear model reduction. This technique guarantees that important properties, such as energy conservation and symplectic time-evolution maps, are preserved when performing model reduction for models described by a Lagrangian formalism (e.g., molecular dynamics, structural dynamics). Third, we present a novel technique for decreasing the temporal complexity, defined as the number of Newton-like iterations performed over the course of the simulation, by exploiting time-domain data. Fourth, we describe a novel method for refining projection-based reduced-order models a posteriori using a goal-oriented framework similar to mesh-adaptive h-refinement in finite elements. The technique allows the ROM to generate arbitrarily accurate solutions, thereby providing the ROM with a 'failsafe' mechanism in the event of insufficient training data. Finally, we present the reduced-order model error surrogate (ROMES) method for statistically quantifying reduced-order-model errors. This

  6. Robust large-scale parallel nonlinear solvers for simulations.

    SciTech Connect

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any existing linear solver, which makes it simple to write
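
    As a rough illustration of the secant-update idea behind Broyden's method discussed above, the following is a minimal dense-Jacobian sketch on a toy 2x2 nonlinear system. The report's limited-memory variant stores the rank-one updates rather than an explicit Jacobian; the test system, initial guess, and tolerances here are illustrative assumptions.

        import numpy as np

        def broyden(F, x0, iters=50, tol=1e-10):
            """Broyden's 'good' method: rank-one secant updates of an approximate Jacobian."""
            x = x0.copy()
            Fx = F(x)
            J = np.eye(len(x))                 # initial Jacobian approximation
            for _ in range(iters):
                dx = np.linalg.solve(J, -Fx)
                x_new = x + dx
                F_new = F(x_new)
                if np.linalg.norm(F_new) < tol:
                    return x_new
                dF = F_new - Fx
                # Secant (rank-one) update so that J_new @ dx == dF
                J += np.outer(dF - J @ dx, dx) / (dx @ dx)
                x, Fx = x_new, F_new
            return x

        # Example: 2x2 nonlinear system with solution near (sqrt(2), sqrt(2))
        F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])
        print(broyden(F, np.array([1.0, 2.0])))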

  7. New methods for large scale local and global optimization

    NASA Astrophysics Data System (ADS)

    Byrd, Richard; Schnabel, Robert

    1994-07-01

    We have pursued all three topics described in the proposal during this research period. A large amount of effort has gone into the development of large scale global optimization methods for molecular configuration problems. We have developed new general purpose methods that combine efficient stochastic global optimization techniques with several new, more deterministic techniques that account for most of the computational effort, and the success, of the methods. We have applied our methods to Lennard-Jones problems with up to 75 atoms, to water clusters with up to 31 molecules, and to polymers with up to 58 amino acids. The results appear to be the best so far by general purpose optimization methods, and appear to be leading to some interesting chemistry issues. Our research on the second topic, tensor methods, has addressed several areas. We have designed and implemented tensor methods for large sparse systems of nonlinear equations and nonlinear least squares, and have obtained excellent test results on a wide range of problems. We have also developed new tensor methods for nonlinearly constrained optimization problems, and have obtained promising theoretical and preliminary computational results. Finally, on the third topic, limited memory methods for large scale optimization, we have developed and implemented new, extremely efficient limited memory methods for bound constrained problems, and new limited memory trust region methods, both using our recently developed compact representations for quasi-Newton matrices. Computational test results for both methods are promising.

  8. Implicit solvers for large-scale nonlinear problems

    SciTech Connect

    Keyes, D E; Reynolds, D; Woodward, C S

    2006-07-13

    Computational scientists are grappling with increasingly complex, multi-rate applications that couple such physical phenomena as fluid dynamics, electromagnetics, radiation transport, chemical and nuclear reactions, and wave and material propagation in inhomogeneous media. Parallel computers with large storage capacities are paving the way for high-resolution simulations of coupled problems; however, hardware improvements alone will not prove enough to enable simulations based on brute-force algorithmic approaches. To accurately capture nonlinear couplings between dynamically relevant phenomena, often while stepping over rapid adjustments to quasi-equilibria, simulation scientists are increasingly turning to implicit formulations that require a discrete nonlinear system to be solved for each time step or steady state solution. Recent advances in iterative methods have made fully implicit formulations a viable option for solution of these large-scale problems. In this paper, we overview one of the most effective iterative methods, Newton-Krylov, for nonlinear systems and point to software packages with its implementation. We illustrate the method with an example from magnetically confined plasma fusion and briefly survey other areas in which implicit methods have bestowed important advantages, such as allowing high-order temporal integration and providing a pathway to sensitivity analyses and optimization. Lastly, we overview algorithm extensions under development motivated by current SciDAC applications.
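
    The software packages referred to above are not reproduced here; as a minimal sketch of the Jacobian-free Newton-Krylov idea, the snippet below approximates Jacobian-vector products with finite-difference directional derivatives and passes them to a Krylov solver. The toy residual, perturbation size, and iteration counts are illustrative assumptions.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        def F(x):  # toy nonlinear residual with solution (1, 2)
            return np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])

        def jfnk(F, x0, newton_iters=20, tol=1e-10, eps=1e-7):
            x = x0.copy()
            for _ in range(newton_iters):
                Fx = F(x)
                if np.linalg.norm(Fx) < tol:
                    break
                # Matrix-free Jacobian-vector product via a finite-difference directional derivative
                def Jv(v):
                    return (F(x + eps * v) - Fx) / eps
                Jop = LinearOperator((len(x), len(x)), matvec=Jv)
                dx, info = gmres(Jop, -Fx)       # inner Krylov (GMRES) solve
                x = x + dx
            return x

        print(jfnk(F, np.array([1.0, 1.0])))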

  9. A competitive swarm optimizer for large scale optimization.

    PubMed

    Cheng, Ran; Jin, Yaochu

    2015-02-01

    In this paper, a novel competitive swarm optimizer (CSO) for large scale optimization is proposed. The algorithm is fundamentally inspired by the particle swarm optimization but is conceptually very different. In the proposed CSO, neither the personal best position of each particle nor the global best position (or neighborhood best positions) is involved in updating the particles. Instead, a pairwise competition mechanism is introduced, where the particle that loses the competition will update its position by learning from the winner. To understand the search behavior of the proposed CSO, a theoretical proof of convergence is provided, together with empirical analysis of its exploration and exploitation abilities showing that the proposed CSO achieves a good balance between exploration and exploitation. Despite its algorithmic simplicity, our empirical results demonstrate that the proposed CSO exhibits a better overall performance than five state-of-the-art metaheuristic algorithms on a set of widely used large scale optimization problems and is able to effectively solve problems of dimensionality up to 5000. PMID:24860047
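
    A minimal sketch of the pairwise-competition update described above is given below: particles are paired at random, and each loser learns from its winner and from the swarm mean position. The parameter settings, bounds, and test function are illustrative assumptions rather than the authors' benchmark configuration.

        import numpy as np

        def cso(f, dim=100, swarm=60, iters=300, phi=0.1, lo=-5.0, hi=5.0):
            """Toy competitive swarm optimizer: pairwise competitions; losers learn from winners."""
            rng = np.random.default_rng(0)
            X = rng.uniform(lo, hi, (swarm, dim))
            V = np.zeros_like(X)
            for _ in range(iters):
                idx = rng.permutation(swarm)
                xbar = X.mean(axis=0)                      # swarm mean position
                for a, b in zip(idx[::2], idx[1::2]):      # random pairing
                    w, l = (a, b) if f(X[a]) < f(X[b]) else (b, a)   # winner / loser
                    r1, r2, r3 = rng.random((3, dim))
                    # Only the loser is updated; the winner passes through unchanged
                    V[l] = r1 * V[l] + r2 * (X[w] - X[l]) + phi * r3 * (xbar - X[l])
                    X[l] = np.clip(X[l] + V[l], lo, hi)
            best = min(range(swarm), key=lambda i: f(X[i]))
            return X[best], f(X[best])

        sphere = lambda x: float(np.sum(x**2))
        x_best, f_best = cso(sphere)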

  10. Quantum Noise in Large-Scale Coherent Nonlinear Photonic Circuits

    NASA Astrophysics Data System (ADS)

    Santori, Charles; Pelc, Jason S.; Beausoleil, Raymond G.; Tezak, Nikolas; Hamerly, Ryan; Mabuchi, Hideo

    2014-06-01

    A semiclassical simulation approach is presented for studying quantum noise in large-scale photonic circuits incorporating an ideal Kerr nonlinearity. A circuit solver is used to generate matrices defining a set of stochastic differential equations, in which the resonator field variables represent random samplings of the Wigner quasiprobability distributions. Although the semiclassical approach involves making a large-photon-number approximation, tests on one- and two-resonator circuits indicate satisfactory agreement between the semiclassical and full-quantum simulation results in the parameter regime of interest. The semiclassical model is used to simulate random errors in a large-scale circuit that contains 88 resonators and hundreds of components in total and functions as a four-bit ripple counter. The error rate as a function of on-state photon number is examined, and it is observed that the quantum fluctuation amplitudes do not increase as signals propagate through the circuit, an important property for scalability.

  11. The workshop on iterative methods for large scale nonlinear problems

    SciTech Connect

    Walker, H.F.; Pernice, M.

    1995-12-01

    The aim of the workshop was to bring together researchers working on large scale applications with numerical specialists of various kinds. Applications that were addressed included reactive flows (combustion and other chemically reacting flows, tokamak modeling), porous media flows, cardiac modeling, chemical vapor deposition, image restoration, macromolecular modeling, and population dynamics. Numerical areas included Newton iterative (truncated Newton) methods, Krylov subspace methods, domain decomposition and other preconditioning methods, large scale optimization and optimal control, and parallel implementations and software. This report offers a brief summary of workshop activities and information about the participants. Interested readers are encouraged to look into the online proceedings available at http://www.usi.utah.edu/logan.proceedings. There, the material offered here is augmented with hypertext abstracts that include links to locations such as speakers' home pages, PostScript copies of talks and papers, cross-references to related talks, and other information about topics addressed at the workshop.

  12. Global smoothing and continuation for large-scale molecular optimization

    SciTech Connect

    More, J.J.; Wu, Zhijun

    1995-10-01

    We discuss the formulation of optimization problems that arise in the study of distance geometry, ionic systems, and molecular clusters. We show that continuation techniques based on global smoothing are applicable to these molecular optimization problems, and we outline the issues that must be resolved in the solution of large-scale molecular optimization problems.
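
    As a rough one-dimensional illustration of continuation via global smoothing, the sketch below uses an objective built from Gaussian wells, for which the Gaussian-smoothed transform has a closed form; the minimizer of the heavily smoothed surface is tracked as the smoothing parameter is reduced to zero. The well parameters and smoothing schedule are illustrative assumptions, not values from the report.

        import numpy as np
        from scipy.optimize import minimize

        # Objective: a sum of Gaussian wells. Convolving each well with a Gaussian kernel of
        # width s gives another Gaussian well with widened variance, so smoothing is exact here.
        centers = np.array([-3.0, 0.5, 4.0])
        depths  = np.array([1.0, 2.0, 1.5])
        widths  = np.array([0.3, 0.4, 0.5])

        def smoothed(x, s):
            w2 = widths**2 + s**2
            return -np.sum(depths * widths / np.sqrt(w2) * np.exp(-(x - centers)**2 / (2.0 * w2)))

        x = np.array([-5.0])                     # start far from the global well
        for s in [4.0, 2.0, 1.0, 0.5, 0.0]:      # continuation schedule; s = 0 is the original surface
            x = minimize(lambda z: smoothed(z[0], s), x, method="BFGS").x
        print(x)   # tracks toward the deepest well near x = 0.5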

  13. Decomposition and coordination of large-scale operations optimization

    NASA Astrophysics Data System (ADS)

    Cheng, Ruoyu

    Nowadays, highly integrated manufacturing has resulted in more and more large-scale industrial operations. As one of the most effective strategies to ensure high-level operations in modern industry, large-scale engineering optimization has garnered a great amount of interest from academic scholars and industrial practitioners. Large-scale optimization problems frequently occur in industrial applications, and many of them naturally present special structure or can be transformed to take on special structure. Some decomposition and coordination methods have the potential to solve these problems at a reasonable speed. This thesis focuses on three classes of large-scale optimization problems: linear programming, quadratic programming, and mixed-integer programming problems. The main contributions include the design of structural complexity analysis for investigating scaling behavior and computational efficiency of decomposition strategies, novel coordination techniques and algorithms to improve the convergence behavior of decomposition and coordination methods, as well as the development of a decentralized optimization framework which embeds the decomposition strategies in a distributed computing environment. The complexity study can provide fundamental guidelines for practical applications of the decomposition and coordination methods. In this thesis, several case studies indicate the viability of the proposed decentralized optimization techniques for real industrial applications. A pulp mill benchmark problem is used to investigate the applicability of the LP/QP decentralized optimization strategies, while a truck allocation problem in the decision support of mining operations is used to study the MILP decentralized optimization strategies.

  14. Nonlinear density fluctuation field theory for large scale structure

    NASA Astrophysics Data System (ADS)

    Zhang, Yang; Miao, Hai-Xing

    2009-05-01

    We develop an effective field theory of density fluctuations for a Newtonian self-gravitating N-body system in quasi-equilibrium and apply it to a homogeneous universe with small density fluctuations. Keeping the density fluctuations up to second order, we obtain the nonlinear field equation of the 2-pt correlation ξ(r), which contains the 3-pt correlation and formal ultraviolet divergences. By the Groth-Peebles hierarchical ansatz and mass renormalization, the equation becomes closed with two new terms beyond the Gaussian approximation, and their coefficients are taken as parameters. The analytic solution is obtained in terms of the hypergeometric functions, which is checked numerically. With one single set of two fixed parameters, the correlation ξ(r) and the corresponding power spectrum P(k) simultaneously match the results from all the major surveys, such as APM, SDSS, 2dFGRS, and REFLEX. The model gives a unifying understanding of several seemingly unrelated features of large scale structure from a field-theoretical perspective. The theory is worth extending to study the evolution effects in an expanding universe.

  15. A multilevel optimization of large-scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Siljak, D. D.; Sundareshan, M. K.

    1976-01-01

    A multilevel feedback control scheme is proposed for optimization of large-scale systems composed of a number of (not necessarily weakly coupled) subsystems. Local controllers are used to optimize each subsystem, ignoring the interconnections. Then, a global controller may be applied to minimize the effect of interconnections and improve the performance of the overall system. At the cost of suboptimal performance, this optimization strategy ensures invariance of suboptimality and stability of the systems under structural perturbations whereby subsystems are disconnected and again connected during operation.

  16. Large-Scale Optimization for Bayesian Inference in Complex Systems

    SciTech Connect

    Willcox, Karen; Marzouk, Youssef

    2013-11-12

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their

  17. Geospatial Optimization of Siting Large-Scale Solar Projects

    SciTech Connect

    Macknick, J.; Quinby, T.; Caulfield, E.; Gerritsen, M.; Diffendorfer, J.; Haines, S.

    2014-03-01

    Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.

  18. Newton iterative methods for large scale nonlinear systems

    SciTech Connect

    Walker, H.F.; Turner, K.

    1993-01-01

    Objective is to develop robust, efficient Newton iterative methods for general large scale problems well suited for discretizations of partial differential equations, integral equations, and other continuous problems. A concomitant objective is to develop improved iterative linear algebra methods. We first outline research on Newton iterative methods and then review work on iterative linear algebra methods. (DLC)

  19. Efficient multiobjective optimization scheme for large scale structures

    NASA Astrophysics Data System (ADS)

    Grandhi, Ramana V.; Bharatram, Geetha; Venkayya, V. B.

    1992-09-01

    This paper presents a multiobjective optimization algorithm for an efficient design of large scale structures. The algorithm is based on generalized compound scaling techniques to reach the intersection of multiple functions. Multiple objective functions are treated similar to behavior constraints. Thus, any number of objectives can be handled in the formulation. Pseudo targets on objectives are generated at each iteration in computing the scale factors. The algorithm develops a partial Pareto set. This method is computationally efficient due to the fact that it does not solve many single objective optimization problems in reaching the Pareto set. The computational efficiency is compared with other multiobjective optimization methods, such as the weighting method and the global criterion method. Trusses, plate, and wing structure design cases with stress and frequency considerations are presented to demonstrate the effectiveness of the method.

  20. Optimal Wind Energy Integration in Large-Scale Electric Grids

    NASA Astrophysics Data System (ADS)

    Albaijat, Mohammad H.

    The major concern in electric grid operation is operating in the most economical and reliable fashion to ensure affordability and continuity of electricity supply. This dissertation investigates challenges that affect electric grid reliability and economic operation. These challenges are: 1. Congestion of transmission lines, 2. Transmission line expansion, 3. Large-scale wind energy integration, and 4. Phasor Measurement Unit (PMU) optimal placement for highest electric grid observability. Performing congestion analysis aids in evaluating the required increase of transmission line capacity in electric grids. However, it is necessary to evaluate the expansion of transmission line capacity with methods that ensure optimal electric grid operation. Therefore, the expansion of transmission line capacity must enable grid operators to provide low-cost electricity while maintaining reliable operation of the electric grid. Because congestion affects the reliability of delivering power and increases its cost, congestion analysis in electric grid networks is an important subject. Consequently, next-generation electric grids require novel methodologies for studying and managing congestion in electric grids. We suggest a novel method of long-term congestion management in large-scale electric grids. Owing to the complexity and size of transmission line systems and the competitive nature of current grid operation, it is important for electric grid operators to determine how much transmission line capacity to add. Traditional questions requiring answers are "Where" to add, "How much capacity" to add, and "At which voltage level". Because of electric grid deregulation, transmission line expansion is more complicated, as building new transmission lines is now open to investors whose main interest is to generate revenue. Adding new transmission capacity will help the system to relieve the transmission system congestion, create

  1. The GRG approach for large-scale optimization

    SciTech Connect

    Drud, A.

    1994-12-31

    The Generalized Reduced Gradient (GRG) algorithm for general Nonlinear Programming (NLP) has been used successfully for over 25 years. The ideas of the original GRG algorithm have been modified and have absorbed developments in unconstrained optimization, linear programming, sparse matrix techniques, etc. The talk will review the essential aspects of the GRG approach and will discuss current development trends, especially related to very large models. Examples will be based on the CONOPT implementation.

  2. Cloud-based large-scale air traffic flow optimization

    NASA Astrophysics Data System (ADS)

    Cao, Yi

    The ever-increasing traffic demand makes the efficient use of airspace an imperative mission, and this paper presents an effort in response to this call. Firstly, a new aggregate model, called the Link Transmission Model (LTM), is proposed, which models the nationwide traffic as a network of flight routes identified by origin-destination pairs. The traversal time of a flight route is assumed to be the mode of the distribution of historical flight records, and the mode is estimated by using Kernel Density Estimation. As this simplification abstracts away physical trajectory details, the complexity of modeling is drastically decreased, resulting in efficient traffic forecasting. The predictive capability of LTM is validated against recorded traffic data. Secondly, a nationwide traffic flow optimization problem with airport and en route capacity constraints is formulated based on LTM. The optimization problem aims at alleviating traffic congestion with minimal global delays. This problem is intractable due to millions of variables. A dual decomposition method is applied to decompose the large-scale problem such that the subproblems are solvable. However, the whole problem is still computationally expensive to solve, since each subproblem is a smaller integer programming problem that pursues integer solutions. Solving an integer programming problem is known to be far more time-consuming than solving its linear relaxation. In addition, sequential execution on a standalone computer leads to linear runtime increase as the problem size increases. To address the computational efficiency problem, a parallel computing framework is designed which accommodates concurrent executions via multithreading programming. The multithreaded version is compared with its monolithic version to show decreased runtime. Finally, an open-source cloud computing framework, Hadoop MapReduce, is employed for better scalability and reliability. This framework is an "off-the-shelf" parallel computing model
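
    As a rough illustration of dual decomposition on a much simpler, continuous problem (not the paper's integer program), the sketch below splits a separable quadratic objective that is coupled only through a linear equality constraint, solves the blocks independently, and updates the multipliers by dual ascent. All problem data and the step size are illustrative assumptions.

        import numpy as np

        # Minimal dual decomposition sketch for a separable problem:
        #   minimize sum_i 0.5*||x_i - c_i||^2   subject to   sum_i x_i = b
        # The Lagrangian splits per block; each subproblem has the closed form x_i = c_i - lam.
        rng = np.random.default_rng(1)
        n_blocks, dim = 5, 3
        c = rng.normal(size=(n_blocks, dim))
        b = np.ones(dim)

        lam = np.zeros(dim)
        alpha = 0.2                        # dual (subgradient) step size
        for it in range(200):
            x = c - lam                    # independent subproblem solutions (could run in parallel)
            residual = x.sum(axis=0) - b   # coupling-constraint violation
            lam += alpha * residual        # dual ascent on the multipliers
        print(np.abs(x.sum(axis=0) - b).max())   # near zero at convergence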

  3. Nonlinear Generation of shear flows and large scale magnetic fields by small scale turbulence in the ionosphere

    NASA Astrophysics Data System (ADS)

    Aburjania, G.

    2009-04-01

    EGU2009-233. Nonlinear Generation of shear flows and large scale magnetic fields by small scale turbulence in the ionosphere, by G. Aburjania. Contact: George Aburjania, g.aburjania@gmail.com, aburj@mymail.ge

  4. Large-scale optimal sensor array management for target tracking

    NASA Astrophysics Data System (ADS)

    Tharmarasa, Ratnasingham; Kirubarajan, Thiagalingam; Hernandez, Marcel L.

    2004-01-01

    Large-scale sensor array management has applications in a number of target tracking problems. For example, in ground target tracking, hundreds or even thousands of unattended ground sensors (UGS) may be dropped over a large surveillance area. At any one time it may then only be possible to utilize a very small number of the available sensors at the fusion center because of bandwidth limitations. A similar situation may arise in tracking sea surface or underwater targets using a large number of sonobuoys. The general problem is then to select a subset of the available sensors in order to optimize tracking performance. The Posterior Cramer-Rao Lower Bound (PCRLB), which quantifies the obtainable accuracy of target state estimation, is used as the basis for network management. In a practical scenario with even hundreds of sensors, the number of possible sensor combinations would make it impossible to enumerate all possibilities in real-time. Efficient local (or greedy) search techniques must then be used to make the computational load manageable. In this paper we introduce an efficient search strategy for selecting a subset of the sensor array for use during each sensor change interval in multi-target tracking. Simulation results illustrating the performance of the sensor array manager are also presented.

  5. Large-scale optimal sensor array management for target tracking

    NASA Astrophysics Data System (ADS)

    Tharmarasa, Ratnasingham; Kirubarajan, Thiagalingam; Hernandez, Marcel L.

    2003-12-01

    Large-scale sensor array management has applications in a number of target tracking problems. For example, in ground target tracking, hundreds or even thousands of unattended ground sensors (UGS) may be dropped over a large surveillance area. At any one time it may then only be possible to utilize a very small number of the available sensors at the fusion center because of bandwidth limitations. A similar situation may arise in tracking sea surface or underwater targets using a large number of sonobuoys. The general problem is then to select a subset of the available sensors in order to optimize tracking performance. The Posterior Cramer-Rao Lower Bound (PCRLB), which quantifies the obtainable accuracy of target state estimation, is used as the basis for network management. In a practical scenario with even hundreds of sensors, the number of possible sensor combinations would make it impossible to enumerate all possibilities in real-time. Efficient local (or greedy) search techniques must then be used to make the computational load manageable. In this paper we introduce an efficient search strategy for selecting a subset of the sensor array for use during each sensor change interval in multi-target tracking. Simulation results illustrating the performance of the sensor array manager are also presented.

  6. Operational optimization of large-scale parallel-unit SWRO desalination plant using differential evolution algorithm.

    PubMed

    Wang, Jian; Wang, Xiaolong; Jiang, Aipeng; Jiangzhou, Shu; Li, Ping

    2014-01-01

    A large-scale parallel-unit seawater reverse osmosis desalination plant contains many reverse osmosis (RO) units. If the operating conditions change, these RO units will not work at the optimal design points which are computed before the plant is built. The operational optimization problem (OOP) of the plant is to find a schedule of operation that minimizes the total running cost when such changes happen. In this paper, the OOP is modelled as a mixed-integer nonlinear programming problem. A two-stage differential evolution algorithm is proposed to solve this OOP. Experimental results show that the proposed method is satisfactory in solution quality. PMID:24701180

  7. Operational Optimization of Large-Scale Parallel-Unit SWRO Desalination Plant Using Differential Evolution Algorithm

    PubMed Central

    Wang, Xiaolong; Jiang, Aipeng; Jiangzhou, Shu; Li, Ping

    2014-01-01

    A large-scale parallel-unit seawater reverse osmosis desalination plant contains many reverse osmosis (RO) units. If the operating conditions change, these RO units will not work at the optimal design points which are computed before the plant is built. The operational optimization problem (OOP) of the plant is to find a schedule of operation that minimizes the total running cost when such changes happen. In this paper, the OOP is modelled as a mixed-integer nonlinear programming problem. A two-stage differential evolution algorithm is proposed to solve this OOP. Experimental results show that the proposed method is satisfactory in solution quality. PMID:24701180

  8. Adaptive Fault-Tolerant Control of Uncertain Nonlinear Large-Scale Systems With Unknown Dead Zone.

    PubMed

    Chen, Mou; Tao, Gang

    2016-08-01

    In this paper, an adaptive neural fault-tolerant control scheme is proposed and analyzed for a class of uncertain nonlinear large-scale systems with unknown dead zone and external disturbances. To tackle the unknown nonlinear interaction functions in the large-scale system, the radial basis function neural network (RBFNN) is employed to approximate them. To further handle the unknown approximation errors and the effects of the unknown dead zone and external disturbances, integrated as the compounded disturbances, the corresponding disturbance observers are developed for their estimations. Based on the outputs of the RBFNN and the disturbance observer, the adaptive neural fault-tolerant control scheme is designed for uncertain nonlinear large-scale systems by using a decentralized backstepping technique. The closed-loop stability of the adaptive control system is rigorously proved via Lyapunov analysis and the satisfactory tracking performance is achieved under the integrated effects of unknown dead zone, actuator fault, and unknown external disturbances. Simulation results of a mass-spring-damper system are given to illustrate the effectiveness of the proposed adaptive neural fault-tolerant control scheme for uncertain nonlinear large-scale systems. PMID:26340792

  9. Iterative methods for large scale nonlinear and linear systems. Final report, 1994--1996

    SciTech Connect

    Walker, H.F.

    1997-09-01

    The major goal of this research has been to develop improved numerical methods for the solution of large-scale systems of linear and nonlinear equations, such as occur almost ubiquitously in the computational modeling of physical phenomena. The numerical methods of central interest have been Krylov subspace methods for linear systems, which have enjoyed great success in many large-scale applications, and Newton-Krylov methods for nonlinear problems, which use Krylov subspace methods to solve approximately the linear systems that characterize Newton steps. Krylov subspace methods have undergone a remarkable development over the last decade or so and are now very widely used for the iterative solution of large-scale linear systems, particularly those that arise in the discretization of partial differential equations (PDEs) that occur in computational modeling. Newton-Krylov methods have enjoyed parallel success and are currently used in many nonlinear applications of great scientific and industrial importance. In addition to their effectiveness on important problems, Newton-Krylov methods also offer a nonlinear framework within which to transfer to the nonlinear setting any advances in Krylov subspace methods or preconditioning techniques, or new algorithms that exploit advanced machine architectures. This research has resulted in a number of improved Krylov and Newton-Krylov algorithms together with applications of these to important linear and nonlinear problems.

  10. Recent developments in large-scale structural optimization

    NASA Technical Reports Server (NTRS)

    Venkayya, Vipperla B.

    1989-01-01

    A brief discussion is given of mathematical optimization and the motivation for the development of more recent numerical search procedures. A review of recent developments and issues in multidisciplinary optimization is also presented. These developments are discussed in the context of the preliminary design of aircraft structures. A capability description of programs FASTOP, TSO, STARS, LAGRANGE, ELFINI and ASTROS is included.

  11. Solving Large Scale Nonlinear Eigenvalue Problem in Next-Generation Accelerator Design

    SciTech Connect

    Liao, Ben-Shan; Bai, Zhaojun; Lee, Lie-Quan; Ko, Kwok; /SLAC

    2006-09-28

    A number of numerical methods, including inverse iteration, the method of successive linear problems, and the nonlinear Arnoldi algorithm, are studied in this paper to solve a large scale nonlinear eigenvalue problem arising from the finite element analysis of resonant frequencies and external Q_e values of a waveguide-loaded cavity in the next-generation accelerator design. The authors present a nonlinear Rayleigh-Ritz iterative projection algorithm, NRRIT in short, and demonstrate that it is the most promising approach for a model-scale cavity design. The NRRIT algorithm is an extension of the nonlinear Arnoldi algorithm due to Voss. Computational challenges of solving such a nonlinear eigenvalue problem for a full-scale cavity design are outlined.

  12. Small parametric model for nonlinear dynamics of large scale cyclogenesis with wind speed variations

    NASA Astrophysics Data System (ADS)

    Erokhin, Nikolay; Shkevov, Rumen; Zolnikova, Nadezhda; Mikhailovskaya, Ludmila

    2016-07-01

    A numerical investigation is performed of a self-consistent small parametric model (SPM) for regional large-scale cyclogenesis (RLSC), using coupled nonlinear equations for the mean wind speed and the ocean surface temperature in a tropical cyclone (TC). These equations can describe different scenarios for the temporal dynamics of a powerful atmospheric vortex during its full life cycle. The numerical calculations have shown that a suitable choice of the SPM's input parameters allows the seasonal behavior of regional large-scale cyclogenesis dynamics to be described for a given number of TCs during the active season. It is shown that the SPM can also describe the wind speed variations inside the TC. Thus, by usage of the nonlinear small parametric model it is possible to study the features of the temporal dynamics of RLSC during the active season in a given region and to analyze the relationship between regional cyclogenesis parameters and different external factors such as space weather, including the solar activity level and cosmic ray variations.

  13. Analysis of some large-scale nonlinear stochastic dynamic systems with subspace-EPC method

    NASA Astrophysics Data System (ADS)

    Er, GuoKang; Iu, VaiPan

    2011-09-01

    The probabilistic solutions to some nonlinear stochastic dynamic (NSD) systems with various polynomial types of nonlinearities in displacements are analyzed with the subspace-exponential polynomial closure (subspace-EPC) method. The space of the state variables of the large-scale nonlinear stochastic dynamic system excited by Gaussian white noises is separated into two subspaces. Both sides of the Fokker-Planck-Kolmogorov (FPK) equation corresponding to the NSD system are then integrated over one of the subspaces. The FPK equation for the joint probability density function of the state variables in the other subspace is formulated. Therefore, the FPK equations in low dimensions are obtained from the original FPK equation in high dimensions and the FPK equations in low dimensions are solvable with the exponential polynomial closure method. Examples of multi-degree-of-freedom NSD systems with various polynomial types of nonlinearities in displacements are given to show the effectiveness of the subspace-EPC method in these cases.

  14. On the importance of nonlinear couplings in large-scale neutrino streams

    NASA Astrophysics Data System (ADS)

    Dupuy, Hélène; Bernardeau, Francis

    2015-08-01

    We propose a procedure to evaluate the impact of nonlinear couplings on the evolution of massive neutrino streams in the context of large-scale structure growth. Such streams can be described by general nonlinear conservation equations, derived from a multiple-flow perspective, which generalize the conservation equations of non-relativistic pressureless fluids. The relevance of the nonlinear couplings is quantified with the help of the eikonal approximation applied to the subhorizon limit of this system. It highlights the role played by the relative displacements of different cosmic streams and it specifies, for each flow, the spatial scales at which the growth of structure is affected by nonlinear couplings. We found that, at redshift zero, such couplings can be significant for wavenumbers as small as k=0.2 h/Mpc for most of the neutrino streams.

  15. Tensor-Krylov methods for solving large-scale systems of nonlinear equations.

    SciTech Connect

    Bader, Brett William

    2004-08-01

    This paper develops and investigates iterative tensor methods for solving large-scale systems of nonlinear equations. Direct tensor methods for nonlinear equations have performed especially well on small, dense problems where the Jacobian matrix at the solution is singular or ill-conditioned, which may occur when approaching turning points, for example. This research extends direct tensor methods to large-scale problems by developing three tensor-Krylov methods that base each iteration upon a linear model augmented with a limited second-order term, which provides information lacking in a (nearly) singular Jacobian. The advantage of the new tensor-Krylov methods over existing large-scale tensor methods is their ability to solve the local tensor model to a specified accuracy, which produces a more accurate tensor step. The performance of these methods in comparison to Newton-GMRES and tensor-GMRES is explored on three Navier-Stokes fluid flow problems. The numerical results provide evidence that tensor-Krylov methods are generally more robust and more efficient than Newton-GMRES on some important and difficult problems. In addition, the results show that the new tensor-Krylov methods and tensor-GMRES each perform better in certain situations.

  16. The compressed state Kalman filter for nonlinear state estimation: Application to large-scale reservoir monitoring

    NASA Astrophysics Data System (ADS)

    Li, Judith Yue; Kokkinaki, Amalia; Ghorbanidehno, Hojat; Darve, Eric F.; Kitanidis, Peter K.

    2015-12-01

    Reservoir monitoring aims to provide snapshots of reservoir conditions and their uncertainties to assist operation management and risk analysis. These snapshots may contain millions of state variables, e.g., pressures and saturations, which can be estimated by assimilating data in real time using the Kalman filter (KF). However, the KF has a computational cost that scales quadratically with the number of unknowns, m, due to the cost of computing and storing the covariance and Jacobian matrices, along with their products. The compressed state Kalman filter (CSKF) adapts the KF for solving large-scale monitoring problems. The CSKF uses N preselected orthogonal bases to compute an accurate rank-N approximation of the covariance that is close to the optimal spectral approximation given by SVD. The CSKF has a computational cost that scales linearly in m and uses an efficient matrix-free approach that propagates uncertainties using N + 1 forward model evaluations, where N≪m. Here we present a generalized CSKF algorithm for nonlinear state estimation problems such as CO2 monitoring. For simultaneous estimation of multiple types of state variables, the algorithm allows selecting bases that represent the variability of each state type. Through synthetic numerical experiments of CO2 monitoring, we show that the CSKF can reproduce the Kalman gain accurately even for large compression ratios (m/N). For a given computational cost, the CSKF uses a robust and flexible compression scheme that gives more reliable uncertainty estimates than the ensemble Kalman filter, which may display loss of ensemble variability leading to suboptimal uncertainty estimates.
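
    The CSKF itself is matrix-free and far more general than what is shown here; as a rough illustration of the compressed-covariance idea, the sketch below runs a Kalman filter on a toy linear-Gaussian model while representing the state covariance as U C U^T with a fixed orthonormal basis U (m x N), so only the small N x N matrix C is propagated and updated. All dimensions, operators, noise levels, and the synthetic measurement are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        m, N, p = 500, 20, 40                      # state dim, basis size, number of observations
        A = np.eye(m) + 0.01 * rng.normal(size=(m, m)) / np.sqrt(m)    # toy linear dynamics
        H = rng.normal(size=(p, m)) / np.sqrt(m)                       # observation operator
        Rv = 0.1 * np.eye(p)                                           # observation-noise covariance
        U, _ = np.linalg.qr(rng.normal(size=(m, N)))                   # preselected orthonormal bases

        x = np.zeros(m)
        C = np.eye(N)                              # reduced (N x N) covariance factor
        for step in range(10):
            # Forecast step, carried out entirely in the reduced space
            Ar = U.T @ A @ U                       # N x N projected dynamics
            x = A @ x
            C = Ar @ C @ Ar.T + 0.01 * np.eye(N)   # projected process noise (assumed)
            # Update step with a synthetic (placeholder) measurement
            y = H @ rng.normal(size=m)
            Hr = H @ U                             # p x N projected observation operator
            S = Hr @ C @ Hr.T + Rv
            K = U @ C @ Hr.T @ np.linalg.inv(S)    # m x p compressed Kalman gain
            x = x + K @ (y - H @ x)
            C = C - C @ Hr.T @ np.linalg.solve(S, Hr @ C)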

  17. Computation of Large-Scale Structure Jet Noise Sources With Weak Nonlinear Effects Using Linear Euler

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.; Hixon, Ray; Mankbadi, Reda R.

    2003-01-01

    An approximate technique is presented for the prediction of the large-scale turbulent structure sound source in a supersonic jet. A linearized Euler equations code is used to solve for the flow disturbances within and near a jet with a given mean flow. Assuming a normal mode composition for the wave-like disturbances, the linear radial profiles are used in an integration of the Navier-Stokes equations. This results in a set of ordinary differential equations representing the weakly nonlinear self-interactions of the modes along with their interaction with the mean flow. Solutions are then used to correct the amplitude of the disturbances that represent the source of large-scale turbulent structure sound in the jet.

  18. Performance of hybrid methods for large-scale unconstrained optimization as applied to models of proteins.

    PubMed

    Das, B; Meirovitch, H; Navon, I M

    2003-07-30

    Energy minimization plays an important role in structure determination and analysis of proteins, peptides, and other organic molecules; therefore, development of efficient minimization algorithms is important. Recently, Morales and Nocedal developed hybrid methods for large-scale unconstrained optimization that interlace iterations of the limited-memory BFGS method (L-BFGS) and the Hessian-free Newton method (Computat Opt Appl 2002, 21, 143-154). We test the performance of this approach as compared to those of the L-BFGS algorithm of Liu and Nocedal and the truncated Newton (TN) with automatic preconditioner of Nash, as applied to the protein bovine pancreatic trypsin inhibitor (BPTI) and a loop of the protein ribonuclease A. These systems are described by the all-atom AMBER force field with a dielectric constant epsilon = 1 and a distance-dependent dielectric function epsilon = 2r, where r is the distance between two atoms. It is shown that for the optimal parameters the hybrid approach is typically two times more efficient in terms of CPU time and function/gradient calculations than the two other methods. The advantage of the hybrid approach increases as the electrostatic interactions become stronger, that is, in going from epsilon = 2r to epsilon = 1, which leads to a more rugged and probably more nonlinear potential energy surface. However, no general rule that defines the optimal parameters has been found and their determination requires a relatively large number of trial-and-error calculations for each problem. PMID:12820130
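
    The AMBER force-field minimizations are not reproduced here; as a rough stand-in, the sketch below compares SciPy's limited-memory BFGS (L-BFGS-B) and truncated Newton (TNC) minimizers on a high-dimensional chained Rosenbrock function playing the role of a rugged energy surface. The dimension, starting point, and iteration limits are illustrative assumptions, and the hybrid interlacing scheme of Morales and Nocedal is not implemented.

        import numpy as np
        from scipy.optimize import minimize

        def energy(x):
            # Chained Rosenbrock function as a toy rugged "potential energy surface"
            return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2)

        def gradient(x):
            g = np.zeros_like(x)
            g[:-1] += -400.0 * x[:-1] * (x[1:] - x[:-1]**2) - 2.0 * (1.0 - x[:-1])
            g[1:] += 200.0 * (x[1:] - x[:-1]**2)
            return g

        x0 = np.full(1000, -1.0)
        res_lbfgs = minimize(energy, x0, jac=gradient, method="L-BFGS-B",
                             options={"maxiter": 5000})
        res_tn = minimize(energy, x0, jac=gradient, method="TNC",
                          options={"maxfun": 20000})
        print(res_lbfgs.fun, res_lbfgs.nit)
        print(res_tn.fun, res_tn.nit)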

  19. Simulation and Optimization of Large Scale Subsurface Environmental Impacts; Investigations, Remedial Design and Long Term Monitoring

    SciTech Connect

    Deschaine, L.M.

    2008-07-01

    The global impact on human health and the environment from large scale chemical / radionuclide releases is well documented. Examples are the widespread release of radionuclides from the Chernobyl nuclear reactors, the mobilization of arsenic in Bangladesh, the formation of Environmental Protection Agencies in the United States, Canada and Europe, and the like. The fiscal costs of addressing and remediating these issues on a global scale are astronomical, but then so are the fiscal and human health costs of ignoring them. An integrated methodology for optimizing the response(s) to these issues is needed. This work addresses the development of optimal policy design for large scale, complex, environmental issues. It discusses the development, capabilities, and application of a hybrid system of algorithms that optimizes the environmental response. It is important to note that 'optimization' does not refer solely to cost minimization, but to the effective and efficient balance of cost, performance, risk, management, and societal priorities along with uncertainty analysis. This tool integrates all of these elements into a single decision framework. It provides a consistent approach to designing optimal solutions that are tractable, traceable, and defensible. The system is modular and scalable. It can be applied either as individual components or in total. By developing the approach in a complex systems framework, the solution methodology represents a significant improvement over the non-optimal 'trial and error' approach to environmental response(s). Subsurface environmental processes are represented by linear and non-linear, elliptic and parabolic equations. The state equations solved using numerical methods include multi-phase flow (water, soil gas, NAPL) and multicomponent transport (radionuclides, heavy metals, volatile organics, explosives, etc.). Genetic programming is used to generate the simulators either when simulation models do not exist, or to extend the

  20. Parallel supercomputing: Advanced methods, algorithms, and software for large-scale linear and nonlinear problems

    SciTech Connect

    Carey, G.F.; Young, D.M.

    1993-12-31

    The program outlined here is directed to research on methods, algorithms, and software for distributed parallel supercomputers. Of particular interest are finite element methods and finite difference methods together with sparse iterative solution schemes for scientific and engineering computations of very large-scale systems. Both linear and nonlinear problems will be investigated. In the nonlinear case, applications with bifurcation to multiple solutions will be considered using continuation strategies. The parallelizable numerical methods of particular interest are a family of partitioning schemes embracing domain decomposition, element-by-element strategies, and multi-level techniques. The methods will be further developed incorporating parallel iterative solution algorithms with associated preconditioners in parallel computer software. The schemes will be implemented on distributed memory parallel architectures such as the CRAY MPP, Intel Paragon, the NCUBE3, and the Connection Machine. We will also consider other new architectures such as the Kendall-Square (KSQ) and proposed machines such as the TERA. The applications will focus on large-scale three-dimensional nonlinear flow and reservoir problems with strong convective transport contributions. These are legitimate grand challenge class computational fluid dynamics (CFD) problems of significant practical interest to DOE. The methods developed and algorithms will, however, be of wider interest.

  1. CMB lensing bispectrum from nonlinear growth of the large scale structure

    NASA Astrophysics Data System (ADS)

    Namikawa, Toshiya

    2016-06-01

    We discuss detectability of the nonlinear growth of the large-scale structure in the cosmic microwave background (CMB) lensing. The lensing signals involved in the CMB fluctuations have been measured from multiple CMB experiments, such as Atacama Cosmology Telescope (ACT), Planck, POLARBEAR, and South Pole Telescope (SPT). The reconstructed CMB lensing signals are useful to constrain cosmology via their angular power spectrum, while detectability and cosmological application of their bispectrum induced by the nonlinear evolution are not well studied. Extending the analytic estimate of the galaxy lensing bispectrum presented by Takada and Jain (2004) to the CMB case, we show that even near term CMB experiments such as Advanced ACT, Simons Array and SPT3G could detect the CMB lensing bispectrum induced by the nonlinear growth of the large-scale structure. In the case of the CMB Stage-IV, we find that the lensing bispectrum is detectable at ≳50 σ statistical significance. This precisely measured lensing bispectrum has rich cosmological information, and could be used to constrain cosmology, e.g., the sum of the neutrino masses and the dark-energy properties.

  2. Towards a self-consistent halo model for the nonlinear large-scale structure

    NASA Astrophysics Data System (ADS)

    Schmidt, Fabian

    2016-03-01

    The halo model is a theoretically and empirically well-motivated framework for predicting the statistics of the nonlinear matter distribution in the Universe. However, current incarnations of the halo model suffer from two major deficiencies: (i) they do not enforce the stress-energy conservation of matter; (ii) they are not guaranteed to recover exact perturbation theory results on large scales. Here, we provide a formulation of the halo model (EHM) that remedies both drawbacks in a consistent way, while attempting to maintain the predictivity of the approach. In the formulation presented here, mass and momentum conservation are guaranteed on large scales, and results of the perturbation theory and the effective field theory can, in principle, be matched to any desired order on large scales. We find that a key ingredient in the halo model power spectrum is the halo stochasticity covariance, which has been studied to a much lesser extent than other ingredients such as mass function, bias, and profiles of halos. As written here, this approach still does not describe the transition regime between perturbation theory and halo scales realistically, which is left as an open problem. We also show explicitly that, when implemented consistently, halo model predictions do not depend on any properties of low-mass halos that are smaller than the scales of interest.

  3. Classification of large-scale stellar spectra based on the non-linearly assembling learning machine

    NASA Astrophysics Data System (ADS)

    Liu, Zhongbao; Song, Lipeng; Zhao, Wenjuan

    2016-02-01

    An important limitation of traditional classification methods is that they cannot deal with large-scale classification because of their very high time complexity. To address this problem, and inspired by the idea of collaborative management, the non-linearly assembling learning machine (NALM) is proposed and applied to large-scale stellar spectral classification. In NALM, the large-scale dataset is first divided into several subsets; a traditional classifier such as the support vector machine (SVM) is then run on each subset; finally, the classification results on the subsets are assembled and the overall classification decision is obtained. In comparative experiments, we investigate the performance of NALM in stellar spectral subclass classification compared with SVM. We apply SVM and NALM respectively to classify the four subclasses of K-type spectra, three subclasses of F-type spectra and three subclasses of G-type spectra from the Sloan Digital Sky Survey (SDSS). The comparative experimental results show that the performance of NALM is much better than that of SVM in terms of both classification accuracy and computation time.
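
    A minimal sketch of the divide/train/assemble idea summarized above, assuming scikit-learn SVMs on synthetic data and a plain majority vote as the assembly rule (the voting rule and the dataset are illustrative assumptions, not the authors' NALM implementation):

```python
# Sketch of the divide / train-per-subset / assemble idea behind NALM, using
# scikit-learn SVMs and a simple majority vote as the (assumed) assembly rule.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=20000, n_features=20, n_classes=4,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# 1. Divide the large training set into several subsets.
n_subsets = 10
idx = np.array_split(np.random.default_rng(0).permutation(len(X_tr)), n_subsets)

# 2. Run a traditional classifier (SVM) on each subset.
models = [SVC(kernel="rbf").fit(X_tr[i], y_tr[i]) for i in idx]

# 3. Assemble the per-subset predictions into an overall decision (majority vote).
votes = np.stack([m.predict(X_te) for m in models])
y_pred = np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)
print("ensemble accuracy:", (y_pred == y_te).mean())
```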

  4. Destruction of large-scale magnetic field in non-linear simulations of the shear dynamo

    NASA Astrophysics Data System (ADS)

    Teed, Robert J.; Proctor, Michael R. E.

    2016-05-01

    The Sun's magnetic field exhibits coherence in space and time on much larger scales than the turbulent convection that ultimately powers the dynamo. In the past, the α-effect (mean-field) concept has been used to model the solar cycle, but recent work has cast doubt on the validity of the mean-field ansatz under solar conditions. This indicates that one should seek an alternative mechanism for generating large-scale structure. One possibility is the recently proposed `shear dynamo' mechanism, where large-scale magnetic fields are generated in the presence of a simple shear. Further investigation of this proposition is required, however, because work has thus far focused on the linear regime with a uniform shear profile. In this paper we report results of the extension of the original shear dynamo model into the non-linear regime. We find that whilst large-scale structure can initially persist into the saturated regime, in several of our simulations it is destroyed via a large increase in kinetic energy. This result casts doubt on the ability of the simple uniform shear dynamo mechanism to act as an alternative to the α-effect in solar conditions.

  5. Toward Optimal and Scalable Dimension Reduction Methods for large-scale Bayesian Inversions

    NASA Astrophysics Data System (ADS)

    Bousserez, N.; Henze, D. K.

    2015-12-01

    Many inverse problems in geophysics are solved within the Bayesian framework, in which a prior probability density function of a quantity of interest is optimally updated using newly available observations. The maximum of the posterior probability density function is estimated using a model of the physics that relates the variables to be optimized to the observations. However, in many practical situations the number of observations is much smaller than the number of variables estimated, which leads to an ill-posed problem. In practice, this means that the data are informative only in a subspace of the initial space. It is of both theoretical and practical interest to characterize this "data-informed" subspace, since it allows a simple interpretation of the inverse solution and its uncertainty, and can also dramatically reduce the computational cost of the optimization by reducing the size of the problem. In this presentation the formalism of dimension reduction in Bayesian methods will be introduced, and different optimality criteria will be discussed (e.g., minimum error variances, maximum degrees of freedom for signal). For each criterion, an optimal design for the reduced Bayesian problem will be proposed and compared with other suboptimal approaches. A significant advantage of our method is its high scalability owing to an efficient parallel implementation, making it very attractive for large-scale inverse problems. Numerical results from an Observing System Simulation Experiment (OSSE) consisting of a high spatial resolution (0.5°x0.7°) source inversion of methane over North America using observations from the Greenhouse gases Observing SATellite (GOSAT) instrument and the GEOS-Chem chemistry-transport model will illustrate the computational efficiency of our approach. Although only linear models are considered in this study, possible extensions to the non-linear case will also be discussed.
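
    The toy sketch below illustrates one common way such a data-informed subspace can be characterized in the linear-Gaussian case: eigendirections of the prior-preconditioned Gauss-Newton Hessian with eigenvalues above roughly one are the directions constrained by the data, and the sum of λ/(1+λ) gives the degrees of freedom for signal. The matrices are random placeholders, and this is not the presenters' specific reduced design.

```python
# Toy sketch of identifying the "data-informed" subspace in a linear-Gaussian
# Bayesian inversion: eigendirections of the prior-preconditioned Gauss-Newton
# Hessian with eigenvalue >~ 1 are the directions constrained by the data.
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 20                      # n unknowns, m << n observations
G = rng.standard_normal((m, n))     # linearized observation operator (illustrative)
R_inv = np.eye(m) / 0.1**2          # inverse observation-error covariance
B = np.eye(n)                       # prior covariance (identity for simplicity)

# Prior-preconditioned Hessian of the data misfit: B^{1/2} G^T R^{-1} G B^{1/2}
B_half = np.linalg.cholesky(B)
H_pp = B_half.T @ G.T @ R_inv @ G @ B_half

lam, V = np.linalg.eigh(H_pp)
lam, V = lam[::-1], V[:, ::-1]      # sort eigenvalues in decreasing order

dofs = np.sum(lam / (1.0 + lam))    # degrees of freedom for signal
k = int(np.sum(lam > 1.0))          # rank of the data-informed subspace
print(f"DOFS = {dofs:.2f}, informed directions = {k} of {n}")
# A reduced problem would optimize only the k coefficients along V[:, :k].
```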

  6. The topology of large-scale structure. II - Nonlinear evolution of Gaussian models

    NASA Technical Reports Server (NTRS)

    Melott, Adrian L.; Weinberg, David H.; Gott, J. Richard, III

    1988-01-01

    The evolution of non-Gaussian behavior in the large-scale universe from Gaussian initial conditions is studied. Topology measures developed in previous papers are applied to the smoothed initial, final, and biased matter distributions of cold dark matter, white noise, and massive neutrino simulations. When the smoothing length is approximately twice the mass correlation length or larger, the evolved models look like the initial conditions, suggesting that random phase hypotheses in cosmology can be tested with adequate data sets. When a smaller smoothing length is used, nonlinear effects are recovered, so nonlinear effects on topology can be detected in redshift surveys after smoothing at the mean intergalaxy separation. Hot dark matter models develop manifestly non-Gaussian behavior attributable to phase correlations, with a topology reminiscent of bubble or sheet distributions. Cold dark matter models remain Gaussian, and biasing does not disguise this.

  7. Galilean invariance and the consistency relation for the nonlinear squeezed bispectrum of large scale structure

    SciTech Connect

    Peloso, Marco; Pietroni, Massimo E-mail: pietroni@pd.infn.it

    2013-05-01

    We discuss the constraints imposed on the nonlinear evolution of the Large Scale Structure (LSS) of the universe by Galilean invariance, the symmetry relevant on subhorizon scales. Using Ward identities associated with the invariance, we derive fully nonlinear consistency relations between statistical correlators of the density and velocity perturbations, such as the power spectrum and the bispectrum. These relations are valid up to O(f_NL^2) corrections. We then show that most of the semi-analytic methods proposed so far to resum the perturbative expansion of the LSS dynamics fail to fulfill the constraints imposed by Galilean invariance, and are therefore susceptible to non-physical infrared effects. Finally, we identify and discuss a nonperturbative semi-analytical scheme which is manifestly Galilean invariant at any order of its expansion.

  8. Nonlinear Seismic Correlation Analysis of the JNES/NUPEC Large-Scale Piping System Tests.

    SciTech Connect

    Nie,J.; DeGrassi, G.; Hofmayer, C.; Ali, S.

    2008-06-01

    The Japan Nuclear Energy Safety Organization/Nuclear Power Engineering Corporation (JNES/NUPEC) large-scale piping test program has provided valuable new test data on high level seismic elasto-plastic behavior and failure modes for typical nuclear power plant piping systems. The component and piping system tests demonstrated the strain ratcheting behavior that is expected to occur when a pressurized pipe is subjected to cyclic seismic loading. Under a collaboration agreement between the US and Japan on seismic issues, the US Nuclear Regulatory Commission (NRC)/Brookhaven National Laboratory (BNL) performed a correlation analysis of the large-scale piping system tests using detailed state-of-the-art nonlinear finite element models. Techniques are introduced to develop material models that can closely match the test data. The shaking table motions are examined. The analytical results are assessed in terms of the overall system responses and the strain ratcheting behavior at an elbow. The paper concludes with insights about the accuracy of the analytical methods for use in performance assessments of highly nonlinear piping systems under large seismic motions.

  9. From Self-consistency to SOAR: Solving Large Scale NonlinearEigenvalue Problems

    SciTech Connect

    Bai, Zhaojun; Yang, Chao

    2006-02-01

    What is common among electronic structure calculation, design of MEMS devices, vibrational analysis of high-speed railways, and simulation of the electromagnetic field of a particle accelerator? The answer: they all require solving large scale nonlinear eigenvalue problems. In fact, these are just a handful of examples in which solving nonlinear eigenvalue problems accurately and efficiently is becoming increasingly important. Recognizing the importance of this class of problems, an invited minisymposium dedicated to nonlinear eigenvalue problems was held at the 2005 SIAM Annual Meeting. The purpose of the minisymposium was to bring together numerical analysts and application scientists to showcase some of the cutting edge results from both communities and to discuss the challenges they are still facing. The minisymposium consisted of eight talks divided into two sessions. The first three talks focused on a type of nonlinear eigenvalue problem arising from electronic structure calculations. In this type of problem, the matrix Hamiltonian H depends, in a non-trivial way, on the set of eigenvectors X to be computed. The invariant subspace spanned by these eigenvectors also minimizes a total energy function that is highly nonlinear with respect to X on a manifold defined by a set of orthonormality constraints. In other applications, the nonlinearity of the matrix eigenvalue problem is restricted to the dependency of the matrix on the eigenvalues to be computed. These problems are often called polynomial or rational eigenvalue problems. In the second session, Christian Mehl from Technical University of Berlin described numerical techniques for solving a special type of polynomial eigenvalue problem arising from vibration analysis of rail tracks excited by high-speed trains.
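
    As a small concrete example of the polynomial eigenvalue problems mentioned above, the sketch below solves a quadratic eigenvalue problem (λ²M + λC + K)x = 0 via the standard companion linearization to a generalized eigenvalue problem; the matrices are random stand-ins rather than an actual vibration model.

```python
# Small illustration of a polynomial (here quadratic) eigenvalue problem
# (lam^2 M + lam C + K) x = 0, solved by the standard companion linearization
# to a generalized eigenvalue problem A z = lam B z with z = [x, lam*x].
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(1)
n = 5
M = np.eye(n)                                      # mass
C = 0.1 * rng.standard_normal((n, n))              # damping (illustrative)
K = rng.standard_normal((n, n))
K = K @ K.T + n * np.eye(n)                        # SPD stiffness

A = np.block([[np.zeros((n, n)), np.eye(n)], [-K, -C]])
B = np.block([[np.eye(n), np.zeros((n, n))], [np.zeros((n, n)), M]])

eigvals, eigvecs = eig(A, B)
# Verify one eigenpair against the original quadratic problem.
lam = eigvals[0]
x = eigvecs[:n, 0]
residual = (lam**2 * M + lam * C + K) @ x
print("eigenvalue:", lam, " residual norm:", np.linalg.norm(residual))
```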

  10. Large-scale computational simulation for optimal design of curved piezoelectric actuator using composite material

    NASA Astrophysics Data System (ADS)

    Chung, Soon Wan; Hwang, In Seong; Kim, Seung Jo

    2004-07-01

    In this paper, the electromechanical displacements of curved piezoelectric actuators with laminated composite material are calculated using high performance computing technology, and the optimal configuration of the composite curved actuator is proposed. To predict the pre-stress in the device due to the mismatch in coefficients of thermal expansion, carbon-epoxy and glass-epoxy as well as PZT ceramic are numerically modeled by using hexahedral solid elements. Because the modeling of these thin layers causes the number of degrees of freedom to increase, large-scale structural analyses are performed on the PEGASUS supercomputer, which is composed of 400 Intel Xeon CPUs. In the first stage, the curved shape of the actuator and the internal stress in each layer are obtained by the cured curvature analysis. Subsequently, the displacement due to the piezoelectric force by an applied voltage is also calculated and the performance of the composite curved actuator is investigated by comparing the displacements according to the configuration of the actuator. To consider the finite deformation in the first stage and include the pre-stress in each layer in the second analysis stage, nonlinear finite element analyses will be carried out. The thickness and the elastic constants of the laminated composite are chosen as design factors.

  11. Missing link: A nonlinear post-Friedmann framework for small and large scales

    NASA Astrophysics Data System (ADS)

    Milillo, Irene; Bertacca, Daniele; Bruni, Marco; Maselli, Andrea

    2015-07-01

    We present a nonlinear post-Friedmann framework for structure formation, generalizing to cosmology the weak-field (post-Minkowskian) approximation, unifying the treatment of small and large scales. We consider a universe filled with a pressureless fluid and a cosmological constant Λ; the theory of gravity is Einstein's general relativity and the background is the standard flat ΛCDM cosmological model. We expand the metric and the energy-momentum tensor in powers of 1/c, keeping the matter density and peculiar velocity as exact fundamental variables. We assume the Poisson gauge, including scalar and tensor modes up to 1/c^4 order and vector modes up to 1/c^5 terms. Through a redefinition of the scalar potentials as a resummation of the metric contributions at different orders, we obtain a complete set of nonlinear equations, providing a unified framework to study structure formation from small to superhorizon scales, from the nonlinear Newtonian to the linear relativistic regime. We explicitly show the validity of our scheme in the two limits: at leading order we recover the fully nonlinear equations of Newtonian cosmology; when linearized, our equations become those for scalar and vector modes of first-order relativistic perturbation theory in the Poisson gauge. Tensor modes are nondynamical at the 1/c^4 order we consider (gravitational waves only appear at higher order): they are purely nonlinear and describe a distortion of the spatial slices determined at this order by a constraint, quadratic in the scalar and vector variables. The main results of our analysis are as follows: (a) at leading order a purely Newtonian nonlinear energy current sources a frame-dragging gravitomagnetic vector potential, and (b) in the leading-order Newtonian regime and in the linear relativistic regime, the two scalar metric potentials are the same, while the nonlinearity of general relativity makes them different. Possible applications of our formalism include the calculations

  12. Solving Large-scale Spatial Optimization Problems in Water Resources Management through Spatial Evolutionary Algorithms

    NASA Astrophysics Data System (ADS)

    Wang, J.; Cai, X.

    2007-12-01

    A water resources system can be defined as a large-scale spatial system within which a distributed ecological system interacts with the stream network and the groundwater system. In water resources management, the causative factors and hence the solutions to be developed have a significant spatial dimension. This motivates a modeling analysis of water resources management within a spatial analytical framework, where data are usually geo-referenced and in the form of a map. One of the important functions of geographic information systems (GIS) is to identify spatial patterns of environmental variables. The role of spatial patterns in water resources management has been well established in the literature, particularly regarding how to design better spatial patterns for satisfying the designated objectives of water resources management. Evolutionary algorithms (EA) have been demonstrated to be successful in solving complex optimization models for water resources management due to their flexibility in incorporating complex simulation models in the optimal search procedure. The idea of combining GIS and EA motivates the development and application of spatial evolutionary algorithms (SEA). SEA assimilates spatial information into EA, and even changes the representation and operators of EA. In an EA used for water resources management, the mathematical optimization model should be modified to account for spatial patterns; however, spatial patterns are usually implicit, and it is difficult to impose appropriate patterns on spatial data. It is also difficult to express complex spatial patterns through explicit constraints included in the EA. GIS can help identify the spatial linkages and correlations based on the spatial knowledge of the problem. These linkages are incorporated in the fitness function to express the preference for a compatible vegetation distribution. Unlike a regular GA for spatial models, the SEA employs a special hierarchical hyper-population and spatial genetic operators

  13. Large-Scale Structure Formation: From the First Non-linear Objects to Massive Galaxy Clusters

    NASA Astrophysics Data System (ADS)

    Planelles, S.; Schleicher, D. R. G.; Bykov, A. M.

    2015-05-01

    The large-scale structure of the Universe formed from initially small perturbations in the cosmic density field, leading to galaxy clusters with up to 10^15 M⊙ at the present day. Here, we review the formation of structures in the Universe, considering the first primordial galaxies and the most massive galaxy clusters as extreme cases of structure formation where fundamental processes such as gravity, turbulence, cooling and feedback are particularly relevant. The first non-linear objects in the Universe formed in dark matter halos with 10^5-10^8 M⊙ at redshifts 10-30, leading to the first stars and massive black holes. At later stages, larger scales became non-linear, leading to the formation of galaxy clusters, the most massive objects in the Universe. We describe here their formation via gravitational processes, including the self-similar scaling relations, as well as the observed deviations from such self-similarity and the related non-gravitational physics (cooling, stellar feedback, AGN). While on intermediate cluster scales the self-similar model is in good agreement with the observations, deviations from such self-similarity are apparent in the core regions, where numerical simulations do not reproduce the current observational results. The latter indicates that the interaction of different feedback processes may not be correctly accounted for in current simulations. Both in the most massive clusters of galaxies as well as during the formation of the first objects in the Universe, turbulent structures and shock waves appear to be common, suggesting them to be ubiquitous in the non-linear regime.

  14. A Limited-Memory BFGS Algorithm Based on a Trust-Region Quadratic Model for Large-Scale Nonlinear Equations

    PubMed Central

    Li, Yong; Yuan, Gonglin; Wei, Zengxin

    2015-01-01

    In this paper, a trust-region algorithm is proposed for large-scale nonlinear equations, where the limited-memory BFGS (L-M-BFGS) update matrix is used in the trust-region subproblem to improve the effectiveness of the algorithm for large-scale problems. The global convergence of the presented method is established under suitable conditions. The numerical results of the test problems show that the method is competitive with the norm method. PMID:25950725
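
    The paper's L-M-BFGS trust-region algorithm itself is not reproduced here, but the problem class is easy to set up: the sketch below poses a standard test system F(x) = 0 and attacks it with two off-the-shelf SciPy baselines, L-BFGS-B on the merit function ½||F(x)||² and a trust-region least-squares solver. Finite-difference derivatives and a modest dimension are used only to keep the example short; a genuinely large-scale run would supply analytic or sparse Jacobians.

```python
# Standard SciPy baselines (not the paper's L-M-BFGS trust-region method) for a
# nonlinear system F(x) = 0, posed as minimizing 0.5*||F(x)||^2.
import numpy as np
from scipy.optimize import minimize, least_squares

n = 200

def F(x):
    # Broyden tridiagonal test system: (3 - 2x_i)x_i + 1 - x_{i-1} - 2x_{i+1} = 0
    f = (3.0 - 2.0 * x) * x + 1.0
    f[1:] -= x[:-1]
    f[:-1] -= 2.0 * x[1:]
    return f

def merit(x):
    f = F(x)
    return 0.5 * f @ f

x0 = -np.ones(n)

res_lbfgs = minimize(merit, x0, method="L-BFGS-B")      # limited-memory BFGS
res_tr = least_squares(F, x0, method="trf")             # trust-region reflective

print("L-BFGS-B merit:", res_lbfgs.fun,
      " TRF residual norm:", np.linalg.norm(res_tr.fun))
```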

  15. Optimization of large-scale heterogeneous system-of-systems models.

    SciTech Connect

    Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane; Lee, Herbert K. H.; Hart, William Eugene; Gray, Genetha Anne; Woodruff, David L.

    2012-01-01

    Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.

  16. Mathematical methods in material science and large scale optimization workshops: Final report, June 1, 1995-November 30, 1996

    SciTech Connect

    Friedman, A.

    1996-12-01

    The summer program in Large Scale Optimization concentrated largely on process engineering, aerospace engineering, inverse problems and optimal design, and molecular structure and protein folding. The program brought together application people, optimizers, and mathematicians with an interest in learning about these topics. Three proceedings volumes are being prepared. The year in Materials Sciences dealt with disordered media and percolation, phase transformations, composite materials, and microstructure; topological and geometric methods as well as the statistical mechanics approach to polymers (including Monte Carlo simulations for polymers); and miscellaneous other topics such as nonlinear optical materials, particulate flow, and thin films. All these activities saw strong interaction among material scientists, mathematicians, physicists, and engineers. About eight proceedings volumes are being prepared.

  17. Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems

    SciTech Connect

    Ghattas, Omar

    2013-10-15

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.

  18. Using Agent Base Models to Optimize Large Scale Network for Large System Inventories

    NASA Technical Reports Server (NTRS)

    Shameldin, Ramez Ahmed; Bowling, Shannon R.

    2010-01-01

    The aim of this paper is to use Agent-Based Models (ABM) to optimize large-scale network handling capabilities for large system inventories and to implement strategies for the purpose of reducing capital expenses. The models used in this paper employ computational algorithms and procedural implementations developed in MATLAB to simulate agent-based models in a principal programming language and mathematical theory, using clusters; these clusters provide high-performance computation to run the program in parallel. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.

  19. Integration of Large-Scale Optimization and Game Theory for Sustainable Water Quality Management

    NASA Astrophysics Data System (ADS)

    Tsao, J.; Li, J.; Chou, C.; Tung, C.

    2009-12-01

    Sustainable water quality management requires total mass control of pollutant discharge, based both on the principle of not exceeding the assimilative capacity of a river and on equity among generations. The stream assimilative capacity is the carrying capacity of a river for the maximum waste load that does not violate the water quality standard, and the spirit of total mass control is to optimize the waste load allocation among subregions. For the goal of sustainable watershed development, this study uses large-scale optimization theory to optimize profit and to find the marginal values of loadings as a reference for a fair price; the best way to reach equilibrium through water quality trading for the whole watershed is then determined. Game theory, in turn, plays an important role in maximizing both individual and overall profits. This study proves that the water quality trading market is viable in some situations and allows all participants to obtain a better outcome.

  20. Large scale test simulations using the Virtual Environment for Test Optimization (VETO)

    SciTech Connect

    Klenke, S.E.; Heffelfinger, S.R.; Bell, H.J.; Shierling, C.L.

    1997-10-01

    The Virtual Environment for Test Optimization (VETO) is a set of simulation tools under development at Sandia to enable test engineers to do computer simulations of tests. The tool set utilizes analysis codes and test information to optimize design parameters and to provide an accurate model of the test environment, which aids in the maximization of test performance, training, and safety. Previous VETO effort has included the development of two structural dynamics simulation modules that provide design and optimization tools for modal and vibration testing. These modules have allowed test engineers to model and simulate complex laboratory testing, to evaluate dynamic response behavior, and to investigate system testability. Further development of the VETO tool set will address the accurate modeling of large scale field test environments at Sandia. These field test environments provide weapon system certification capabilities and have different simulation requirements than those of laboratory testing.

  1. A new approach for optimal VAR sources planning in large scale electric power systems

    SciTech Connect

    Yingtung Hsiao; Chunchang Liu; Yuanlin Chen . Dept. of Electrical Engineering); Hsiodong Chiang )

    1993-08-01

    This paper presents a new approach for contingency-constrained optimal reactive volt-ampere (VAR) source planning in large-scale power systems. Features distinguishing the proposed approach from many of the existing methods include that it allows a more realistic problem formulation and that it can find the (global) optimal solution. The new problem formulation takes into consideration practical aspects of VAR sources: the load constraints and operational constraints at different load levels. The solution methodology, based on simulated annealing, determines (1) the locations at which to install VAR sources; (2) the types and sizes of VAR sources to be installed; and (3) the settings of VAR sources at different loading conditions. In order to speed up the solution algorithm, this paper makes a slight modification of the fast decoupled load flow and incorporates it into the solution algorithm. The method is suitable for large-scale power systems and has been tested on several power systems with promising results. Simulation results on the IEEE 30-bus system and the Tai-power 358-bus system are presented.
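
    A toy version of the simulated-annealing search described above is sketched below; the placement/sizing encoding, the cooling schedule, and especially the cost function are invented placeholders (no load-flow model is evaluated), so it shows only the shape of the algorithm, not the paper's method.

```python
# Toy simulated-annealing loop for VAR-source placement/sizing decisions.
# The cost function below is an invented placeholder, not a load-flow model.
import numpy as np

rng = np.random.default_rng(0)
n_buses = 30
sizes = np.array([0.0, 2.0, 5.0, 10.0])      # candidate VAR sizes (MVAr), illustrative

def cost(plan):
    """Placeholder objective: installation cost minus a fictitious benefit term."""
    installed = sizes[plan]
    return installed.sum() - 3.0 * np.sqrt(installed).sum()

plan = rng.integers(0, len(sizes), n_buses)   # initial plan: one size index per bus
best, best_cost = plan.copy(), cost(plan)
T = 10.0
for it in range(5000):
    cand = plan.copy()
    cand[rng.integers(n_buses)] = rng.integers(len(sizes))   # perturb one bus
    delta = cost(cand) - cost(plan)
    if delta < 0 or rng.random() < np.exp(-delta / T):       # Metropolis acceptance
        plan = cand
        if cost(plan) < best_cost:
            best, best_cost = plan.copy(), cost(plan)
    T *= 0.999                                               # geometric cooling
print("best cost:", best_cost, " buses with VAR installed:", int((sizes[best] > 0).sum()))
```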

  2. A modular approach to large-scale design optimization of aerospace systems

    NASA Astrophysics Data System (ADS)

    Hwang, John T.

    Gradient-based optimization and the adjoint method form a synergistic combination that enables the efficient solution of large-scale optimization problems. Though the gradient-based approach struggles with non-smooth or multi-modal problems, the capability to efficiently optimize up to tens of thousands of design variables provides a valuable design tool for exploring complex tradeoffs and finding unintuitive designs. However, the widespread adoption of gradient-based optimization is limited by the implementation challenges for computing derivatives efficiently and accurately, particularly in multidisciplinary and shape design problems. This thesis addresses these difficulties in two ways. First, to deal with the heterogeneity and integration challenges of multidisciplinary problems, this thesis presents a computational modeling framework that solves multidisciplinary systems and computes their derivatives in a semi-automated fashion. This framework is built upon a new mathematical formulation developed in this thesis that expresses any computational model as a system of algebraic equations and unifies all methods for computing derivatives using a single equation. The framework is applied to two engineering problems: the optimization of a nanosatellite with 7 disciplines and over 25,000 design variables; and simultaneous allocation and mission optimization for commercial aircraft involving 330 design variables, 12 of which are integer variables handled using the branch-and-bound method. In both cases, the framework makes large-scale optimization possible by reducing the implementation effort and code complexity. The second half of this thesis presents a differentiable parametrization of aircraft geometries and structures for high-fidelity shape optimization. Existing geometry parametrizations are not differentiable, or they are limited in the types of shape changes they allow. This is addressed by a novel parametrization that smoothly interpolates aircraft

  3. Newton iterative methods for large scale nonlinear systems. Progress report, 1992--1993

    SciTech Connect

    Walker, H.F.; Turner, K.

    1993-06-01

    Objective is to develop robust, efficient Newton iterative methods for general large scale problems well suited for discretizations of partial differential equations, integral equations, and other continuous problems. A concomitant objective is to develop improved iterative linear algebra methods. We first outline research on Newton iterative methods and then review work on iterative linear algebra methods. (DLC)
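
    A generic example of such a Newton iterative method, assuming SciPy's Jacobian-free Newton-Krylov solver on a small discretized nonlinear system (the test equation is an illustrative choice, not taken from the report):

```python
# Example of a Newton iterative (Jacobian-free Newton-Krylov) solve with SciPy
# on a discretized nonlinear reaction-diffusion problem: -u'' + u^3 = 1, u(0)=u(1)=0.
import numpy as np
from scipy.optimize import newton_krylov

n = 64
h = 1.0 / (n + 1)

def residual(u):
    up = np.concatenate(([0.0], u, [0.0]))    # pad with Dirichlet boundary values
    return -(up[2:] - 2.0 * up[1:-1] + up[:-2]) / h**2 + u**3 - 1.0

u0 = np.zeros(n)
sol = newton_krylov(residual, u0, method="lgmres", f_tol=1e-8)
print("max residual:", np.abs(residual(sol)).max(), " max u:", sol.max())
```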

  4. Fast Bound Methods for Large Scale Simulation with Application for Engineering Optimization

    NASA Technical Reports Server (NTRS)

    Patera, Anthony T.; Peraire, Jaime; Zang, Thomas A. (Technical Monitor)

    2002-01-01

    In this work, we have focused on fast bound methods for large scale simulation with application to engineering optimization. The emphasis is on the development of techniques that provide both very fast turnaround and a certificate of fidelity; these attributes ensure that the results are indeed relevant to - and trustworthy within - the engineering context. The bound methodology which underlies this work has many different instantiations: finite element approximation; iterative solution techniques; and reduced-basis (parameter) approximation. In this grant we have, in fact, treated all three, but most of our effort has been concentrated on the first and third. We describe these below briefly - but with a pointer to an Appendix which describes, in some detail, the current "state of the art."

  5. Asymptotically Optimal Transmission Policies for Large-Scale Low-Power Wireless Sensor Networks

    SciTech Connect

    I. Ch. Paschalidis; W. Lai; D. Starobinski

    2007-02-01

    We consider wireless sensor networks with multiple gateways and multiple classes of traffic carrying data generated by different sensory inputs. The objective is to devise joint routing, power control and transmission scheduling policies in order to gather data in the most efficient manner while respecting the needs of different sensing tasks (fairness). We formulate the problem as maximizing the utility of transmissions subject to explicit fairness constraints and propose an efficient decomposition algorithm drawing upon large-scale decomposition ideas in mathematical programming. We show that our algorithm terminates in a finite number of iterations and produces a policy that is asymptotically optimal at low transmission power levels. Furthermore, we establish that the utility maximization problem we consider can, in principle, be solved in polynomial time. Numerical results show that our policy is near-optimal, even at high power levels, and far superior to the best known heuristics at low power levels. We also demonstrate how to adapt our algorithm to accommodate energy constraints and node failures. The approach we introduce can efficiently determine near-optimal transmission policies for dramatically larger problem instances than an alternative enumeration approach.

  6. Optimal short-term scheduling for a large-scale cascaded hydro system

    SciTech Connect

    Piekutowski, M.R.; Litwinowicz, T. ); Frowd, R.J. )

    1994-05-01

    This paper describes a short-term hydro generation optimization program that has been developed by the Hydro Electric Commission (HEC) to determine optimal generation schedules and to investigate export and import capabilities of the Tasmanian system under a proposed DC interconnection with mainland Australia. The optimal hydro scheduling problem is formulated as a large-scale linear programming problem and is solved using a commercially available linear programming package. The selected objective function requires minimization of the value of energy used by turbines and spilled during the study period. Alternative formulations of the objective function are also discussed. The system model incorporates the following elements: hydro station (turbine efficiency, turbine flow limits, penstock head losses, tailrace elevation and generator losses), hydro system (reservoirs and hydro network: active volume, spillway flow, flow between reservoirs and travel time), and other models including thermal plant and DC link. A valuable by-product of the linear programming solution is system and unit incremental costs which may be used for interchange scheduling and short-term generation dispatch.
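
    The sketch below sets up a deliberately tiny single-reservoir, three-period analogue of such a linear programming formulation, minimizing the value of water released and spilled subject to storage balance, demand, and limits; all numbers are invented, and the HEC model's detailed station, network, and DC-link components are omitted.

```python
# Tiny single-reservoir, three-period LP in the spirit of the scheduling model:
# minimize the value of water released and spilled subject to storage balance,
# demand, and limits. All data are invented for illustration.
import numpy as np
from scipy.optimize import linprog

T = 3
inflow = np.array([5.0, 2.0, 4.0])      # inflow per period
demand = np.array([3.0, 3.0, 3.0])      # demand, expressed in turbine-flow units
water_value, spill_value = 1.0, 1.0     # value of water per unit turbined / spilled
s0, s_max, q_max = 10.0, 20.0, 6.0      # initial storage, storage and turbine limits

# Decision vector x = [q_1..q_T (turbine flow), w_1..w_T (spill), s_1..s_T (storage)]
c = np.concatenate([water_value * np.ones(T), spill_value * np.ones(T), np.zeros(T)])

# Storage balance each period: s_t - s_{t-1} + q_t + w_t = inflow_t
A_eq = np.zeros((T, 3 * T)); b_eq = np.zeros(T)
for t in range(T):
    A_eq[t, t] = 1.0          # q_t
    A_eq[t, T + t] = 1.0      # w_t
    A_eq[t, 2 * T + t] = 1.0  # s_t (end-of-period storage)
    if t > 0:
        A_eq[t, 2 * T + t - 1] = -1.0
    b_eq[t] = inflow[t] + (s0 if t == 0 else 0.0)

# Meet demand each period: q_t >= demand_t  (written as -q_t <= -demand_t)
A_ub = np.zeros((T, 3 * T))
for t in range(T):
    A_ub[t, t] = -1.0

bounds = [(0, q_max)] * T + [(0, None)] * T + [(0, s_max)] * T
res = linprog(c, A_ub=A_ub, b_ub=-demand, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
print("status:", res.message)
print("turbine flow:", res.x[:T], " spill:", res.x[T:2*T], " storage:", res.x[2*T:])
```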

  7. A computer package for optimal multi-objective VAR planning in large scale power systems

    SciTech Connect

    Chiang, H.D. . School of Electrical Engineering); Liu, C.C.; Chen, Y.L. . Dept. of Electrical Engineering); Hsiao, Y.T.

    1994-05-01

    This paper presents a simulated-annealing-based computer package for multi-objective VAR planning in large scale power systems - SAMVAR. This computer package has three distinct features. First, the optimal VAR planning is reformulated as a constrained, multi-objective, non-differentiable optimization problem. The new formulation considers four different objective functions related to system investment, system operational efficiency, system security and system service quality. The new formulation also takes into consideration load, operation and contingency constraints. Second, it allows both the objective functions and the equality and inequality constraints to be non-differentiable, making the problem formulation more realistic. Third, the package employs a two-stage solution algorithm based on an extended simulated annealing technique and the ε-constraint method. The first stage of the solution algorithm uses an extended simulated annealing technique to find a global, non-inferior solution. The results obtained from the first stage provide a basis for planners to prioritize the objective functions such that a primary objective function is chosen and trade-off tolerances for the other objective functions are set. The primary objective function and the trade-off tolerances are then used to transform the constrained multi-objective optimization problem into a single-objective optimization problem with more constraints by employing the ε-constraint method. The second stage uses the simulated annealing technique to find the global optimal solution. A salient feature of SAMVAR is that it allows planners to find an acceptable, global non-inferior solution for the VAR problem. Simulation results indicate that SAMVAR has the ability to handle the multi-objective VAR planning problem and meet the planner's requirements.
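
    The ε-constraint transformation used in the second stage can be illustrated on toy smooth objectives: keep one primary objective and turn the others into inequality constraints with planner-chosen tolerances. The example below uses SciPy's SLSQP solver purely for illustration; SAMVAR itself allows non-differentiable objectives and uses simulated annealing instead.

```python
# Minimal illustration of the epsilon-constraint transformation: minimize a
# primary objective while bounding a secondary one by a planner-chosen tolerance.
import numpy as np
from scipy.optimize import minimize

def f_invest(x):    # primary objective (e.g., investment cost), toy quadratic
    return np.sum(x**2)

def f_losses(x):    # secondary objective (e.g., operational losses), toy quadratic
    return np.sum((x - 1.0) ** 2)

eps_losses = 2.0    # tolerance set by the planner for the secondary objective

cons = [{"type": "ineq", "fun": lambda x: eps_losses - f_losses(x)}]
res = minimize(f_invest, x0=np.full(4, 0.5), constraints=cons, method="SLSQP")
print("x* =", res.x, " invest =", f_invest(res.x), " losses =", f_losses(res.x))
```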

  8. HGO-based decentralised indirect adaptive fuzzy control for a class of large-scale nonlinear systems

    NASA Astrophysics Data System (ADS)

    Huang, Yi-Shao; Chen, Xiaoxin; Zhou, Shao-Wu; Yu, Ling-Li; Wang, Zheng-Wu

    2012-06-01

    In this article, a novel high gain observer (HGO)-based decentralised indirect adaptive fuzzy controller is developed for a class of uncertain affine large-scale nonlinear systems. By the combination of fuzzy logic systems and an HGO, the state variables are not required to be measurable. The proposed feedback and adaptation mechanisms guarantee that each subsystem is able to adaptively compensate for interconnections and disturbances with unknown bounds. It is ascertained using a singular perturbation method that all the signals of the closed-loop large-scale system remain uniformly ultimately bounded and the tracking errors converge to tunable neighbourhoods of the origin. Simulation results of correlated double inverted pendulums substantiate the effectiveness of the proposed controller.

  9. Optimization and Scalability of an Large-scale Earthquake Simulation Application

    NASA Astrophysics Data System (ADS)

    Cui, Y.; Olsen, K. B.; Hu, Y.; Day, S.; Dalguer, L. A.; Minster, B.; Moore, R.; Zhu, J.; Maechling, P.; Jordan, T.

    2006-12-01

    In 2004, the Southern California Earthquake Center (SCEC) initiated a major large-scale earthquake simulation, called TeraShake. TeraShake propagated seismic waves across a domain of 600 km by 300 km by 80 km at 200 meter resolution and 1.8 billion grid points, making it one of the largest and most detailed earthquake simulations of the southern San Andreas fault. The TeraShake 1 code is based on a 4th order FD Anelastic Wave Propagation Model (AWM), developed by K. Olsen, using a kinematic source description. The enhanced TeraShake 2 then added a new physics-based dynamic source component, extending the capability to very-large-scale earthquake simulations. A high 100 m resolution was used to generate a physically realistic earthquake source description for the San Andreas fault. The executions of very-large-scale TeraShake 2 simulations with the high-resolution dynamic source used up to 1024 processors on the TeraGrid, adding more than 60 TB of simulation output to the 168 TB SCEC digital library, managed by the SDSC Storage Resource Broker (SRB) at SDSC. The execution of these large simulations requires high levels of expertise and resource coordination. We examine the lessons learned in enabling the execution of the TeraShake application. In particular, we look at the challenges posed by single-processor optimization of the application performance, optimization of the I/O handling, optimization of the run initialization, and the execution of the data-intensive simulations. The TeraShake code was optimized to improve scalability to 2048 processors, with a parallel efficiency of 84%. Our latest TeraShake simulation sustains 1 Teraflop/s performance, completing a simulation in less than 9 hours on the SDSC Datastar. This is more than 10 times faster than previous TeraShake simulations. Some of the TeraShake production simulations were carried out using grid computing resources, including the execution on NCSA TeraGrid resources, and run-time archiving outputs onto SDSC

  10. a Stochastic Approach to Multiobjective Optimization of Large-Scale Water Reservoir Networks

    NASA Astrophysics Data System (ADS)

    Bottacin-Busolin, A.; Worman, A. L.

    2013-12-01

    A main challenge for the planning and management of water resources is the development of multiobjective strategies for operation of large-scale water reservoir networks. The optimal sequence of water releases from multiple reservoirs depends on the stochastic variability of correlated hydrologic inflows and on various processes that affect water demand and energy prices. Although several methods have been suggested, large-scale optimization problems arising in water resources management are still plagued by the high dimensional state space and by the stochastic nature of the hydrologic inflows. In this work, the optimization of reservoir operation is approached using approximate dynamic programming (ADP) with policy iteration and function approximators. The method is based on an off-line learning process in which operating policies are evaluated for a number of stochastic inflow scenarios, and the resulting value functions are used to design new, improved policies until convergence is attained. A case study is presented of a multi-reservoir system in the Dalälven River, Sweden, which includes 13 interconnected reservoirs and 36 power stations. Depending on the late spring and summer peak discharges, the lowlands adjacent to Dalälven can often be flooded during the summer period, and the presence of stagnating floodwater during the hottest months of the year is the cause of a large proliferation of mosquitos, which is a major problem for the people living in the surroundings. Chemical pesticides are currently being used as a preventive countermeasure, which do not provide an effective solution to the problem and have adverse environmental impacts. In this study, ADP was used to analyze the feasibility of alternative operating policies for reducing the flood risk at a reasonable economic cost for the hydropower companies. To this end, mid-term operating policies were derived by combining flood risk reduction with hydropower production objectives. The performance

  11. SfM with MRFs: discrete-continuous optimization for large-scale structure from motion.

    PubMed

    Crandall, David J; Owens, Andrew; Snavely, Noah; Huttenlocher, Daniel P

    2013-12-01

    Recent work in structure from motion (SfM) has built 3D models from large collections of images downloaded from the Internet. Many approaches to this problem use incremental algorithms that solve progressively larger bundle adjustment problems. These incremental techniques scale poorly as the image collection grows, and can suffer from drift or local minima. We present an alternative framework for SfM based on finding a coarse initial solution using hybrid discrete-continuous optimization and then improving that solution using bundle adjustment. The initial optimization step uses a discrete Markov random field (MRF) formulation, coupled with a continuous Levenberg-Marquardt refinement. The formulation naturally incorporates various sources of information about both the cameras and points, including noisy geotags and vanishing point (VP) estimates. We test our method on several large-scale photo collections, including one with measured camera positions, and show that it produces models that are similar to or better than those produced by incremental bundle adjustment, but more robustly and in a fraction of the time. PMID:24136425

  12. SfM with MRFs: Discrete-Continuous Optimization for Large-Scale Structure from Motion.

    PubMed

    Crandall, David J; Owens, Andrew; Snavely, Noah; Huttenlocher, Daniel P

    2012-10-01

    Recent work in structure from motion (SfM) has built 3D models from large collections of images downloaded from the Internet. Many approaches to this problem use incremental algorithms that solve progressively larger bundle adjustment problems. These incremental techniques scale poorly as the image collection grows, and can suffer from drift or local minima. We present an alternative framework for SfM based on finding a coarse initial solution using hybrid discrete-continuous optimization, and then improving that solution using bundle adjustment. The initial optimization step uses a discrete Markov random field (MRF) formulation, coupled with a continuous Levenberg-Marquardt refinement. The formulation naturally incorporates various sources of information about both the cameras and points, including noisy geotags and vanishing point estimates. We test our method on several large-scale photo collections, including one with measured camera positions, and show that it produces models that are similar to or better than those produced by incremental bundle adjustment, but more robustly and in a fraction of the time. PMID:23045369

  13. Optimization and large scale computation of an entropy-based moment closure

    SciTech Connect

    Hauck, Cory D.; Hill, Judith C.; Garrett, C. Kristopher

    2015-09-10

    We present computational advances and results in the implementation of an entropy-based moment closure, MN, in the context of linear kinetic equations, with an emphasis on heterogeneous and large-scale computing platforms. Entropy-based closures are known in several cases to yield more accurate results than closures based on standard spectral approximations, such as PN, but the computational cost is generally much higher and often prohibitive. Several optimizations are introduced to improve the performance of entropy-based algorithms over previous implementations. These optimizations include the use of GPU acceleration and the exploitation of the mathematical properties of spherical harmonics, which are used as test functions in the moment formulation. To test the emerging high-performance computing paradigm of communication bound simulations, we present timing results at the largest computational scales currently available. Lastly, these results show, in particular, load balancing issues in scaling the MN algorithm that do not appear for the PN algorithm. We also observe that in weak scaling tests, the ratio in time to solution of MN to PN decreases.

  14. Optimization and large scale computation of an entropy-based moment closure

    DOE PAGESBeta

    Hauck, Cory D.; Hill, Judith C.; Garrett, C. Kristopher

    2015-09-10

    We present computational advances and results in the implementation of an entropy-based moment closure, MN, in the context of linear kinetic equations, with an emphasis on heterogeneous and large-scale computing platforms. Entropy-based closures are known in several cases to yield more accurate results than closures based on standard spectral approximations, such as PN, but the computational cost is generally much higher and often prohibitive. Several optimizations are introduced to improve the performance of entropy-based algorithms over previous implementations. These optimizations include the use of GPU acceleration and the exploitation of the mathematical properties of spherical harmonics, which are used as test functions in the moment formulation. To test the emerging high-performance computing paradigm of communication bound simulations, we present timing results at the largest computational scales currently available. Lastly, these results show, in particular, load balancing issues in scaling the MN algorithm that do not appear for the PN algorithm. We also observe that in weak scaling tests, the ratio in time to solution of MN to PN decreases.

  15. Optimization and large scale computation of an entropy-based moment closure

    NASA Astrophysics Data System (ADS)

    Kristopher Garrett, C.; Hauck, Cory; Hill, Judith

    2015-12-01

    We present computational advances and results in the implementation of an entropy-based moment closure, MN, in the context of linear kinetic equations, with an emphasis on heterogeneous and large-scale computing platforms. Entropy-based closures are known in several cases to yield more accurate results than closures based on standard spectral approximations, such as PN, but the computational cost is generally much higher and often prohibitive. Several optimizations are introduced to improve the performance of entropy-based algorithms over previous implementations. These optimizations include the use of GPU acceleration and the exploitation of the mathematical properties of spherical harmonics, which are used as test functions in the moment formulation. To test the emerging high-performance computing paradigm of communication bound simulations, we present timing results at the largest computational scales currently available. These results show, in particular, load balancing issues in scaling the MN algorithm that do not appear for the PN algorithm. We also observe that in weak scaling tests, the ratio in time to solution of MN to PN decreases.

  16. Algorithmic enhancements and experience with a large scale SQP code for general nonlinear programming problems

    SciTech Connect

    Boggs, P.; Tolle, J.; Kearsley, A.

    1994-12-31

    We have developed a large scale sequential quadratic programming (SQP) code based on an interior-point method for solving general (convex or nonconvex) quadratic programs (QP). We often halt this QP solver prematurely by employing a trust-region strategy. This procedure typically reduces the overall cost of the code. In this talk we briefly review the algorithm and some of its theoretical justification and then discuss recent enhancements including automatic procedures for both increasing and decreasing the parameter in the merit function, a regularization procedure for dealing with linearly dependent active constraint gradients, and a method for modifying the linearized equality constraints. Some numerical results on a significant set of "real-world" problems will be presented.

  17. Assessment of economically optimal water management and geospatial potential for large-scale water storage

    NASA Astrophysics Data System (ADS)

    Weerasinghe, Harshi; Schneider, Uwe A.

    2010-05-01

    Water is an essential but limited and vulnerable resource for all socio-economic development and for maintaining healthy ecosystems. Water scarcity, accelerated by population expansion, improved living standards, and rapid growth in economic activities, has profound environmental and social implications. These include severe environmental degradation, declining groundwater levels, and increasing problems of water conflicts. Water scarcity is predicted to be one of the key factors limiting development in the 21st century. Climate scientists have projected spatial and temporal changes in precipitation and changes in the probability of intense floods and droughts in the future. As scarcity of accessible and usable water increases, demand for efficient water management and adaptation strategies increases as well. Addressing water scarcity requires an intersectoral and multidisciplinary approach to managing water resources. This would in turn keep social welfare and economic benefit in optimal balance without compromising the sustainability of ecosystems. This paper presents a geographically explicit method to assess the potential for water storage with reservoirs and a dynamic model that identifies the dimensions and material requirements under an economically optimal water management plan. The methodology is applied to the Elbe and Nile river basins. Input data for geospatial analysis at the watershed level are taken from global data repositories and include data on elevation, rainfall, soil texture, soil depth, drainage, land use and land cover, which are then downscaled to 1 km spatial resolution. Runoff potential for different combinations of land use and hydraulic soil groups and for mean annual precipitation levels is derived by the SCS-CN method. Using the overlay and decision tree algorithms
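
    For reference, the SCS-CN relation mentioned above for runoff potential is, in metric units, S = 25400/CN - 254 and Q = (P - I_a)^2 / (P - I_a + S) with I_a ≈ 0.2 S; a small sketch with illustrative curve numbers and storm depth follows.

```python
# The standard SCS curve-number (SCS-CN) runoff relation referenced above,
# in millimetres; the CN and rainfall values below are only illustrative.
def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
    """Direct runoff Q for a storm depth P and curve number CN (metric units)."""
    s = 25400.0 / cn - 254.0          # potential maximum retention S [mm]
    ia = ia_ratio * s                 # initial abstraction
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

for cn in (60, 75, 90):
    print(f"CN={cn}: runoff from a 50 mm storm = {scs_cn_runoff(50.0, cn):.1f} mm")
```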

  18. Optimization of Cluster Heads for Energy Efficiency in Large-Scale Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Gu, Yi; Wu, Qishi

    Many complex sensor network applications require deploying a large number of inexpensive and small sensors in a vast geographical region to achieve quality through quantity. Hierarchical clustering is generally considered as an efficient and scalable way to facilitate the management and operation of such large-scale networks and minimize the total energy consumption for prolonged lifetime. Judicious selection of cluster heads for data integration and communication is critical to the success of applications based on hierarchical sensor networks organized as layered clusters. We investigate the problem of selecting nodes in a pre-deployed sensor network to be the cluster heads to minimize the total energy needed for data gathering. We rigorously derive an analytical formula to optimize the number of cluster heads in sensor networks under uniform node distribution, and propose a Distance-based Crowdedness Clustering algorithm to determine the cluster heads in sensor networks under general node distribution. The results from an extensive set of experiments on a large number of simulated sensor networks illustrate the performance superiority of the proposed solution over the clustering schemes based on k-means algorithm.
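
    The proposed distance-based crowdedness algorithm and the analytical formula for the optimal number of cluster heads are given in the paper and not reproduced here; the sketch below only illustrates the k-means-based baseline mentioned above, picking as cluster head the node nearest each centroid on a randomly deployed field (field size and head count are assumed values).

```python
# Sketch of the k-means-based baseline mentioned above: cluster the deployed
# sensor nodes and pick, as cluster head, the node nearest each centroid.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
nodes = rng.uniform(0.0, 100.0, size=(500, 2))   # uniformly deployed sensor field
k = 12                                           # assumed number of cluster heads

km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(nodes)
heads = np.array([
    np.argmin(np.linalg.norm(nodes - c, axis=1)) for c in km.cluster_centers_
])

# Rough energy proxy: total squared node-to-head distance within clusters.
d2 = np.linalg.norm(nodes - nodes[heads][km.labels_], axis=1) ** 2
print("cluster heads (node indices):", heads, " total d^2:", d2.sum())
```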

  19. Optimizing Cluster Heads for Energy Efficiency in Large-Scale Heterogeneous Wireless Sensor Networks

    DOE PAGESBeta

    Gu, Yi; Wu, Qishi; Rao, Nageswara S. V.

    2010-01-01

    Many complex sensor network applications require deploying a large number of inexpensive and small sensors in a vast geographical region to achieve quality through quantity. Hierarchical clustering is generally considered as an efficient and scalable way to facilitate the management and operation of such large-scale networks and minimize the total energy consumption for prolonged lifetime. Judicious selection of cluster heads for data integration and communication is critical to the success of applications based on hierarchical sensor networks organized as layered clusters. We investigate the problem of selecting sensor nodes in a predeployed sensor network to be the cluster heads to minimize the total energy needed for data gathering. We rigorously derive an analytical formula to optimize the number of cluster heads in sensor networks under uniform node distribution, and propose a Distance-based Crowdedness Clustering algorithm to determine the cluster heads in sensor networks under general node distribution. The results from an extensive set of experiments on a large number of simulated sensor networks illustrate the performance superiority of the proposed solution over the clustering schemes based on the k-means algorithm.

  20. The optimization of large-scale density gradient isolation of human islets.

    PubMed

    Robertson, G S; Chadwick, D R; Contractor, H; James, R F; London, N J

    1993-01-01

    The use of the COBE 2991 cell processor (COBE Laboratories, Colorado) for large-scale islet purification using discontinuous density gradients has been widely adopted. It minimizes many of the problems, such as wall effects, normally encountered during centrifugation, and avoids the vortexing at interfaces that occurs during acceleration and deceleration by allowing the gradient to be formed and the islet-containing interface to be collected while continuing to spin. We have produced cross-sectional profiles of the 2991 bag during spinning which allow the area of interfaces in such step gradients to be calculated. This allows the volumes of the gradient media layers loaded on the machine to be adjusted in order to maximize the area of the gradient interfaces. However, even using the maximal areas possible (144.5 cm²), clogging of tissue at such interfaces limits the volume of digest which can be separated on one gradient to 15 ml. We have shown that a linear continuous density gradient can be produced within the 2991 bag that allows as much as 40 ml of digest to be successfully purified. Such a system combines the intrinsic advantages of the 2991 with those of continuous density gradients and provides the optimal method for density-dependent islet purification. PMID:8219265

  1. Bayesian reconstruction of the cosmological large-scale structure: methodology, inverse algorithms and numerical optimization

    NASA Astrophysics Data System (ADS)

    Kitaura, F. S.; Enßlin, T. A.

    2008-09-01

    We address the inverse problem of cosmic large-scale structure reconstruction from a Bayesian perspective. For a linear data model, a number of known and novel reconstruction schemes, which differ in terms of the underlying signal prior, data likelihood and numerical inverse extra-regularization schemes, are derived and classified. The Bayesian methodology presented in this paper tries to unify and extend the following methods: Wiener filtering, Tikhonov regularization, ridge regression, maximum entropy and inverse regularization techniques. The inverse techniques considered here are the asymptotic regularization, the Jacobi, Steepest Descent, Newton-Raphson, Landweber-Fridman and both linear and non-linear Krylov methods based on Fletcher-Reeves, Polak-Ribière and Hestenes-Stiefel conjugate gradients. The structures of the up-to-date highest performing algorithms are presented, based on an operator scheme, which permits one to exploit the power of fast Fourier transforms. Using such an implementation of the generalized Wiener filter in the novel ARGO software package, the different numerical schemes are benchmarked with one-, two- and three-dimensional problems including structured white and Poissonian noise, data windowing and blurring effects. A novel numerical Krylov scheme is shown to be superior in terms of performance and fidelity. These fast inverse methods ultimately will enable the application of sampling techniques to explore complex joint posterior distributions. We outline how the space of the dark matter density field, the peculiar velocity field and the power spectrum can jointly be investigated by a Gibbs-sampling process. Such a method can be applied for the redshift distortions correction of the observed galaxies and for time-reversal reconstructions of the initial density field.
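
    The operator-based Wiener filtering described above can be illustrated with a matrix-free conjugate-gradient solve of (S^-1 + R^T N^-1 R) s = R^T N^-1 d. The toy one-dimensional sizes, diagonal covariances and trivial response operator below are assumptions for the sketch and bear no relation to the ARGO implementation.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 256                   # illustrative number of signal pixels
signal_var = 4.0          # diagonal prior covariance S (assumed)
noise_var = 1.0           # diagonal noise covariance N (assumed)
R = np.eye(n)             # trivial response/window operator (assumed)

def apply_A(x):
    # A = S^-1 + R^T N^-1 R, applied without ever forming the matrix.
    return x / signal_var + R.T @ (R @ x) / noise_var

A = LinearOperator((n, n), matvec=apply_A)

d = np.random.default_rng(0).normal(size=n)   # mock data vector
b = R.T @ d / noise_var                       # right-hand side R^T N^-1 d
s_wf, info = cg(A, b)                         # conjugate-gradient Wiener filter
```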

  2. Characterizing the nonlinear growth of large-scale structure in the Universe

    PubMed

    Coles; Chiang

    2000-07-27

    The local Universe displays a rich hierarchical pattern of galaxy clusters and superclusters. The early Universe, however, was almost smooth, with only slight 'ripples' as seen in the cosmic microwave background radiation. Models of the evolution of cosmic structure link these observations through the effect of gravity, because the small initially overdense fluctuations are predicted to attract additional mass as the Universe expands. During the early stages of this expansion, the ripples evolve independently, like linear waves on the surface of deep water. As the structures grow in mass, they interact with each other in nonlinear ways, more like waves breaking in shallow water. We have recently shown how cosmic structure can be characterized by phase correlations associated with these nonlinear interactions, but it was not clear how to use that information to obtain quantitative insights into the growth of structures. Here we report a method of revealing phase information, and show quantitatively how this relates to the formation of filaments, sheets and clusters of galaxies by nonlinear collapse. We develop a statistical method based on information entropy to separate linear from nonlinear effects, and thereby are able to disentangle those aspects of galaxy clustering that arise from initial conditions (the ripples) from the subsequent dynamical evolution. PMID:10935627

  3. Volterra representation enables modeling of complex synaptic nonlinear dynamics in large-scale simulations

    PubMed Central

    Hu, Eric Y.; Bouteiller, Jean-Marie C.; Song, Dong; Baudry, Michel; Berger, Theodore W.

    2015-01-01

    Chemical synapses are comprised of a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model capable of capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compared its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating complex nonlinear dynamics that were represented in the original mechanistic model and provide a method to replicate complex and diverse synaptic transmission within neuron network simulations. PMID:26441622
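
    The input-output idea can be made concrete with a discrete second-order Volterra series evaluated on a spike train. The kernels below are invented placeholders, not the ones identified from the mechanistic glutamatergic synapse model.

```python
import numpy as np

def volterra2(x, h0, h1, h2):
    """Evaluate a discrete second-order Volterra series:
    y[n] = h0 + sum_k h1[k] x[n-k] + sum_{k1,k2} h2[k1,k2] x[n-k1] x[n-k2]."""
    M = len(h1)
    y = np.full(len(x), h0, dtype=float)
    xpad = np.concatenate([np.zeros(M - 1), x])
    for n in range(len(x)):
        window = xpad[n:n + M][::-1]          # x[n], x[n-1], ..., x[n-M+1]
        y[n] += h1 @ window + window @ h2 @ window
    return y

# Toy kernels: decaying first-order memory, weak paired-pulse second order.
M = 20
h1 = np.exp(-np.arange(M) / 5.0)
h2 = 0.05 * np.outer(h1, h1)
spikes = np.zeros(200)
spikes[[20, 25, 60]] = 1.0                    # presynaptic spike train
response = volterra2(spikes, h0=0.0, h1=h1, h2=h2)
```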

  4. Library for Nonlinear Optimization

    Energy Science and Technology Software Center (ESTSC)

    2001-10-09

    OPT++ is a C++ object-oriented library for nonlinear optimization. This incorporates an improved implementation of an existing capability and two new algorithmic capabilities based on existing journal articles and freely available software.

  5. Adaptive fuzzy decentralised fault-tolerant control for nonlinear large-scale systems with actuator failures and unmodelled dynamics

    NASA Astrophysics Data System (ADS)

    Xu, Yinyin; Tong, Shaocheng; Li, Yongming

    2015-09-01

    This paper discusses the adaptive fuzzy decentralised fault-tolerant control (FTC) problem for a class of nonlinear large-scale systems in strict-feedback form. The systems under study contain unknown nonlinearities, unmodelled dynamics and actuator faults, and their state variables are not directly measured. Fuzzy logic systems are used to identify the unknown functions, and a fuzzy adaptive observer is designed to estimate the unmeasured states. By using the backstepping design technique and the dynamic surface control approach, combined with the changing supply function technique, a fuzzy adaptive FTC scheme is developed. The main features of the proposed control approach are that it guarantees the closed-loop system to be input-to-state practically stable and is robust to the unmodelled dynamics. Moreover, it overcomes the so-called 'explosion of complexity' problem existing in the previous literature. Finally, simulation studies are provided to illustrate the effectiveness of the proposed approach.

  6. Characteristic-based non-linear simulation of large-scale standing-wave thermoacoustic engine.

    PubMed

    Abd El-Rahman, Ahmed I; Abdel-Rahman, Ehab

    2014-08-01

    A few linear theories [Swift, J. Acoust. Soc. Am. 84(4), 1145-1180 (1988); Swift, J. Acoust. Soc. Am. 92(3), 1551-1563 (1992); Olson and Swift, J. Acoust. Soc. Am. 95(3), 1405-1412 (1994)] and numerical models, based on low-Mach number analysis [Worlikar and Knio, J. Comput. Phys. 127(2), 424-451 (1996); Worlikar et al., J. Comput. Phys. 144(2), 199-324 (1996); Hireche et al., Canadian Acoust. 36(3), 164-165 (2008)], describe the flow dynamics of standing-wave thermoacoustic engines, but almost no simulation results are available that enable the prediction of the behavior of practical engines experiencing significant temperature gradient between the stack ends and thus producing large-amplitude oscillations. Here, a one-dimensional non-linear numerical simulation based on the method of characteristics to solve the unsteady compressible Euler equations is reported. Formulation of the governing equations, implementation of the numerical method, and application of the appropriate boundary conditions are presented. The calculation uses explicit time integration along with deduced relationships, expressing the friction coefficient and the Stanton number for oscillating flow inside circular ducts. Helium, a mixture of Helium and Argon, and Neon are used for system operation at mean pressures of 13.8, 9.9, and 7.0 bars, respectively. The self-induced pressure oscillations are accurately captured in the time domain, and then transferred into the frequency domain, distinguishing the pressure signals into fundamental and harmonic responses. The results obtained are compared with reported experimental works [Swift, J. Acoust. Soc. Am. 92(3), 1551-1563 (1992); Olson and Swift, J. Acoust. Soc. Am. 95(3), 1405-1412 (1994)] and the linear theory, showing better agreement with the measured values, particularly in the non-linear regime of the dynamic pressure response. PMID:25096100

  7. Reduction of Large-scale Turbulence and Optimization of Flows in the Madison Dynamo Experiment

    NASA Astrophysics Data System (ADS)

    Taylor, N. Z.

    2011-10-01

    large-scale flow.

  8. Parallel Optimization of Polynomials for Large-scale Problems in Stability and Control

    NASA Astrophysics Data System (ADS)

    Kamyar, Reza

    In this thesis, we focus on some of the NP-hard problems in control theory. Thanks to the converse Lyapunov theory, these problems can often be modeled as optimization over polynomials. To avoid the problem of intractability, we establish a trade off between accuracy and complexity. In particular, we develop a sequence of tractable optimization problems --- in the form of Linear Programs (LPs) and/or Semi-Definite Programs (SDPs) --- whose solutions converge to the exact solution of the NP-hard problem. However, the computational and memory complexity of these LPs and SDPs grow exponentially with the progress of the sequence - meaning that improving the accuracy of the solutions requires solving SDPs with tens of thousands of decision variables and constraints. Setting up and solving such problems is a significant challenge. The existing optimization algorithms and software are only designed to use desktop computers or small cluster computers --- machines which do not have sufficient memory for solving such large SDPs. Moreover, the speed-up of these algorithms does not scale beyond dozens of processors. This in fact is the reason we seek parallel algorithms for setting-up and solving large SDPs on large cluster- and/or super-computers. We propose parallel algorithms for stability analysis of two classes of systems: 1) Linear systems with a large number of uncertain parameters; 2) Nonlinear systems defined by polynomial vector fields. First, we develop a distributed parallel algorithm which applies Polya's and/or Handelman's theorems to some variants of parameter-dependent Lyapunov inequalities with parameters defined over the standard simplex. The result is a sequence of SDPs which possess a block-diagonal structure. We then develop a parallel SDP solver which exploits this structure in order to map the computation, memory and communication to a distributed parallel environment. Numerical tests on a supercomputer demonstrate the ability of the algorithm to

  9. Dependency-tracking object-oriented multidisciplinary design optimization (MDO) formulation on a large-scale system

    NASA Astrophysics Data System (ADS)

    Ahlqvist, Maria Alexandra

    2001-12-01

    Advances in computer technology and analysis software are making optimization of engineering systems more attractive and affordable than ever before. Optimization is becoming a necessary tool in order for companies to stay competitive. While the concept of optimization has been known almost as long as mankind, specific procedures on how to optimize engineering systems are younger. Currently, efforts are being made to reduce the computational time and simplify the organizational complexity involved with solving multidisciplinary systems. The work presented in this dissertation deals with how an object-oriented, dependency-tracking, demand-driven language can be used in reducing the computational time in performing multidisciplinary design optimizations. The work also discusses how the object-oriented language was used in integrating optimization functionality with a missile design system. The object-oriented dependency-tracking demand-driven language is applied to a large-scale multidisciplinary missile system involving disciplines such as a geometry engine, weight analysis, propulsion, aerodynamics, trajectory analysis, and cost analysis. Also discussed is the need for using approximations in optimizing a large-scale system. Designed experiments and response surface techniques were employed in creating approximation models for the problem at hand. Using these approximations to evaluate the responses was found to be useful at points in the design space where one or more responses could not otherwise be evaluated. Different optimization schemes were studied including response surface analysis of different resolutions in conjunction with higher fidelity optimization and higher fidelity optimization without approximation models. The contributions of this work are the application of MDO capabilities to a large-scale missile design system modeled in an object-oriented dependency-tracking environment, the use of response surface approximations to fit areas in the design

  10. Modulational stability of weakly nonlinear wave-trains in media with small- and large-scale dispersions

    NASA Astrophysics Data System (ADS)

    Nikitenkova, S.; Singh, N.; Stepanyants, Y.

    2015-12-01

    In this paper, we revisit the problem of modulation stability of quasi-monochromatic wave-trains propagating in media with double dispersion occurring at both small and large wavenumbers. We start with the shallow-water equations derived by Shrira [Izv., Acad. Sci., USSR, Atmos. Ocean. Phys. (Engl. Transl.) 17, 55-59 (1981)], which describe both surface and internal long waves in a rotating fluid. The small-scale (Boussinesq-type) dispersion is assumed to be weak, whereas the large-scale (Coriolis-type) dispersion is considered without any restriction. For waves propagating in one direction only, the considered set of equations reduces to the Gardner-Ostrovsky equation, which is applicable only within a finite range of wavenumbers. We derive the nonlinear Schrödinger equation (NLSE) which describes the evolution of narrow-band wave-trains and show that, within the more general bi-directional equation, the wave-trains, similar to those derived from the Ostrovsky equation, are also modulationally stable at relatively small wavenumbers k < kc and unstable at k > kc, where kc is some critical wavenumber. The NLSE derived here has a wider range of applicability: it is valid for arbitrarily small wavenumbers. We present the analysis of coefficients of the NLSE for different signs of coefficients of the governing equation and compare them with those derived from the Ostrovsky equation. The analysis shows that for weakly dispersive waves in the range of parameters where the Gardner-Ostrovsky equation is valid, the cubic nonlinearity does not contribute to the nonlinear coefficient of NLSE; therefore, the NLSE can be correctly derived from the Ostrovsky equation.
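
    For orientation, the generic cubic NLSE and the standard Lighthill (Benjamin-Feir) criterion take the schematic form below; the coefficients alpha and beta stand in for the dispersive and nonlinear coefficients and are not the specific expressions derived in the paper.

```latex
i\,\frac{\partial A}{\partial t}
  + \alpha\,\frac{\partial^{2} A}{\partial x^{2}}
  + \beta\,|A|^{2} A = 0,
\qquad
\alpha\beta > 0 \;\Rightarrow\; \text{modulationally unstable},
\qquad
\alpha\beta < 0 \;\Rightarrow\; \text{stable}.
```

    The reported transition at the critical wavenumber kc corresponds to a change of sign of the product of these two coefficients.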

  11. Modulational stability of weakly nonlinear wave-trains in media with small- and large-scale dispersions.

    PubMed

    Nikitenkova, S; Singh, N; Stepanyants, Y

    2015-12-01

    In this paper, we revisit the problem of modulation stability of quasi-monochromatic wave-trains propagating in media with double dispersion occurring at both small and large wavenumbers. We start with the shallow-water equations derived by Shrira [Izv., Acad. Sci., USSR, Atmos. Ocean. Phys. (Engl. Transl.) 17, 55-59 (1981)], which describe both surface and internal long waves in a rotating fluid. The small-scale (Boussinesq-type) dispersion is assumed to be weak, whereas the large-scale (Coriolis-type) dispersion is considered without any restriction. For waves propagating in one direction only, the considered set of equations reduces to the Gardner-Ostrovsky equation, which is applicable only within a finite range of wavenumbers. We derive the nonlinear Schrödinger equation (NLSE) which describes the evolution of narrow-band wave-trains and show that, within the more general bi-directional equation, the wave-trains, similar to those derived from the Ostrovsky equation, are also modulationally stable at relatively small wavenumbers k < kc and unstable at k > kc, where kc is some critical wavenumber. The NLSE derived here has a wider range of applicability: it is valid for arbitrarily small wavenumbers. We present the analysis of coefficients of the NLSE for different signs of coefficients of the governing equation and compare them with those derived from the Ostrovsky equation. The analysis shows that for weakly dispersive waves in the range of parameters where the Gardner-Ostrovsky equation is valid, the cubic nonlinearity does not contribute to the nonlinear coefficient of NLSE; therefore, the NLSE can be correctly derived from the Ostrovsky equation. PMID:26723152

  12. Optimization of large-scale mouse brain connectome via joint evaluation of DTI and neuron tracing data.

    PubMed

    Chen, Hanbo; Liu, Tao; Zhao, Yu; Zhang, Tuo; Li, Yujie; Li, Meng; Zhang, Hongmiao; Kuang, Hui; Guo, Lei; Tsien, Joe Z; Liu, Tianming

    2015-07-15

    Tractography based on diffusion tensor imaging (DTI) data has been used as a tool by a large number of recent studies to investigate structural connectome. Despite its great success in offering unique 3D neuroanatomy information, DTI is an indirect observation with limited resolution and accuracy and its reliability is still unclear. Thus, it is essential to answer this fundamental question: how reliable is DTI tractography in constructing large-scale connectome? To answer this question, we employed neuron tracing data of 1772 experiments on the mouse brain released by the Allen Mouse Brain Connectivity Atlas (AMCA) as the ground-truth to assess the performance of DTI tractography in inferring white matter fiber pathways and inter-regional connections. For the first time in the neuroimaging field, the performance of whole brain DTI tractography in constructing a large-scale connectome has been evaluated by comparison with tracing data. Our results suggested that only with the optimized tractography parameters and the appropriate scale of brain parcellation scheme, can DTI produce relatively reliable fiber pathways and a large-scale connectome. Meanwhile, a considerable amount of errors were also identified in optimized DTI tractography results, which we believe could be potentially alleviated by efforts in developing better DTI tractography approaches. In this scenario, our framework could serve as a reliable and quantitative test bed to identify errors in tractography results which will facilitate the development of such novel tractography algorithms and the selection of optimal parameters. PMID:25953631

  13. Fault diagnosis of nonlinear and large-scale processes using novel modified kernel Fisher discriminant analysis approach

    NASA Astrophysics Data System (ADS)

    Shi, Huaitao; Liu, Jianchang; Wu, Yuhou; Zhang, Ke; Zhang, Lixiu; Xue, Peng

    2016-04-01

    Timely and accurate fault diagnosis is significant for improving the dependability of industrial processes. In this study, fault diagnosis of nonlinear and large-scale processes by variable-weighted kernel Fisher discriminant analysis (KFDA) based on improved biogeography-based optimisation (IBBO) is proposed, referred to as IBBO-KFDA, where IBBO is used to determine the parameters of variable-weighted KFDA, and variable-weighted KFDA is used to solve the multi-classification overlapping problem. The main contributions of this work are four-fold. First, a nonlinear fault diagnosis approach with variable-weighted KFDA is developed for maximising separation between the overlapping fault samples. Second, kernel parameters and feature selection of variable-weighted KFDA are simultaneously optimised using IBBO. Third, a single fitness function that combines the erroneous diagnosis rate with the feature cost is created and serves as the target function in the optimisation problem, and a novel mixed kernel function is introduced to improve the classification capability in the feature space and the diagnosis accuracy of IBBO-KFDA. Finally, an IBBO approach is developed to obtain better solution quality and faster convergence. The proposed IBBO-KFDA method is first used on the Tennessee Eastman process benchmark data sets to validate its feasibility and efficiency, and is then applied to diagnose faults of an automatic gauge control system. Simulation results demonstrate that IBBO-KFDA can obtain better kernel parameters and feature vectors with a lower computing cost, higher diagnosis accuracy and better real-time capacity.
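
    One way to picture a mixed kernel of the kind mentioned above is a convex combination of an RBF and a polynomial kernel; the functional form, weight and parameters below are illustrative assumptions rather than the kernel actually tuned by IBBO.

```python
import numpy as np

def mixed_kernel(X, Y, sigma=1.0, degree=2, coef0=1.0, weight=0.5):
    """Convex combination of an RBF kernel and a polynomial kernel.

    X : (n, d) and Y : (m, d) feature matrices; returns the (n, m) Gram matrix.
    """
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
    rbf = np.exp(-sq / (2.0 * sigma ** 2))
    poly = (X @ Y.T + coef0) ** degree
    return weight * rbf + (1.0 - weight) * poly

# Gram matrix between two small illustrative feature sets.
rng = np.random.default_rng(0)
K = mixed_kernel(rng.normal(size=(5, 3)), rng.normal(size=(7, 3)))
```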

  14. Strategic optimization of large-scale vertical closed-loop shallow geothermal systems

    NASA Astrophysics Data System (ADS)

    Hecht-Méndez, J.; de Paly, M.; Beck, M.; Blum, P.; Bayer, P.

    2012-04-01

    Vertical closed-loop geothermal systems or ground source heat pump (GSHP) systems with multiple vertical borehole heat exchangers (BHEs) are attractive technologies that provide heating and cooling to large facilities such as hotels, schools, big office buildings or district heating systems. Currently, the worldwide number of installed systems shows a recurrent increase. By running arrays of multiple BHEs, the energy demand of a given facility is fulfilled by exchanging heat with the ground. Due to practical and technical reasons, square arrays of the BHEs are commonly used and the total energy extraction from the subsurface is accomplished by an equal operation of each BHE. Moreover, standard designing practices disregard the presence of groundwater flow. We present a simulation-optimization approach that is able to regulate the individual operation of multiple BHEs, depending on the given hydro-geothermal conditions. The developed approach optimizes the overall performance of the geothermal system while mitigating the environmental impact. As an example, a synthetic case with a geothermal system using 25 BHEs for supplying a seasonal heating energy demand is defined. The optimization approach is evaluated for finding optimal energy extractions for 15 scenarios with different specific constant groundwater flow velocities. Ground temperature development is simulated using the optimal energy extractions and contrasted against standard application. It is demonstrated that optimized systems always level the ground temperature distribution and generate smaller subsurface temperature changes than non-optimized ones. Mean underground temperature changes within the studied BHE field are between 13% and 24% smaller when the optimized system is used. By applying the optimized energy extraction patterns, the temperature of the heat carrier fluid in the BHE, which controls the overall performance of the system, can also be raised by more than 1 °C.

  15. Large-scale regionalization of water table depth in peatlands optimized for greenhouse gas emission upscaling

    NASA Astrophysics Data System (ADS)

    Bechtold, M.; Tiemeyer, B.; Laggner, A.; Leppelt, T.; Frahm, E.; Belting, S.

    2014-04-01

    Fluxes of the three main greenhouse gases (GHG) CO2, CH4 and N2O from peat and other organic soils are strongly controlled by water table depth. Information about the spatial distribution of water level is thus a crucial input parameter when upscaling GHG emissions to large scales. Here, we investigate the potential of statistical modeling for the regionalization of water levels in organic soils when data covers only a small fraction of the peatlands of the final map. Our study area is Germany. Phreatic water level data from 53 peatlands in Germany were compiled in a new dataset comprising 1094 dip wells and 7155 years of data. For each dip well, numerous possible predictor variables were determined using nationally available data sources, which included information about land cover, ditch network, protected areas, topography, peatland characteristics and climatic boundary conditions. We applied boosted regression trees to identify dependencies between predictor variables and dip well specific long-term annual mean water level (WL) as well as a transformed form of it (WLt). The latter was obtained by assuming a hypothetical GHG transfer function and is linearly related to GHG emissions. Our results demonstrate that model calibration on WLt is superior. It increases the explained variance of the water level in the sensitive range for GHG emissions and avoids model bias in subsequent GHG upscaling. The final model explained 45% of WLt variance and was built on nine predictor variables that are based on information about land cover, peatland characteristics, drainage network, topography and climatic boundary conditions. Their individual effects on WLt and the observed parameter interactions provide insights into natural and anthropogenic boundary conditions that control water levels in organic soils. Our study also demonstrates that a large fraction of the observed WLt variance cannot be explained by nationally available predictor variables and that predictors with
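
    A regionalization model of this kind can be sketched with gradient-boosted regression trees; the predictor table and target below are synthetic stand-ins, and scikit-learn's GradientBoostingRegressor is used here as a generic substitute for the boosted-regression-tree implementation of the study.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic stand-in: one row per dip well, nine predictor variables,
# and a transformed water level WLt that depends on a few of them.
rng = np.random.default_rng(0)
X = rng.normal(size=(1094, 9))
wlt = X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.8, size=len(X))

model = GradientBoostingRegressor(
    n_estimators=500, learning_rate=0.02, max_depth=3, subsample=0.8
)
model.fit(X, wlt)
print("explained variance (training R^2):", model.score(X, wlt))
```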

  16. Optimized Large-scale CMB Likelihood and Quadratic Maximum Likelihood Power Spectrum Estimation

    NASA Astrophysics Data System (ADS)

    Gjerløw, E.; Colombo, L. P. L.; Eriksen, H. K.; Górski, K. M.; Gruppuso, A.; Jewell, J. B.; Plaszczynski, S.; Wehus, I. K.

    2015-11-01

    We revisit the problem of exact cosmic microwave background (CMB) likelihood and power spectrum estimation with the goal of minimizing computational costs through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al., and here we develop it into a fully functioning computational framework for large-scale polarization analysis, adopting WMAP as a working example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors, and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loeve and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked WMAP sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum error increase of any single multipole of 3.8% at ℓ ≤ 32 and a maximum shift in the mean values of a joint distribution of an amplitude-tilt model of 0.006σ. This compression reduces the computational cost of a single likelihood evaluation by a factor of 5, from 38 to 7.5 CPU seconds, and it also results in a more robust likelihood by implicitly regularizing nearly degenerate modes. Finally, we use the same compression framework to formulate a numerically stable and computationally efficient variation of the Quadratic Maximum Likelihood implementation, which requires less than 3 GB of memory and 2 CPU minutes per iteration for ℓ ≤ 32, rendering low-ℓ QML CMB power spectrum analysis fully tractable on a standard laptop.
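
    The signal-to-noise eigenvector compression can be sketched as a generalized eigenproblem between a signal covariance S and a noise covariance N, keeping only the highest signal-to-noise modes. The toy one-dimensional covariances below are assumptions, not the WMAP matrices.

```python
import numpy as np
from scipy.linalg import eigh

def sn_compress(d, S, N, n_modes):
    """Project a data vector onto the top signal-to-noise eigenmodes,
    i.e. the eigenvectors of S v = lambda N v with the largest lambda."""
    lam, V = eigh(S, N)              # generalized eigenproblem, ascending lambda
    basis = V[:, -n_modes:]          # highest signal-to-noise modes
    return basis.T @ d, basis

# Toy covariances: smooth "signal" plus white noise on 200 pixels.
n = 200
x = np.arange(n)
S = np.exp(-np.abs(x[:, None] - x[None, :]) / 10.0)
N = 0.5 * np.eye(n)
d = np.random.default_rng(0).multivariate_normal(np.zeros(n), S + N)
d_comp, basis = sn_compress(d, S, N, n_modes=40)
```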

  17. Large-Scale Multi-Objective Optimization for the Management of Seawater Intrusion, Santa Barbara, CA

    NASA Astrophysics Data System (ADS)

    Stanko, Z. P.; Nishikawa, T.; Paulinski, S. R.

    2015-12-01

    The City of Santa Barbara, located in coastal southern California, is concerned that excessive groundwater pumping will lead to chloride (Cl) contamination of its groundwater system from seawater intrusion (SWI). In addition, the city wishes to estimate the effect of continued pumping on the groundwater basin under a variety of initial and climatic conditions. A SEAWAT-based groundwater-flow and solute-transport model of the Santa Barbara groundwater basin was optimized to produce optimal pumping schedules assuming 5 different scenarios. Borg, a multi-objective genetic algorithm, was coupled with the SEAWAT model to identify optimal management strategies. The optimization problems were formulated as multi-objective so that the tradeoffs between maximizing pumping, minimizing SWI, and minimizing drawdowns can be examined by the city. Decisions can then be made on a pumping schedule in light of current preferences and climatic conditions. Borg was used to produce Pareto optimal results for all 5 scenarios, which vary in their initial conditions (high water levels, low water levels, or current basin state), simulated climate (normal or drought conditions), and problem formulation (objective equations and decision-variable aggregation). Results show mostly well-defined Pareto surfaces with a few singularities. Furthermore, the results identify the precise pumping schedule per well that was suitable given the desired restriction on drawdown and Cl concentrations. A system of decision-making is then possible based on various observations of the basin's hydrologic states and climatic trends without having to run any further optimizations. In addition, an assessment of selected Pareto-optimal solutions was analyzed with sensitivity information using the simulation model alone. A wide range of possible groundwater pumping scenarios is available and depends heavily on the future climate scenarios and the Pareto-optimal solution selected while managing the pumping wells.

  18. Large-scale regionalization of water table depth in peatlands optimized for greenhouse gas emission upscaling

    NASA Astrophysics Data System (ADS)

    Bechtold, M.; Tiemeyer, B.; Laggner, A.; Leppelt, T.; Frahm, E.; Belting, S.

    2014-09-01

    Fluxes of the three main greenhouse gases (GHG) CO2, CH4 and N2O from peat and other soils with high organic carbon contents are strongly controlled by water table depth. Information about the spatial distribution of water level is thus a crucial input parameter when upscaling GHG emissions to large scales. Here, we investigate the potential of statistical modeling for the regionalization of water levels in organic soils when data covers only a small fraction of the peatlands of the final map. Our study area is Germany. Phreatic water level data from 53 peatlands in Germany were compiled in a new data set comprising 1094 dip wells and 7155 years of data. For each dip well, numerous possible predictor variables were determined using nationally available data sources, which included information about land cover, ditch network, protected areas, topography, peatland characteristics and climatic boundary conditions. We applied boosted regression trees to identify dependencies between predictor variables and dip-well-specific long-term annual mean water level (WL) as well as a transformed form (WLt). The latter was obtained by assuming a hypothetical GHG transfer function and is linearly related to GHG emissions. Our results demonstrate that model calibration on WLt is superior. It increases the explained variance of the water level in the sensitive range for GHG emissions and avoids model bias in subsequent GHG upscaling. The final model explained 45% of WLt variance and was built on nine predictor variables that are based on information about land cover, peatland characteristics, drainage network, topography and climatic boundary conditions. Their individual effects on WLt and the observed parameter interactions provide insight into natural and anthropogenic boundary conditions that control water levels in organic soils. Our study also demonstrates that a large fraction of the observed WLt variance cannot be explained by nationally available predictor variables and

  19. Large Scale Multi-area Static/Dynamic Economic Dispatch using Nature Inspired Optimization

    NASA Astrophysics Data System (ADS)

    Pandit, Manjaree; Jain, Kalpana; Dubey, Hari Mohan; Singh, Rameshwar

    2016-07-01

    Economic dispatch (ED) ensures that the generation allocation to the power units is carried out such that the total fuel cost is minimized and all the operating equality/inequality constraints are satisfied. Classical ED does not take transmission constraints into consideration, but in the present restructured power systems the tie-line limits play a very important role in deciding operational policies. ED is a dynamic problem which is performed on-line in the central load dispatch centre with changing load scenarios. The dynamic multi-area ED (MAED) problem is more complex due to the additional tie-line, ramp-rate and area-wise power balance constraints. Nature inspired (NI) heuristic optimization methods are gaining popularity over the traditional methods for complex problems. This work presents the modified particle swarm optimization (PSO) based techniques where parameter automation is effectively used for improving the search efficiency by avoiding stagnation to a sub-optimal result. This work validates the performance of the PSO variants with traditional solver GAMS for single as well as multi-area economic dispatch (MAED) on three test cases of a large 140-unit standard test system having complex constraints.
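
    The flavour of a PSO-based dispatch can be conveyed with a toy three-unit static problem: quadratic fuel costs, generator limits and a power-balance penalty, with a linearly decreasing inertia weight as a simple form of parameter automation. All numbers are invented and unrelated to the 140-unit test system.

```python
import numpy as np

a = np.array([0.004, 0.006, 0.009])      # cost = a*P^2 + b*P + c  (assumed data)
b = np.array([5.3, 5.5, 5.8])
c = np.array([500.0, 400.0, 200.0])
pmin = np.array([100.0, 50.0, 50.0])
pmax = np.array([450.0, 350.0, 225.0])
demand = 800.0

def cost(P):
    fuel = np.sum(a * P**2 + b * P + c, axis=-1)
    balance_penalty = 1e4 * np.abs(P.sum(axis=-1) - demand)
    return fuel + balance_penalty

rng = np.random.default_rng(0)
n_particles, iters = 40, 300
P = rng.uniform(pmin, pmax, size=(n_particles, 3))
V = np.zeros_like(P)
pbest, pbest_cost = P.copy(), cost(P)
gbest = pbest[pbest_cost.argmin()]

for t in range(iters):
    w = 0.9 - 0.5 * t / iters                       # decreasing inertia weight
    r1, r2 = rng.random(P.shape), rng.random(P.shape)
    V = w * V + 2.0 * r1 * (pbest - P) + 2.0 * r2 * (gbest - P)
    P = np.clip(P + V, pmin, pmax)
    f = cost(P)
    better = f < pbest_cost
    pbest[better], pbest_cost[better] = P[better], f[better]
    gbest = pbest[pbest_cost.argmin()]

print("dispatch:", gbest, "cost:", cost(gbest))
```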

  20. Gradient-Based Aerodynamic Shape Optimization Using ADI Method for Large-Scale Problems

    NASA Technical Reports Server (NTRS)

    Pandya, Mohagna J.; Baysal, Oktay

    1997-01-01

    A gradient-based shape optimization methodology, intended for practical three-dimensional aerodynamic applications, has been developed. It is based on quasi-analytical sensitivities. The flow analysis is rendered by a fully implicit, finite volume formulation of the Euler equations. The aerodynamic sensitivity equation is solved using the alternating-direction-implicit (ADI) algorithm for memory efficiency. A flexible wing geometry model, based on surface parameterization and planform schedules, is utilized. The present methodology and its components have been tested via several comparisons. Initially, the flow analysis for a wing is compared with those obtained using an unfactored, preconditioned conjugate gradient approach (PCG), and an extensively validated CFD code. Then, the sensitivities computed with the present method have been compared with those obtained using the finite-difference and the PCG approaches. Effects of grid refinement and convergence tolerance on the analysis and shape optimization have been explored. Finally, the new procedure has been demonstrated in the design of a cranked arrow wing at Mach 2.4. Despite the expected increase in computational time, the results indicate that shape optimization, which requires large numbers of grid points, can be resolved with a gradient-based approach.

  1. Large-scale optimization-based non-negative computational framework for diffusion equations: Parallel implementation and performance studies

    DOE PAGESBeta

    Chang, Justin; Karra, Satish; Nakshatrala, Kalyana B.

    2016-07-26

    It is well-known that the standard Galerkin formulation, which is often the formulation of choice under the finite element method for solving self-adjoint diffusion equations, does not meet maximum principles and the non-negative constraint for anisotropic diffusion equations. Recently, optimization-based methodologies that satisfy maximum principles and the non-negative constraint for steady-state and transient diffusion-type equations have been proposed. To date, these methodologies have been tested only on small-scale academic problems. The purpose of this paper is to systematically study the performance of the non-negative methodology in the context of high performance computing (HPC). PETSc and TAO libraries are, respectively, used for the parallel environment and optimization solvers. For large-scale problems, it is important for computational scientists to understand the computational performance of current algorithms available in these scientific libraries. The numerical experiments are conducted on the state-of-the-art HPC systems, and a single-core performance model is used to better characterize the efficiency of the solvers. Furthermore, our studies indicate that the proposed non-negative computational framework for diffusion-type equations exhibits excellent strong scaling for real-world large-scale problems.
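
    A small serial analogue of the optimization-based non-negative methodology is to solve the discrete diffusion system as a bound-constrained quadratic program, minimizing (1/2) c^T K c - f^T c subject to c >= 0. The one-dimensional stiffness matrix and forcing below are toy assumptions; the framework described above does this at scale with PETSc and TAO.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.sparse import diags

n = 100
K = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n)).toarray()
f = np.zeros(n)
f[n // 3] = 1.0
f[2 * n // 3] = -0.5        # a sink term that can drive undershoots

def obj(c):
    return 0.5 * c @ K @ c - f @ c

def grad(c):
    return K @ c - f

res = minimize(obj, x0=np.zeros(n), jac=grad,
               bounds=[(0.0, None)] * n, method="L-BFGS-B")
c_nonneg = res.x            # satisfies the non-negative constraint by construction
```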

  2. Research on transformation and optimization of large scale 3D modeling for real time rendering

    NASA Astrophysics Data System (ADS)

    Yan, Hu; Yang, Yongchao; Zhao, Gang; He, Bin; Shen, Guosheng

    2011-12-01

    During real-time three-dimensional scene simulation, the popular modeling software and the real-time rendering platform are not compatible. The common solution is to create the three-dimensional scene model using modeling software and then transform it into the format supported by the rendering platform. This paper takes digital campus scene simulation as an example and analyzes and solves the problems of surface loss, texture distortion and loss, model flicker and so on during the transformation from 3Ds Max to MultiGen Creator. It also proposes an optimization strategy for the transformed model. The results show that this strategy resolves the problems encountered in transformation and speeds up the rendering of the model.

  3. Numerical solution of nonlinear algebraic equations in stiff ODE solving (1986--89)---Quasi-Newton updating for large scale nonlinear systems (1989--90)

    SciTech Connect

    Walker, H.F.

    1990-01-01

    During the 1986--1989 project period, two major areas of research developed into which most of the work fell: "matrix-free" methods for solving linear systems, by which we mean iterative methods that require only the action of the coefficient matrix on vectors and not the coefficient matrix itself, and Newton-like methods for underdetermined nonlinear systems. In the 1990 project period of the renewal grant, a third major area of research developed: inexact Newton and Newton iterative methods and their applications to large-scale nonlinear systems, especially those arising in discretized problems. An inexact Newton method is any method in which each step reduces the norm of the local linear model of the function of interest. A Newton iterative method is any implementation of Newton's method in which the linear systems that characterize Newton steps (the "Newton equations") are solved only approximately using an iterative linear solver. Newton iterative methods are properly considered special cases of inexact Newton methods. We describe the work in these areas and in other areas in this paper.
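
    A minimal matrix-free Newton iterative (inexact Newton) sketch is given below: each Newton system is solved only approximately with GMRES, the Jacobian-vector product is formed by finite differences, and the forcing term eta controls how inexact each inner solve is. The example system and the numerical parameters are illustrative choices.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def inexact_newton(F, x0, eta=1e-2, tol=1e-8, max_newton=50, eps=1e-7):
    """Newton iterative method: solve J(x) s = -F(x) approximately with GMRES,
    using a finite-difference Jacobian-vector product (matrix-free)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_newton):
        Fx = F(x)
        norm_F = np.linalg.norm(Fx)
        if norm_F < tol:
            break
        def jv(v):
            return (F(x + eps * v) - Fx) / eps    # J(x) @ v by forward differences
        J = LinearOperator((len(x), len(x)), matvec=jv)
        s, _ = gmres(J, -Fx, atol=eta * norm_F)   # inexact inner solve
        x = x + s
    return x

# Example: a small nonlinear system F(x) = 0.
F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
root = inexact_newton(F, x0=np.array([1.0, 1.0]))
```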

  4. Weighted modularity optimization for crisp and fuzzy community detection in large-scale networks

    NASA Astrophysics Data System (ADS)

    Cao, Jie; Bu, Zhan; Gao, Guangliang; Tao, Haicheng

    2016-11-01

    Community detection is a classic and very difficult task in the field of complex network analysis, principally for its applications in domains such as social or biological networks analysis. One of the most widely used technologies for community detection in networks is the maximization of the quality function known as modularity. However, existing work has proved that modularity maximization algorithms for community detection may fail to resolve communities in small size. Here we present a new community detection method, which is able to find crisp and fuzzy communities in undirected and unweighted networks by maximizing weighted modularity. The algorithm derives new edge weights using the cosine similarity in order to go around the resolution limit problem. Then a new local moving heuristic based on weighted modularity optimization is proposed to cluster the updated network. Finally, the set of potentially attractive clusters for each node is computed, to further uncover the crisply fuzzy partition of the network. We give demonstrative applications of the algorithm to a set of synthetic benchmark networks and six real-world networks and find that it outperforms the current state of the art proposals (even those aimed at finding overlapping communities) in terms of quality and scalability.
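
    The edge-reweighting step can be sketched as follows: every edge receives the cosine similarity of its endpoints' adjacency rows, so bridges between dense groups end up with much lower weight than within-group edges. The subsequent weighted local-moving modularity optimization is not reproduced here, and the two-triangle graph is a made-up example.

```python
import numpy as np

def cosine_edge_weights(A):
    """Re-weight each edge by the cosine similarity of its endpoints' rows
    of the adjacency matrix; non-edges keep weight zero."""
    A = np.asarray(A, dtype=float)
    norms = np.linalg.norm(A, axis=1)
    norms[norms == 0.0] = 1.0                    # guard against isolated nodes
    sim = (A @ A.T) / np.outer(norms, norms)     # cosine similarity matrix
    return np.where(A > 0, sim, 0.0)

# Two triangles joined by a single bridge edge (2, 3): the bridge receives
# a much lower weight than the within-triangle edges.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
W = cosine_edge_weights(A)
```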

  5. Large scale optimization of beam weights under dose-volume restrictions.

    PubMed

    Langer, M; Brown, R; Urie, M; Leong, J; Stracher, M; Shapiro, J

    1990-04-01

    The problem of choosing weights for beams in a multifield plan which maximizes tumor dose under conditions that recognize the volume dependence of organ tolerance to radiation is considered, and its solution described. Structures are modelled as collections of discrete points, and the weighting problem described as a combinatorial linear program (LP). The combinatorial LP is solved as a mixed 0/1 integer program with appropriate restrictions on normal tissue dose. The method is illustrated through the assignment of weights to a set of 10 beams incident on a pelvic target. Dose-volume restrictions are placed on surrounding bowel, bladder, and rectum, and a limit placed on tumor dose inhomogeneity. Different tolerance restrictions are examined, so that the sensitivity of the target dose to changes in the normal tissue constraints may be explored. It is shown that the distributions obtained satisfy the posed constraints. The technique permits formal solution of the optimization problem, in a time short enough to meet the needs of treatment planners. PMID:2323977
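
    A dose-volume-constrained weighting problem of the kind described above can be written generically as a mixed 0/1 program; the notation below (dose matrix D, organ tolerance T_o, allowed fraction f_o, big-M constant M) is schematic and not necessarily the paper's exact formulation.

```latex
\begin{aligned}
\max_{t,\; w \ge 0,\; z \in \{0,1\}^{n}} \quad & t \\
\text{s.t.} \quad
  & \textstyle\sum_{b} D_{ib}\, w_b \ge t,
    && i \in \mathcal{T} \ \text{(tumor points)},\\
  & \textstyle\sum_{b} D_{ib}\, w_b \le T_o + M z_i,
    && i \in \mathcal{O}_o \ \text{(points of organ } o\text{)},\\
  & \textstyle\sum_{i \in \mathcal{O}_o} z_i \le \lfloor f_o\, |\mathcal{O}_o| \rfloor,
    && \text{for each organ } o.
\end{aligned}
```

    Here z_i = 1 marks a normal-tissue point allowed to exceed its tolerance, at most the fraction f_o of the points in each organ may do so, and t is the minimum tumor dose being maximized.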

  6. Optimization of culture media for large-scale lutein production by heterotrophic Chlorella vulgaris.

    PubMed

    Jeon, Jin Young; Kwon, Ji-Sue; Kang, Soon Tae; Kim, Bo-Ra; Jung, Yuchul; Han, Jae Gap; Park, Joon Hyun; Hwang, Jae Kwan

    2014-01-01

    Lutein is a carotenoid with a purported role in protecting eyes from oxidative stress, particularly the high-energy photons of blue light. Statistical optimization was applied to the growth medium to support higher production of lutein by heterotrophically cultivated Chlorella vulgaris. The effect of the medium composition on lutein production by C. vulgaris was examined using fractional factorial design (FFD) and central composite design (CCD). The results indicated that the presence of magnesium sulfate, EDTA-2Na, and trace metal solution significantly affected lutein production. The optimum concentrations for lutein production were found to be 0.34 g/L, 0.06 g/L, and 0.4 mL/L for MgSO4·7H2O, EDTA-2Na, and trace metal solution, respectively. These values were validated using a 5-L jar fermenter. Lutein concentration was increased by almost 80% (139.64 ± 12.88 mg/L to 252.75 ± 12.92 mg/L) after 4 days. Moreover, the lutein concentration was not reduced as the cultivation was scaled up to 25,000 L (260.55 ± 3.23 mg/L) and 240,000 L (263.13 ± 2.72 mg/L). These observations suggest C. vulgaris as a potential lutein source. PMID:24550199

  7. Hydro-economic Modeling: Reducing the Gap between Large Scale Simulation and Optimization Models

    NASA Astrophysics Data System (ADS)

    Forni, L.; Medellin-Azuara, J.; Purkey, D.; Joyce, B. A.; Sieber, J.; Howitt, R.

    2012-12-01

    The integration of hydrological and socio-economic components into hydro-economic models has become essential for water resources policy and planning analysis. In this study we integrate the economic value of water in irrigated agricultural production using SWAP (a StateWide Agricultural Production Model for California), and WEAP (Water Evaluation and Planning System), a climate-driven hydrological model. The integration of the models is performed using a step function approximation of water demand curves from SWAP, and by relating the demand tranches to the priority scheme in WEAP. In order to do so, a modified version of SWAP was developed, called SWEAP, that has the Planning Area delimitations of WEAP, a Maximum Entropy Model to estimate evenly sized steps (tranches) of water derived demand functions, and the translation of water tranches into crop land. In addition, a modified version of WEAP was created, called ECONWEAP, with minor structural changes for the incorporation of land decisions from SWEAP and a series of iterations run via an external VBA script. This paper shows the validity of this integration by comparing revenues from WEAP vs. ECONWEAP as well as an assessment of the approximation of tranches. Results show a significant increase in the resulting agricultural revenues for our case study in California's Central Valley using ECONWEAP while maintaining the same hydrology and regional water flows. These results highlight the gains from allocating water based on its economic value compared to priority-based water allocation systems. Furthermore, this work shows the potential of integrating optimization and simulation-based hydrologic models like ECONWEAP.

  8. A LARGE-SCALE SIMULATION OF INTERNATIONAL MARITIME CONTAINER SHIPPING CONSIDERING OPTIMAL BEHAVIOR OF SHIPPERS AND OCEANGOING CARRIERS

    NASA Astrophysics Data System (ADS)

    Shibasaki, Ryuichi; Watanabe, Tomihiro; Ieda, Hitoshi

    This paper develops a large-scale simulation model of the international maritime container shipping industry considering the optimal behaviors of both shippers and oceangoing carriers, in order to measure the impact of port and international logistics policies for each country including Japan. Concretely, the authors develop a short-term model (an income maximization model of carriers) including shippers' choice of carrier when maritime cargo shipping demand between ports is given, and a mid-term model (a Nash equilibrium model of shippers and carriers) including shippers' choice of import/export port and hinterland transport route and carriers' profit maximization behavior when cargo shipping demand between regions is given. The developed model is applied to the actual large-scale international maritime container shipping network in Eastern Asia. From a trial calculation based on actual cargo shipping demand, the performance of the model is validated in terms of convergence and reproducibility. Also, the sensitivity of the model output when the actual port policies are taken into account is confirmed.

  9. Robust scalable stabilisability conditions for large-scale heterogeneous multi-agent systems with uncertain nonlinear interactions: towards a distributed computing architecture

    NASA Astrophysics Data System (ADS)

    Manfredi, Sabato

    2016-06-01

    Large-scale dynamic systems are becoming highly pervasive in their occurrence, with applications ranging from systems biology and environment monitoring to sensor networks and power systems. They are characterised by high dimensionality, complexity, and uncertainty in the node dynamics/interactions, which require increasingly computationally demanding methods for analysis and control design as the network size and node system/interaction complexity increase. Therefore, it is a challenging problem to find scalable computational methods for distributed control design of large-scale networks. In this paper, we investigate the robust distributed stabilisation problem of large-scale nonlinear multi-agent systems (briefly MASs) composed of non-identical (heterogeneous) linear dynamical systems coupled by uncertain nonlinear time-varying interconnections. By employing Lyapunov stability theory and the linear matrix inequality (LMI) technique, new conditions are given for the distributed control design of large-scale MASs that can be easily solved with the MATLAB toolbox. The stabilisability of each node dynamic is a sufficient assumption to design a global stabilising distributed control. The proposed approach improves some of the existing LMI-based results on MASs by both overcoming their computational limits and extending the applicative scenario to large-scale nonlinear heterogeneous MASs. Additionally, the proposed LMI conditions are further reduced in terms of computational requirement in the case of weakly heterogeneous MASs, which is a common scenario in real applications where the network nodes and links are affected by parameter uncertainties. One of the main advantages of the proposed approach is that it allows moving from a centralised towards a distributed computing architecture, so that the expensive computational workload spent to solve LMIs may be shared among processors located at the networked nodes, thus increasing the scalability of the approach than the network

  10. Optimization of Large-Scale Culture Conditions for the Production of Cordycepin with Cordyceps militaris by Liquid Static Culture

    PubMed Central

    Kang, Chao; Wen, Ting-Chi; Kang, Ji-Chuan; Meng, Ze-Bing; Li, Guang-Rong; Hyde, Kevin D.

    2014-01-01

    Cordycepin is one of the most important bioactive compounds produced by species of Cordyceps sensu lato, but it is hard to produce large amounts of this substance in industrial production. In this work, single factor design, Plackett-Burman design, and central composite design were employed to establish the key factors and identify optimal culture conditions which improved cordycepin production. Using these culture conditions, a maximum production of cordycepin was 2008.48 mg/L for 700 mL working volume in the 1000 mL glass jars and total content of cordycepin reached 1405.94 mg/bottle. This method provides an effective way for increasing the cordycepin production at a large scale. The strategies used in this study could have a wide application in other fermentation processes. PMID:25054182

  11. Optimization of large-scale culture conditions for the production of cordycepin with Cordyceps militaris by liquid static culture.

    PubMed

    Kang, Chao; Wen, Ting-Chi; Kang, Ji-Chuan; Meng, Ze-Bing; Li, Guang-Rong; Hyde, Kevin D

    2014-01-01

    Cordycepin is one of the most important bioactive compounds produced by species of Cordyceps sensu lato, but it is hard to produce large amounts of this substance in industrial production. In this work, single factor design, Plackett-Burman design, and central composite design were employed to establish the key factors and identify optimal culture conditions which improved cordycepin production. Using these culture conditions, a maximum production of cordycepin was 2008.48 mg/L for 700 mL working volume in the 1000 mL glass jars and total content of cordycepin reached 1405.94 mg/bottle. This method provides an effective way for increasing the cordycepin production at a large scale. The strategies used in this study could have a wide application in other fermentation processes. PMID:25054182

  12. Assimilation of satellite data to optimize large-scale hydrological model parameters: a case study for the SWOT mission

    NASA Astrophysics Data System (ADS)

    Pedinotti, V.; Boone, A.; Ricci, S.; Biancamaria, S.; Mognard, N.

    2014-11-01

    During the last few decades, satellite measurements have been widely used to study the continental water cycle, especially in regions where in situ measurements are not readily available. The future Surface Water and Ocean Topography (SWOT) satellite mission will deliver maps of water surface elevation (WSE) with an unprecedented resolution and provide observation of rivers wider than 100 m and water surface areas greater than approximately 250 x 250 m over continental surfaces between 78° S and 78° N. This study aims to investigate the potential of SWOT data for parameter optimization for large-scale river routing models. The method consists in applying a data assimilation approach, the extended Kalman filter (EKF) algorithm, to correct the Manning roughness coefficients of the ISBA (Interactions between Soil, Biosphere, and Atmosphere)-TRIP (Total Runoff Integrating Pathways) continental hydrologic system. Parameters such as the Manning coefficient, used within such models to describe water basin characteristics, are generally derived from geomorphological relationships, which leads to significant errors at reach and large scales. The current study focuses on the Niger Basin, a transboundary river. Since the SWOT observations are not available yet and also to assess the proposed assimilation method, the study is carried out under the framework of an observing system simulation experiment (OSSE). It is assumed that modeling errors are only due to uncertainties in the Manning coefficient. The true Manning coefficients are then supposed to be known and are used to generate synthetic SWOT observations over the period 2002-2003. The impact of the assimilation system on the Niger Basin hydrological cycle is then quantified. The optimization of the Manning coefficient using the EKF (extended Kalman filter) algorithm over an 18-month period led to a significant improvement of the river water levels. The relative bias of the water level is globally improved (a 30
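
    The EKF analysis step used for the parameter correction can be written generically as below; in the study the state would hold the Manning coefficients and the observation operator would involve the ISBA-TRIP model, whereas the small linear example here is purely illustrative.

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """One extended Kalman filter analysis step.

    x, P : prior state estimate and covariance
    z    : observation vector (here it would be SWOT water surface elevations)
    h    : observation operator, z_pred = h(x)
    H    : Jacobian of h evaluated at x
    R    : observation-error covariance
    """
    y = z - h(x)                                  # innovation
    S = H @ P @ H.T + R                           # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Toy example: two parameters observed through a linear operator.
H = np.array([[1.0, 0.5], [0.2, 1.0]])
x, P = np.array([0.03, 0.05]), 0.01 * np.eye(2)
z = np.array([0.06, 0.07])
x, P = ekf_update(x, P, z, h=lambda v: H @ v, H=H, R=0.001 * np.eye(2))
```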

  13. Optimized circulation and weather type classifications relating large-scale atmospheric conditions to local PM10 concentrations in Bavaria

    NASA Astrophysics Data System (ADS)

    Weitnauer, C.; Beck, C.; Jacobeit, J.

    2013-12-01

    In the last decades the critical increase of the emission of air pollutants like nitrogen dioxide, sulfur oxides and particulate matter especially in urban areas has become a problem for the environment as well as human health. Several studies confirm a risk of high concentration episodes of particulate matter with an aerodynamic diameter < 10 μm (PM10) for the respiratory tract or cardiovascular diseases. Furthermore it is known that local meteorological and large scale atmospheric conditions are important influencing factors on local PM10 concentrations. With climate changing rapidly, these connections need to be better understood in order to provide estimates of climate change related consequences for air quality management purposes. For quantifying the link between large-scale atmospheric conditions and local PM10 concentrations circulation- and weather type classifications are used in a number of studies by using different statistical approaches. Thus far only few systematic attempts have been made to modify consisting or to develop new weather- and circulation type classifications in order to improve their ability to resolve local PM10 concentrations. In this contribution existing weather- and circulation type classifications, performed on daily 2.5 x 2.5 gridded parameters of the NCEP/NCAR reanalysis data set, are optimized with regard to their discriminative power for local PM10 concentrations at 49 Bavarian measurement sites for the period 1980 to 2011. Most of the PM10 stations are situated in urban areas covering urban background, traffic and industry related pollution regimes. The range of regimes is extended by a few rural background stations. To characterize the correspondence between the PM10 measurements of the different stations by spatial patterns, a regionalization by an s-mode principal component analysis is realized on the high-pass filtered data. The optimization of the circulation- and weather types is implemented using two representative

  14. Analysis of the electricity demand of Greece for optimal planning of a large-scale hybrid renewable energy system

    NASA Astrophysics Data System (ADS)

    Tyralis, Hristos; Karakatsanis, Georgios; Tzouka, Katerina; Mamassis, Nikos

    2015-04-01

    The Greek electricity system is examined for the period 2002-2014. The demand load data are analysed at various time scales (hourly, daily, seasonal and annual) and are related to the mean daily temperature and the gross domestic product (GDP) of Greece for the same time period. The prediction of energy demand, a product of the Greek Independent Power Transmission Operator, is also compared with the demand load. Interesting results are derived regarding the change in the electricity demand pattern after the year 2010. This change is related to the decrease of the GDP during the period 2010-2014. The results of the analysis will be used in the development of an energy forecasting system which will be a part of a framework for optimal planning of a large-scale hybrid renewable energy system in which hydropower plays the dominant role. Acknowledgement: This research was funded by the Greek General Secretariat for Research and Technology through the research project Combined REnewable Systems for Sustainable ENergy DevelOpment (CRESSENDO; grant number 5145)

  15. Assessing Impact of Large-Scale Distributed Residential HVAC Control Optimization on Electricity Grid Operation and Renewable Energy Integration

    NASA Astrophysics Data System (ADS)

    Corbin, Charles D.

    Demand management is an important component of the emerging Smart Grid, and a potential solution to the supply-demand imbalance occurring increasingly as intermittent renewable electricity is added to the generation mix. Model predictive control (MPC) has shown great promise for controlling HVAC demand in commercial buildings, making it an ideal solution to this problem. MPC is believed to hold similar promise for residential applications, yet very few examples exist in the literature despite a growing interest in residential demand management. This work explores the potential for residential buildings to shape electric demand at the distribution feeder level in order to reduce peak demand, reduce system ramping, and increase load factor using detailed sub-hourly simulations of thousands of buildings coupled to distribution power flow software. More generally, this work develops a methodology for the directed optimization of residential HVAC operation using a distributed but directed MPC scheme that can be applied to today's programmable thermostat technologies to address the increasing variability in electric supply and demand. Case studies incorporating varying levels of renewable energy generation demonstrate the approach and highlight important considerations for large-scale residential model predictive control.

  16. Optimization of a large-scale gene disruption protocol in Dictyostelium and analysis of conserved genes of unknown function

    PubMed Central

    Torija, Patricia; Robles, Alicia; Escalante, Ricardo

    2006-01-01

    Background Development of the post-genomic age in Dictyostelium will require rapid and reliable methods to disrupt genes, allowing the analysis of entire gene families and perhaps the complete knock-out analysis of all the protein-coding genes present in the Dictyostelium genome. Results Here we present an optimized protocol based on the previously described construction of gene disruption vectors by in vitro transposition. Our method allows rapid selection of the construct by a simple PCR approach and subsequent sequencing. Disruption constructs were amplified by PCR and the products were directly transformed into Dictyostelium cells. The selection of homologous recombination events was also performed by PCR. We have constructed 41 disruption vectors to target genes of unknown function, highly conserved between Dictyostelium and human, but absent from the genomes of S. cerevisiae and S. pombe. 28 genes were successfully disrupted. Conclusion This is the first step towards understanding the function of these conserved genes and exemplifies the ease of undertaking large-scale disruption analysis in Dictyostelium. PMID:16945142

  17. Dynamic multi-swarm particle swarm optimizer using parallel PC cluster systems for global optimization of large-scale multimodal functions

    NASA Astrophysics Data System (ADS)

    Fan, Shu-Kai S.; Chang, Ju-Ming

    2010-05-01

    This article presents a novel parallel multi-swarm optimization (PMSO) algorithm with the aim of enhancing the search ability of standard single-swarm PSOs for global optimization of very large-scale multimodal functions. Different from the existing multi-swarm structures, the multiple swarms work in parallel, and the search space is partitioned evenly and dynamically assigned in a weighted manner via the roulette wheel selection (RWS) mechanism. This parallel, distributed framework of the PMSO algorithm is developed based on a master-slave paradigm, which is implemented on a cluster of PCs using message passing interface (MPI) for information interchange among swarms. The PMSO algorithm handles multiple swarms simultaneously and each swarm performs PSO operations of its own independently. In particular, one swarm is designated for global search and the others are for local search. The first part of the experimental comparison is made among the PMSO, standard PSO, and two state-of-the-art algorithms (CTSS and CLPSO) in terms of various un-rotated and rotated benchmark functions taken from the literature. In the second part, the proposed multi-swarm algorithm is tested on large-scale multimodal benchmark functions up to 300 dimensions. The results of the PMSO algorithm show great promise in solving high-dimensional problems.
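
    The sketch below is a minimal, serial particle swarm optimizer on the Rastrigin benchmark, included only to make the basic PSO update concrete; it reproduces neither the master-slave MPI parallelism nor the roulette-wheel space partitioning of the PMSO algorithm, and the inertia and acceleration coefficients are common textbook defaults rather than the authors' settings.

```python
import numpy as np

def rastrigin(x):
    # Classic multimodal benchmark with a global minimum of 0 at the origin.
    return 10 * x.shape[-1] + np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=-1)

rng = np.random.default_rng(1)
dim, n_particles, iters = 10, 40, 500
w, c1, c2 = 0.72, 1.49, 1.49                      # inertia and acceleration weights

pos = rng.uniform(-5.12, 5.12, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), rastrigin(pos)
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -5.12, 5.12)
    val = rastrigin(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best value found:", rastrigin(gbest))
```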

  18. Experimental validation of computational models for large-scale nonlinear ultrasound simulations in heterogeneous, absorbing fluid media

    NASA Astrophysics Data System (ADS)

    Martin, Elly; Treeby, Bradley E.

    2015-10-01

    To increase the effectiveness of high intensity focused ultrasound (HIFU) treatments, prediction of ultrasound propagation in biological tissues is essential, particularly where bones are present in the field. This requires complex full-wave computational models which account for nonlinearity, absorption, and heterogeneity. These models must be properly validated but there is a lack of analytical solutions which apply in these conditions. Experimental validation of the models is therefore essential. However, accurate measurement of HIFU fields is not trivial. Our aim is to establish rigorous methods for obtaining reference data sets with which to validate tissue realistic simulations of ultrasound propagation. Here, we present preliminary measurements which form an initial validation of simulations performed using the k-Wave MATLAB toolbox. Acoustic pressure was measured on a plane in the field of a focused ultrasound transducer in free field conditions to be used as a Dirichlet boundary condition for simulations. Rectangular and wedge shaped olive oil scatterers were placed in the field and further pressure measurements were made in the far field for comparison with simulations. Good qualitative agreement was observed between the measured and simulated nonlinear pressure fields.

  19. A Numerical Comparison of Barrier and Modified Barrier Methods for Large-Scale Bound-Constrained Optimization

    NASA Technical Reports Server (NTRS)

    Nash, Stephen G.; Polyak, R.; Sofer, Ariela

    1994-01-01

    When a classical barrier method is applied to the solution of a nonlinear programming problem with inequality constraints, the Hessian matrix of the barrier function becomes increasingly ill-conditioned as the solution is approached. As a result, it may be desirable to consider alternative numerical algorithms. We compare the performance of two methods motivated by barrier functions. The first is a stabilized form of the classical barrier method, where a numerically stable approximation to the Newton direction is used when the barrier parameter is small. The second is a modified barrier method where a barrier function is applied to a shifted form of the problem, and the resulting barrier terms are scaled by estimates of the optimal Lagrange multipliers. The condition number of the Hessian matrix of the resulting modified barrier function remains bounded as the solution to the constrained optimization problem is approached. Both of these techniques can be used in the context of a truncated-Newton method, and hence can be applied to large problems, as well as on parallel computers. In this paper, both techniques are applied to problems with bound constraints and we compare their practical behavior.
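
    For orientation, a minimal sketch of the classical log-barrier idea on a small bound-constrained problem is given below; the test function, barrier schedule, and the use of a derivative-free inner solver are illustrative choices, and neither the stabilized Newton direction nor the shifted, multiplier-scaled modified barrier studied in the paper is implemented.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Smooth objective whose unconstrained minimizer (1.5, -0.5) violates the bounds.
    return (x[0] - 1.5)**2 + (x[1] + 0.5)**2

lo, hi = np.zeros(2), np.ones(2)          # bounds 0 <= x <= 1

def barrier(x, mu):
    # Classical log barrier: +inf outside the open box, f plus the barrier term inside.
    if np.any(x <= lo) or np.any(x >= hi):
        return np.inf
    return f(x) - mu * (np.sum(np.log(x - lo)) + np.sum(np.log(hi - x)))

x = np.full(2, 0.5)                       # strictly feasible starting point
mu = 1.0
for _ in range(12):
    res = minimize(lambda z: barrier(z, mu), x, method="Nelder-Mead",
                   options={"xatol": 1e-10, "fatol": 1e-12})
    x, mu = res.x, mu * 0.2               # warm start and shrink the barrier parameter
print("approximate constrained minimizer:", x)   # approaches (1.0, 0.0)
```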

  20. Multilevel algorithms for nonlinear optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Dennis, J. E., Jr.

    1994-01-01

    Multidisciplinary design optimization (MDO) gives rise to nonlinear optimization problems characterized by a large number of constraints that naturally occur in blocks. We propose a class of multilevel optimization methods motivated by the structure and number of constraints and by the expense of the derivative computations for MDO. The algorithms are an extension to the nonlinear programming problem of the successful class of local Brown-Brent algorithms for nonlinear equations. Our extensions allow the user to partition constraints into arbitrary blocks to fit the application, and they separately process each block and the objective function, restricted to certain subspaces. The methods use trust regions as a globalization strategy, and they have been shown to be globally convergent under reasonable assumptions. The multilevel algorithms can be applied to all classes of MDO formulations. Multilevel algorithms for solving nonlinear systems of equations are a special case of the multilevel optimization methods. In this case, they can be viewed as a trust-region globalization of the Brown-Brent class.

  1. Assimilation of satellite data to optimize large scale hydrological model parameters: a case study for the SWOT mission

    NASA Astrophysics Data System (ADS)

    Pedinotti, V.; Boone, A.; Ricci, S.; Biancamaria, S.; Mognard, N.

    2014-04-01

    During the last few decades, satellite measurements have been widely used to study the continental water cycle, especially in regions where in situ measurements are not readily available. The future Surface Water and Ocean Topography (SWOT) satellite mission will deliver maps of water surface elevation (WSE) with an unprecedented resolution and provide observation of rivers wider than 100 m and water surface areas greater than approximately 250 m × 250 m over continental surfaces between 78° S and 78° N. This study aims to investigate the potential of SWOT data for parameter optimization in large-scale river routing models, which are typically employed in Land Surface Models (LSM) for global scale applications. The method consists of applying a data assimilation approach, the Extended Kalman Filter (EKF) algorithm, to correct the Manning roughness coefficients of the ISBA-TRIP Continental Hydrologic System. Indeed, parameters such as the Manning coefficient, used within such models to describe water basin characteristics, are generally derived from geomorphological relationships, which might have locally significant errors. The current study focuses on the Niger basin, a trans-boundary river, which is the main source of fresh water for all the riparian countries. In addition, geopolitical issues in this region can restrict the exchange of hydrological data, so that SWOT should help improve this situation by making hydrological data freely available. In a previous study, the model was first evaluated against in-situ and satellite derived data sets within the framework of the international African Monsoon Multi-disciplinary Analysis (AMMA) project. Since SWOT observations are not yet available, and in order to assess the proposed assimilation method, the study is carried out within the framework of an Observing System Simulation Experiment (OSSE). It is assumed that modeling errors are only due to uncertainties in the Manning coefficient. The true Manning
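
    A minimal sketch of the underlying idea, correcting a Manning-type roughness coefficient with an Extended Kalman Filter from water-surface-elevation observations, is given below for a toy single-reach channel; the observation operator, noise levels, and channel geometry are invented for illustration and are unrelated to ISBA-TRIP or actual SWOT error budgets.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy observation operator: water depth from Manning's equation for a wide
# rectangular channel, observed as water surface elevation (bed + depth).
W, S, z_bed = 200.0, 1e-4, 50.0              # width [m], slope [-], bed elevation [m]

def wse(n, Q):
    return z_bed + (n * Q / (W * np.sqrt(S)))**0.6

def dwse_dn(n, Q):
    # Analytical Jacobian of the observation operator with respect to n.
    return 0.6 * (Q / (W * np.sqrt(S)))**0.6 * n**(-0.4)

n_true, n_est, P = 0.035, 0.060, 1e-3        # truth, biased first guess, variance
R, Q_proc = 0.10**2, 1e-6                    # obs-error and model-error variances

for _ in range(60):                          # one synthetic overpass per cycle
    Q = rng.uniform(500.0, 3000.0)           # known discharge forcing [m3/s]
    y = wse(n_true, Q) + rng.normal(0.0, np.sqrt(R))   # synthetic WSE observation

    P = P + Q_proc                           # forecast step (persistence of n)
    H = dwse_dn(n_est, Q)
    K = P * H / (H * P * H + R)              # scalar Kalman gain
    n_est = n_est + K * (y - wse(n_est, Q))  # analysis update
    P = (1.0 - K * H) * P

print(f"true n = {n_true:.3f}, estimated n = {n_est:.3f}")
```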

  2. NONLINEAR FORCE-FREE FIELD EXTRAPOLATION OF A CORONAL MAGNETIC FLUX ROPE SUPPORTING A LARGE-SCALE SOLAR FILAMENT FROM A PHOTOSPHERIC VECTOR MAGNETOGRAM

    SciTech Connect

    Jiang, Chaowei; Wu, S. T.; Hu, Qiang; Feng, Xueshang

    2014-05-10

    Solar filaments are commonly thought to be supported in magnetic dips, in particular, in those of magnetic flux ropes (FRs). In this Letter, based on the observed photospheric vector magnetogram, we implement a nonlinear force-free field (NLFFF) extrapolation of a coronal magnetic FR that supports a large-scale intermediate filament between an active region and a weak polarity region. This result is a first, in the sense that current NLFFF extrapolations including the presence of FRs are limited to relatively small-scale filaments that are close to sunspots and along main polarity inversion lines (PILs) with strong transverse field and magnetic shear, and the existence of an FR is usually predictable. In contrast, the present filament lies along the weak-field region (photospheric field strength ≲ 100 G), where the PIL is very fragmented due to small parasitic polarities on both sides of the PIL and the transverse field has a low signal-to-noise ratio. Thus, extrapolating a large-scale FR in such a case represents a far more difficult challenge. We demonstrate that our CESE-MHD-NLFFF code is sufficient for the challenge. The numerically reproduced magnetic dips of the extrapolated FR match observations of the filament and its barbs very well, which strongly supports the FR-dip model for filaments. The filament is stably sustained because the FR is weakly twisted and strongly confined by the overlying closed arcades.

  3. Nonlinear Force-free Field Extrapolation of a Coronal Magnetic Flux Rope Supporting a Large-scale Solar Filament from a Photospheric Vector Magnetogram

    NASA Astrophysics Data System (ADS)

    Jiang, Chaowei; Wu, S. T.; Feng, Xueshang; Hu, Qiang

    2014-05-01

    Solar filaments are commonly thought to be supported in magnetic dips, in particular, in those of magnetic flux ropes (FRs). In this Letter, based on the observed photospheric vector magnetogram, we implement a nonlinear force-free field (NLFFF) extrapolation of a coronal magnetic FR that supports a large-scale intermediate filament between an active region and a weak polarity region. This result is a first, in the sense that current NLFFF extrapolations including the presence of FRs are limited to relatively small-scale filaments that are close to sunspots and along main polarity inversion lines (PILs) with strong transverse field and magnetic shear, and the existence of an FR is usually predictable. In contrast, the present filament lies along the weak-field region (photospheric field strength <~ 100 G), where the PIL is very fragmented due to small parasitic polarities on both sides of the PIL and the transverse field has a low signal-to-noise ratio. Thus, extrapolating a large-scale FR in such a case represents a far more difficult challenge. We demonstrate that our CESE-MHD-NLFFF code is sufficient for the challenge. The numerically reproduced magnetic dips of the extrapolated FR match observations of the filament and its barbs very well, which strongly supports the FR-dip model for filaments. The filament is stably sustained because the FR is weakly twisted and strongly confined by the overlying closed arcades.

  4. Optimization of a Fluorescence-Based Assay for Large-Scale Drug Screening against Babesia and Theileria Parasites

    PubMed Central

    Terkawi, Mohamed Alaa; Youssef, Mohamed Ahmed; El Said, El Said El Shirbini; Elsayed, Gehad; El-Khodery, Sabry; El-Ashker, Maged; Elsify, Ahmed; Omar, Mosaab; Salama, Akram; Yokoyama, Naoaki; Igarashi, Ikuo

    2015-01-01

    A rapid and accurate assay for evaluating antibabesial drugs on a large scale is required for the discovery of novel chemotherapeutic agents against Babesia parasites. In the current study, we evaluated the usefulness of a fluorescence-based assay for determining the efficacies of antibabesial compounds against bovine and equine hemoparasites in in vitro cultures. Three different hematocrits (HCTs; 2.5%, 5%, and 10%) were used without daily replacement of the medium. The results of a high-throughput screening assay revealed that the best HCT was 2.5% for bovine Babesia parasites and 5% for equine Babesia and Theileria parasites. The IC50 values of diminazene aceturate obtained by fluorescence and microscopy did not differ significantly. Likewise, the IC50 values of luteolin, pyronaridine tetraphosphate, nimbolide, gedunin, and enoxacin did not differ between the two methods. In conclusion, our fluorescence-based assay uses low HCT and does not require daily replacement of culture medium, making it highly suitable for in vitro large-scale drug screening against Babesia and Theileria parasites that infect cattle and horses. PMID:25915529

  5. Optimization of a Fluorescence-Based Assay for Large-Scale Drug Screening against Babesia and Theileria Parasites.

    PubMed

    Rizk, Mohamed Abdo; El-Sayed, Shimaa Abd El-Salam; Terkawi, Mohamed Alaa; Youssef, Mohamed Ahmed; El Said, El Said El Shirbini; Elsayed, Gehad; El-Khodery, Sabry; El-Ashker, Maged; Elsify, Ahmed; Omar, Mosaab; Salama, Akram; Yokoyama, Naoaki; Igarashi, Ikuo

    2015-01-01

    A rapid and accurate assay for evaluating antibabesial drugs on a large scale is required for the discovery of novel chemotherapeutic agents against Babesia parasites. In the current study, we evaluated the usefulness of a fluorescence-based assay for determining the efficacies of antibabesial compounds against bovine and equine hemoparasites in in vitro cultures. Three different hematocrits (HCTs; 2.5%, 5%, and 10%) were used without daily replacement of the medium. The results of a high-throughput screening assay revealed that the best HCT was 2.5% for bovine Babesia parasites and 5% for equine Babesia and Theileria parasites. The IC50 values of diminazene aceturate obtained by fluorescence and microscopy did not differ significantly. Likewise, the IC50 values of luteolin, pyronaridine tetraphosphate, nimbolide, gedunin, and enoxacin did not differ between the two methods. In conclusion, our fluorescence-based assay uses low HCT and does not require daily replacement of culture medium, making it highly suitable for in vitro large-scale drug screening against Babesia and Theileria parasites that infect cattle and horses. PMID:25915529
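
    Extracting an IC50 from such an assay usually amounts to fitting a sigmoidal dose-response curve to the measured growth signal; the sketch below does this with a four-parameter logistic model on synthetic data (the concentrations, signal levels, and noise are made-up illustrative values, not assay data).

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    # Four-parameter logistic dose-response curve (signal vs. drug concentration).
    return bottom + (top - bottom) / (1.0 + (conc / ic50)**hill)

rng = np.random.default_rng(3)
conc = np.logspace(-3, 2, 10)                        # drug concentrations, e.g. in uM
true = four_pl(conc, 50.0, 1000.0, 0.8, 1.2)         # synthetic parasite-growth signal
signal = true + rng.normal(0.0, 25.0, size=conc.size)

p0 = [signal.min(), signal.max(), 1.0, 1.0]          # rough initial guesses
popt, pcov = curve_fit(four_pl, conc, signal, p0=p0, maxfev=10000)
bottom, top, ic50, hill = popt
print(f"fitted IC50 = {ic50:.2f} (true value 0.80)")
```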

  6. Final Report on DOE Project entitled Dynamic Optimized Advanced Scheduling of Bandwidth Demands for Large-Scale Science Applications

    SciTech Connect

    Ramamurthy, Byravamurthy

    2014-05-05

    In this project, we developed scheduling frameworks and algorithms for dynamic bandwidth demands of large-scale science applications. Apart from theoretical approaches such as Integer Linear Programming, Tabu Search and Genetic Algorithm heuristics, we utilized practical data from the ESnet OSCARS project (from our DOE lab partners) to conduct realistic simulations of our approaches. We have disseminated our work through conference paper presentations, journal papers and a book chapter. In this project we addressed the problem of scheduling lightpaths over optical wavelength division multiplexed (WDM) networks and published several conference and journal papers on this topic. We also addressed the problem of joint allocation of computing, storage and networking resources in Grid/Cloud networks and proposed energy-efficient mechanisms for operating optical WDM networks.

  7. Large-scale tracking and classification for automatic analysis of cell migration and proliferation, and experimental optimization of high-throughput screens of neuroblastoma cells.

    PubMed

    Harder, Nathalie; Batra, Richa; Diessl, Nicolle; Gogolin, Sina; Eils, Roland; Westermann, Frank; König, Rainer; Rohr, Karl

    2015-06-01

    Computational approaches for automatic analysis of image-based high-throughput and high-content screens are gaining increased importance to cope with the large amounts of data generated by automated microscopy systems. Typically, automatic image analysis is used to extract phenotypic information once all images of a screen have been acquired. However, also in earlier stages of large-scale experiments image analysis is important, in particular, to support and accelerate the tedious and time-consuming optimization of the experimental conditions and technical settings. We here present a novel approach for automatic, large-scale analysis and experimental optimization with application to a screen on neuroblastoma cell lines. Our approach consists of cell segmentation, tracking, feature extraction, classification, and model-based error correction. The approach can be used for experimental optimization by extracting quantitative information which allows experimentalists to optimally choose and to verify the experimental parameters. This involves systematically studying the global cell movement and proliferation behavior. Moreover, we performed a comprehensive phenotypic analysis of a large-scale neuroblastoma screen including the detection of rare division events such as multi-polar divisions. Major challenges of the analyzed high-throughput data are the relatively low spatio-temporal resolution in conjunction with densely growing cells as well as the high variability of the data. To account for the data variability we optimized feature extraction and classification, and introduced a gray value normalization technique as well as a novel approach for automatic model-based correction of classification errors. In total, we analyzed 4,400 real image sequences, covering observation periods of around 120 h each. We performed an extensive quantitative evaluation, which showed that our approach yields high accuracies of 92.2% for segmentation, 98.2% for tracking, and 86.5% for

  8. Understanding Uncertainties in Non-Linear Population Trajectories: A Bayesian Semi-Parametric Hierarchical Approach to Large-Scale Surveys of Coral Cover

    PubMed Central

    Vercelloni, Julie; Caley, M. Julian; Kayal, Mohsen; Low-Choy, Samantha; Mengersen, Kerrie

    2014-01-01

    Recently, attempts to improve decision making in species management have focussed on uncertainties associated with modelling temporal fluctuations in populations. Reducing model uncertainty is challenging; while larger samples improve estimation of species trajectories and reduce statistical errors, they typically amplify variability in observed trajectories. In particular, traditional modelling approaches aimed at estimating population trajectories usually do not account well for nonlinearities and uncertainties associated with multi-scale observations characteristic of large spatio-temporal surveys. We present a Bayesian semi-parametric hierarchical model for simultaneously quantifying uncertainties associated with model structure and parameters, and scale-specific variability over time. We estimate uncertainty across a four-tiered spatial hierarchy of coral cover from the Great Barrier Reef. Coral variability is well described; however, our results show that, in the absence of additional model specifications, conclusions regarding coral trajectories become highly uncertain when considering multiple reefs, suggesting that management should focus more at the scale of individual reefs. The approach presented facilitates the description and estimation of population trajectories and associated uncertainties when variability cannot be attributed to specific causes and origins. We argue that our model can unlock value contained in large-scale datasets, provide guidance for understanding sources of uncertainty, and support better informed decision making. PMID:25364915

  9. A Framework for Parallel Nonlinear Optimization by Partitioning Localized Constraints

    SciTech Connect

    Xu, You; Chen, Yixin

    2008-06-28

    We present a novel parallel framework for solving large-scale continuous nonlinear optimization problems based on constraint partitioning. The framework distributes constraints and variables to parallel processors and uses an existing solver to handle the partitioned subproblems. In contrast to most previous decomposition methods that require either separability or convexity of constraints, our approach is based on a new constraint partitioning theory and can handle nonconvex problems with inseparable global constraints. We also propose a hypergraph partitioning method to recognize the problem structure. Experimental results show that the proposed parallel algorithm can efficiently solve some difficult test cases.

  10. Large scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Doolin, B. F.

    1975-01-01

    Classes of large scale dynamic systems were discussed in the context of modern control theory. Specific examples discussed were in the technical fields of aeronautics, water resources and electric power.

  11. A Large Scale (N=400) Investigation of Gray Matter Differences in Schizophrenia Using Optimized Voxel-based Morphometry

    PubMed Central

    Meda, Shashwath A.; Giuliani, Nicole R.; Calhoun, Vince D.; Jagannathan, Kanchana; Schretlen, David J.; Pulver, Anne; Cascella, Nicola; Keshavan, Matcheri; Kates, Wendy; Buchanan, Robert; Sharma, Tonmoy; Pearlson, Godfrey D.

    2008-01-01

    Background Many studies have employed voxel-based morphometry (VBM) of MRI images as an automated method of investigating cortical gray matter differences in schizophrenia. However, results from these studies vary widely, likely due to different methodological or statistical approaches. Objective To use VBM to investigate gray matter differences in schizophrenia in a sample significantly larger than any published to date, and to increase statistical power sufficiently to reveal differences missed in smaller analyses. Methods Magnetic resonance whole brain images were acquired from four geographic sites, all using the same model 1.5T scanner and software version, and combined to form a sample of 200 patients with both first episode and chronic schizophrenia and 200 healthy controls, matched for age, gender and scanner location. Gray matter concentration was assessed and compared using optimized VBM. Results Compared to the healthy controls, schizophrenia patients showed significantly less gray matter concentration in multiple cortical and subcortical regions, some previously unreported. Overall, we found lower concentrations of gray matter in regions identified in prior studies, most of which reported only subsets of the affected areas. Conclusions Gray matter differences in schizophrenia are most comprehensively elucidated using a large, diverse and representative sample. PMID:18378428

  12. Large-scale sequential quadratic programming algorithms

    SciTech Connect

    Eldersveld, S.K.

    1992-09-01

    The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.
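
    For problems of moderate size, a dense SQP method is directly available through SciPy's SLSQP routine; the small sketch below only illustrates the class of method on a toy nonlinearly constrained problem and uses none of the sparse, reduced-Hessian, MINOS-based machinery described in the report.

```python
import numpy as np
from scipy.optimize import minimize

# Minimize a nonlinear objective subject to one equality and one inequality constraint.
def objective(x):
    return (x[0] - 1.0)**2 + (x[1] - 2.5)**2

constraints = [
    {"type": "eq",   "fun": lambda x: x[0]**2 + x[1]**2 - 4.0},   # stay on a circle of radius 2
    {"type": "ineq", "fun": lambda x: x[1] - x[0]},               # require x1 >= x0
]

res = minimize(objective, x0=np.array([0.5, 0.5]), method="SLSQP",
               constraints=constraints, options={"ftol": 1e-10, "maxiter": 200})
print("solution:", res.x, "objective:", res.fun)
```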

  13. Large Scale Computing

    NASA Astrophysics Data System (ADS)

    Capiluppi, Paolo

    2005-04-01

    Large Scale Computing is acquiring an important role in the field of data analysis and treatment for many Sciences and also for some Social activities. The present paper discusses the characteristics of Computing when it becomes "Large Scale" and the current state of the art for some particular applications needing such large distributed resources and organization. High Energy Particle Physics (HEP) Experiments are discussed in this respect; in particular the Large Hadron Collider (LHC) Experiments are analyzed. The Computing Models of LHC Experiments represent the current prototype implementation of Large Scale Computing and describe the level of maturity of the possible deployment solutions. Some of the most recent results from measurements of the performance and functionality of the LHC Experiments' testing are discussed.

  14. Co-optimizing Generation and Transmission Expansion with Wind Power in Large-Scale Power Grids Implementation in the US Eastern Interconnection

    DOE PAGES Beta

    You, Shutang; Hadley, Stanton W.; Shankar, Mallikarjun; Liu, Yilu

    2016-01-12

    This paper studies the generation and transmission expansion co-optimization problem with a high wind power penetration rate in the US Eastern Interconnection (EI) power grid. The generation and transmission expansion problem for the EI system is modeled as a mixed-integer programming (MIP) problem. The paper also analyzes a time series generation method to capture the variation and correlation of both load and wind power across regions. The obtained series can easily be introduced into the expansion planning problem and solved with existing MIP solvers. Simulation results show that the proposed planning model and series generation method can improve the expansion result significantly by modeling more detailed information on wind and load variation among regions in the US EI system. Moreover, the improved expansion plan that combines generation and transmission will help system planners and policy makers maximize social welfare in large-scale power grids.
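
    At its core, an expansion model of this kind couples binary build decisions to continuous dispatch in a mixed-integer program; the deliberately tiny sketch below (three candidate units, one demand period, invented capacities and costs, and SciPy's milp in place of a production MIP solver) shows that structure.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Three candidate units: build (binary) and dispatch (MW) decision variables.
cap  = np.array([400.0, 300.0, 250.0])        # capacity of each candidate unit [MW]
capx = np.array([900.0, 500.0, 350.0])        # annualized capital cost [$k]
opx  = np.array([1.0, 2.0, 3.5])              # operating cost [$k per MW]
demand = 600.0                                # peak demand to be covered [MW]

# Variable order: [b1, b2, b3, g1, g2, g3]; minimize capital + operating cost.
c = np.concatenate([capx, opx])

constraints = [
    # g_i - cap_i * b_i <= 0  (a unit can only be dispatched if it is built)
    LinearConstraint(np.hstack([-np.diag(cap), np.eye(3)]), -np.inf, 0.0),
    # g1 + g2 + g3 >= demand
    LinearConstraint(np.concatenate([np.zeros(3), np.ones(3)]).reshape(1, -1), demand, np.inf),
]
integrality = np.array([1, 1, 1, 0, 0, 0])    # builds are binary, dispatch is continuous
bounds = Bounds(np.zeros(6), np.concatenate([np.ones(3), cap]))

res = milp(c, constraints=constraints, integrality=integrality, bounds=bounds)
print("build decisions:", np.round(res.x[:3]), "dispatch [MW]:", np.round(res.x[3:]))
```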

  15. Solving nonlinear equality constrained multiobjective optimization problems using neural networks.

    PubMed

    Mestari, Mohammed; Benzirar, Mohammed; Saber, Nadia; Khouil, Meryem

    2015-10-01

    This paper develops a neural network architecture and a new processing method for solving, in real time, the nonlinear equality constrained multiobjective optimization problem (NECMOP), where several nonlinear objective functions must be optimized in a conflicting situation. In this processing method, the NECMOP is converted to an equivalent scalar optimization problem (SOP). The SOP is then decomposed into several separable subproblems processable in parallel and in a reasonable time by multiplexing switched capacitor circuits. The approach we propose makes use of a decomposition-coordination principle that allows nonlinearity to be treated at a local level and where coordination is achieved through the use of Lagrange multipliers. The modularity and regularity of the neural network architecture proposed herein make it suitable for very large scale integration implementation. An application to the resolution of a physical problem is given to show that the approach used here possesses some advantages from an algorithmic point of view, and provides resolution processes that are often simpler than the usual techniques. PMID:25647664
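
    The conversion of the NECMOP into an equivalent scalar problem can be illustrated with a plain weighted-sum scalarization handed to an off-the-shelf NLP solver, as below; the two objectives, the equality constraint, and the weights are invented for illustration, and nothing of the paper's neural-network, switched-capacitor processing scheme is reproduced.

```python
import numpy as np
from scipy.optimize import minimize

# Two conflicting objectives restricted to the unit circle x1^2 + x2^2 = 1.
f1 = lambda x: (x[0] - 1.0)**2 + x[1]**2
f2 = lambda x: x[0]**2 + (x[1] - 1.0)**2
constraint = {"type": "eq", "fun": lambda x: x[0]**2 + x[1]**2 - 1.0}

# Sweep the scalarization weight to trace out part of the Pareto front.
for w in (0.1, 0.5, 0.9):
    scalar = lambda x, w=w: w * f1(x) + (1.0 - w) * f2(x)
    res = minimize(scalar, x0=np.array([0.7, 0.7]), method="SLSQP",
                   constraints=[constraint])
    print(f"w={w:.1f}  x={np.round(res.x, 3)}  f1={f1(res.x):.3f}  f2={f2(res.x):.3f}")
```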

  16. Numerical solution of nonlinear algebraic equations in stiff ODE solving (1986-89) - Quasi-Newton updating for large scale nonlinear systems (1989-90). Final report, 1986-1990

    SciTech Connect

    Walker, H.F.

    1990-12-31

    During the 1986-1989 project period, two major areas of research developed into which most of the work fell: "matrix-free" methods for solving linear systems, by which we mean iterative methods that require only the action of the coefficient matrix on vectors and not the coefficient matrix itself, and Newton-like methods for underdetermined nonlinear systems. In the 1990 project period of the renewal grant, a third major area of research developed: inexact Newton and Newton iterative methods and their applications to large-scale nonlinear systems, especially those arising in discretized problems. An inexact Newton method is any method in which each step reduces the norm of the local linear model of the function of interest. A Newton iterative method is any implementation of Newton's method in which the linear systems that characterize Newton steps (the "Newton equations") are solved only approximately using an iterative linear solver. Newton iterative methods are properly considered special cases of inexact Newton methods. We describe the work in these areas and in other areas in this paper.
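
    SciPy provides a matrix-free Newton-Krylov solver of exactly this "Newton iterative" type; the sketch below applies it to a small discretized nonlinear boundary-value problem, where the grid size and the particular nonlinearity are illustrative choices.

```python
import numpy as np
from scipy.optimize import newton_krylov

n = 100
h = 1.0 / (n + 1)

def residual(u):
    # Discretized nonlinear two-point boundary value problem:
    #   u'' = exp(u) on (0, 1), with u(0) = u(1) = 0.
    u_pad = np.concatenate(([0.0], u, [0.0]))
    lap = (u_pad[:-2] - 2.0 * u_pad[1:-1] + u_pad[2:]) / h**2
    return lap - np.exp(u)

# Only residual evaluations are needed; Jacobian-vector products are approximated
# by finite differences inside the Krylov inner solver.
u0 = np.zeros(n)
u = newton_krylov(residual, u0, f_tol=1e-8)
print("max |residual| at solution:", np.max(np.abs(residual(u))))
```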

  17. Large-Scale Disasters

    NASA Astrophysics Data System (ADS)

    Gad-El-Hak, Mohamed

    "Extreme" events - including climatic events, such as hurricanes, tornadoes, and drought - can cause massive disruption to society, including large death tolls and property damage in the billions of dollars. Events in recent years have shown the importance of being prepared and that countries need to work together to help alleviate the resulting pain and suffering. This volume presents a review of the broad research field of large-scale disasters. It establishes a common framework for predicting, controlling and managing both manmade and natural disasters. There is a particular focus on events caused by weather and climate change. Other topics include air pollution, tsunamis, disaster modeling, the use of remote sensing and the logistics of disaster management. It will appeal to scientists, engineers, first responders and health-care professionals, in addition to graduate students and researchers who have an interest in the prediction, prevention or mitigation of large-scale disasters.

  18. Impact of ultrasound on solid-liquid extraction of phenolic compounds from maritime pine sawdust waste. Kinetics, optimization and large scale experiments.

    PubMed

    Meullemiestre, A; Petitcolas, E; Maache-Rezzoug, Z; Chemat, F; Rezzoug, S A

    2016-01-01

    Maritime pine sawdust, a by-product of the wood transformation industry, has been investigated as a potential source of polyphenols, which were extracted by ultrasound-assisted maceration (UAM). UAM was optimized to enhance the extraction efficiency of polyphenols and reduce processing time. First, a preliminary study was carried out to optimize the solid/liquid ratio (6 g of dry material per mL) and the particle size (0.26 cm²) by conventional maceration (CVM). Under these conditions, the optimum conditions for polyphenol extraction by UAM, obtained by response surface methodology, were 0.67 W/cm² for the ultrasonic intensity (UI), 40°C for the processing temperature (T) and 43 min for the sonication time (t). UAM was compared with CVM; the results showed that the quantity of polyphenols was improved by 40% (342.4 and 233.5 mg of catechin equivalent per 100 g of dry basis for UAM and CVM, respectively). A multistage cross-current extraction procedure allowed evaluating the real impact of UAM on the enhancement of solid-liquid extraction. The potential industrialization of this procedure was implemented through a transition from a lab sonicated reactor (3 L) to a large-scale one with a 30 L volume. PMID:26384903

  19. Social Emotional Optimization Algorithm for Nonlinear Constrained Optimization Problems

    NASA Astrophysics Data System (ADS)

    Xu, Yuechun; Cui, Zhihua; Zeng, Jianchao

    The nonlinear programming problem is an important branch of operations research and has been successfully applied to various real-life problems. In this paper, a new approach called the social emotional optimization algorithm (SEOA), a swarm intelligence technique that simulates human behavior guided by emotion, is used to solve this problem. Simulation results show that the social emotional optimization algorithm proposed in this paper is effective and efficient for nonlinear constrained programming problems.

  20. Nonlinear optimization for stochastic simulations.

    SciTech Connect

    Johnson, Michael M.; Yoshimura, Ann S.; Hough, Patricia Diane; Ammerlahn, Heidi R.

    2003-12-01

    This report describes research targeting development of stochastic optimization algorithms and their application to mission-critical optimization problems in which uncertainty arises. The first section of this report covers the enhancement of the Trust Region Parallel Direct Search (TRPDS) algorithm to address stochastic responses and the incorporation of the algorithm into the OPT++ optimization library. The second section describes the Weapons of Mass Destruction Decision Analysis Center (WMD-DAC) suite of systems analysis tools and motivates the use of stochastic optimization techniques in such non-deterministic simulations. The third section details a batch programming interface designed to facilitate criteria-based or algorithm-driven execution of system-of-system simulations. The fourth section outlines the use of the enhanced OPT++ library and batch execution mechanism to perform systems analysis and technology trade-off studies in the WMD detection and response problem domain.

  1. Study of hybrid methods for approximating the Edgeworth-Pareto hull in nonlinear multicriteria optimization problems

    NASA Astrophysics Data System (ADS)

    Berezkin, V. E.; Lotov, A. V.; Lotova, E. A.

    2014-06-01

    Methods for approximating the Edgeworth-Pareto hull (EPH) of the set of feasible criteria vectors in nonlinear multicriteria optimization problems are examined. The relative efficiency of two EPH approximation methods based on classical methods of searching for local extrema of convolutions of criteria is experimentally studied for a large-scale applied problem (with several hundred variables). A hybrid EPH approximation method combining classical and genetic approximation methods is considered.

  2. Particle swarm optimization for complex nonlinear optimization problems

    NASA Astrophysics Data System (ADS)

    Alexandridis, Alex; Famelis, Ioannis Th.; Tsitouras, Charalambos

    2016-06-01

    This work presents the application of a technique belonging to evolutionary computation, namely particle swarm optimization (PSO), to complex nonlinear optimization problems. To be more specific, a PSO optimizer is set up and applied to the derivation of Runge-Kutta pairs for the numerical solution of initial value problems. The effect of critical PSO operational parameters on the performance of the proposed scheme is thoroughly investigated.

  3. Optimization of nonlinear aeroelastic tailoring criteria

    NASA Technical Reports Server (NTRS)

    Abdi, F.; Ide, H.; Shankar, V. J.; Sobieszczanski-Sobieski, J.

    1988-01-01

    A static flexible fighter aircraft wing configuration is presently addressed by a multilevel optimization technique, based on both a full-potential concept and a rapid structural optimization program, which can be applied to such aircraft-design problems as maneuver load control, aileron reversal, and lift effectiveness. It is found that nonlinearities are important in the design of an aircraft whose flight envelope encompasses the transonic regime, and that the present structural suboptimization produces a significantly lighter wing by reducing ply thicknesses.

  4. Large-scale inhomogeneities and galaxy statistics

    NASA Technical Reports Server (NTRS)

    Schaeffer, R.; Silk, J.

    1984-01-01

    The density fluctuations associated with the formation of large-scale cosmic pancake-like and filamentary structures are evaluated using the Zel'dovich approximation for the evolution of nonlinear inhomogeneities in the expanding universe. It is shown that the large-scale nonlinear density fluctuations in the galaxy distribution due to pancakes modify the standard scale-invariant correlation function xi(r) at scales comparable to the coherence length of adiabatic fluctuations. The typical contribution of pancakes and filaments to the J3 integral, and more generally to the moments of galaxy counts in a volume of approximately (15-40 h⁻¹ Mpc)³, provides a statistical test for the existence of large scale inhomogeneities. An application to several recent three dimensional data sets shows that despite large observational uncertainties over the relevant scales characteristic features may be present that can be attributed to pancakes in most, but not all, of the various galaxy samples.

  5. Large scale tracking algorithms.

    SciTech Connect

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  6. Large scale traffic simulations

    SciTech Connect

    Nagel, K.; Barrett, C.L.; Rickert, M.

    1997-04-01

    Large scale microscopic (i.e. vehicle-based) traffic simulations pose high demands on computational speed in at least two application areas: (i) real-time traffic forecasting, and (ii) long-term planning applications (where repeated "looping" between the microsimulation and the simulated planning of individual persons' behavior is necessary). As a rough number, a real-time simulation of an area such as Los Angeles (ca. 1 million travellers) will need a computational speed much higher than 1 million "particle" (= vehicle) updates per second. This paper reviews how this problem is approached in different projects and how these approaches are dependent both on the specific questions and on the prospective user community. The approaches reach from highly parallel and vectorizable, single-bit implementations on parallel supercomputers for Statistical Physics questions, via more realistic implementations on coupled workstations, to more complicated driving dynamics implemented again on parallel supercomputers. 45 refs., 9 figs., 1 tab.

  7. Optimization approaches to nonlinear model predictive control

    SciTech Connect

    Biegler, L.T. (Dept. of Chemical Engineering); Rawlings, J.B. (Dept. of Chemical Engineering)

    1991-01-01

    With the development of sophisticated methods for nonlinear programming and powerful computer hardware, it now becomes useful and efficient to formulate and solve nonlinear process control problems through on-line optimization methods. This paper explores and reviews control techniques based on repeated solution of nonlinear programming (NLP) problems. Here several advantages present themselves. These include minimization of readily quantifiable objectives, coordinated and accurate handling of process nonlinearities and interactions, and systematic ways of dealing with process constraints. We motivate this NLP-based approach with small nonlinear examples and present a basic algorithm for optimization-based process control. As can be seen this approach is a straightforward extension of popular model-predictive controllers (MPCs) that are used for linear systems. The statement of the basic algorithm raises a number of questions regarding stability and robustness of the method, efficiency of the control calculations, incorporation of feedback into the controller and reliable ways of handling process constraints. Each of these will be treated through analysis and/or modification of the basic algorithm. To highlight and support this discussion, several examples are presented and key results are examined and further developed. 74 refs., 11 figs.
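
    The "repeated solution of NLP problems" that defines this approach can be sketched in a few lines: at each sampling instant a finite-horizon open-loop problem is solved and only the first control move is applied. The toy scalar plant, horizon length, weights, and input bounds below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def step(x, u):
    # Toy scalar nonlinear plant: the cubic term makes the dynamics nonlinear.
    return x + 0.1 * (x - x**3 + u)

def horizon_cost(u_seq, x0, x_ref):
    # Simulate the plant over the horizon and accumulate a quadratic tracking cost.
    x, cost = x0, 0.0
    for u in u_seq:
        x = step(x, u)
        cost += (x - x_ref)**2 + 0.05 * u**2
    return cost

horizon, n_steps, x_ref = 10, 40, 0.8
x, u_guess = -1.5, np.zeros(horizon)
bounds = [(-2.0, 2.0)] * horizon                     # input constraints live in the NLP

for _ in range(n_steps):                             # closed-loop, receding horizon
    res = minimize(horizon_cost, u_guess, args=(x, x_ref), method="SLSQP", bounds=bounds)
    x = step(x, res.x[0])                            # apply only the first optimal move
    u_guess = np.concatenate([res.x[1:], [0.0]])     # shifted warm start for the next solve
print(f"state after {n_steps} steps: {x:.3f} (reference {x_ref})")
```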

  8. Optimal singular control for nonlinear semistabilisation

    NASA Astrophysics Data System (ADS)

    L'Afflitto, Andrea; Haddad, Wassim M.

    2016-06-01

    The singular optimal control problem for asymptotic stabilisation has been extensively studied in the literature. In this paper, the optimal singular control problem is extended to address a weaker version of closed-loop stability, namely, semistability, which is of paramount importance for consensus control of network dynamical systems. Three approaches are presented to address the nonlinear semistable singular control problem. Namely, a singular perturbation method is presented to construct a state-feedback singular controller that guarantees closed-loop semistability for nonlinear systems. In this approach, we show that for a non-negative cost-to-go function the minimum cost of a nonlinear semistabilising singular controller is lower than the minimum cost of a singular controller that guarantees asymptotic stability of the closed-loop system. In the second approach, we solve the nonlinear semistable singular control problem by using the cost-to-go function to cancel the singularities in the corresponding Hamilton-Jacobi-Bellman equation. For this case, we show that the minimum value of the singular performance measure is zero. Finally, we provide a framework based on the concepts of state-feedback linearisation and feedback equivalence to solve the singular control problem for semistabilisation of nonlinear dynamical systems. For this approach, we also show that the minimum value of the singular performance measure is zero. Three numerical examples are presented to demonstrate the efficacy of the proposed singular semistabilisation frameworks.

  9. Optimal Parametric Feedback Excitation of Nonlinear Oscillators

    NASA Astrophysics Data System (ADS)

    Braun, David J.

    2016-01-01

    An optimal parametric feedback excitation principle is sought, found, and investigated. The principle is shown to provide an adaptive resonance condition that enables unprecedentedly robust movement generation in a large class of oscillatory dynamical systems. Experimental demonstration of the theory is provided by a nonlinear electronic circuit that realizes self-adaptive parametric excitation without model information, signal processing, and control computation. The observed behavior dramatically differs from the one achievable using classical parametric modulation, which is fundamentally limited by uncertainties in model information and nonlinear effects inevitably present in real world applications.

  10. Optimal Parametric Feedback Excitation of Nonlinear Oscillators.

    PubMed

    Braun, David J

    2016-01-29

    An optimal parametric feedback excitation principle is sought, found, and investigated. The principle is shown to provide an adaptive resonance condition that enables unprecedentedly robust movement generation in a large class of oscillatory dynamical systems. Experimental demonstration of the theory is provided by a nonlinear electronic circuit that realizes self-adaptive parametric excitation without model information, signal processing, and control computation. The observed behavior dramatically differs from the one achievable using classical parametric modulation, which is fundamentally limited by uncertainties in model information and nonlinear effects inevitably present in real world applications. PMID:26871336

  11. The role of large-scale eddies in the nonlinear equilibration of a multi-level model of the mid-latitude troposphere

    NASA Astrophysics Data System (ADS)

    Solomon, Amy Beth

    (1997). A three dimensional time dependent linear stability analysis is used to demonstrate that the equilibrated climate is stable to linear perturbations. These results are contrasted with the results of a one dimensional stability analysis to show the sensitivity of these results to the treatment of the meridional structure of the eddies. The feedbacks which maintain the static stability are shown to play a significant role in the homogenization of the potential vorticity above the atmospheric boundary layer (ABL). These feedbacks are also shown to couple the dynamics within the ABL with the upper troposphere in a study of the sensitivity of the vertical structure of the large scale eddies to changes in the radiative equilibrium temperature gradients.

  12. Nonlinear Brightness Optimization in Compton Scattering

    DOE PAGES Beta

    Hartemann, Fred V.; Wu, Sheldon S. Q.

    2013-07-26

    In Compton scattering light sources, a laser pulse is scattered by a relativistic electron beam to generate tunable x and gamma rays. Because of the inhomogeneous nature of the incident radiation, the relativistic Lorentz boost of the electrons is modulated by the ponderomotive force during the interaction, leading to intrinsic spectral broadening and brightness limitations. We discuss these effects, along with an optimization strategy to properly balance the laser bandwidth, diffraction, and nonlinear ponderomotive force.

  13. Nonlinear brightness optimization in compton scattering.

    PubMed

    Hartemann, Fred V; Wu, Sheldon S Q

    2013-07-26

    In Compton scattering light sources, a laser pulse is scattered by a relativistic electron beam to generate tunable x and gamma rays. Because of the inhomogeneous nature of the incident radiation, the relativistic Lorentz boost of the electrons is modulated by the ponderomotive force during the interaction, leading to intrinsic spectral broadening and brightness limitations. These effects are discussed, along with an optimization strategy to properly balance the laser bandwidth, diffraction, and nonlinear ponderomotive force. PMID:23931374

  14. Global mantle flow at ultra-high resolution: The competing influence of faulted plate margins, the strength of bending plates, and large-scale, nonlinear flow

    NASA Astrophysics Data System (ADS)

    Alisic, L.; Gurnis, M.; Stadler, G.; Burstedde, C.; Wilcox, L. C.; Ghattas, O.

    2009-12-01

    A full understanding of the dynamics of plate motions requires numerical models with a realistic, nonlinear rheology and a mesh resolution sufficiently high to resolve large variations in viscosity over short length scales. We suspect that resolutions as fine as 1 km locally in global models of the whole mantle and lithosphere are necessary. We use the adaptive mesh mantle convection code Rhea to model convection in the mantle with plates in both regional and global domains. Rhea is a new generation parallel finite element mantle convection code designed to scale to hundreds of thousands of compute cores. It uses forest-of-octree-based adaptive meshes via the p4est library. With Rhea's adaptive capabilities we can create local resolution down to ~ 1 km around plate boundaries, while keeping the mesh at a much coarser resolution away from small features. The global models in this study have approximately 160 million elements, a reduction of ~ 2000x compared to a uniform mesh of the same high resolution. The unprecedented resolution in these global models allows us, for the first time, to resolve viscous dissipation in the bending plate as well as observe the trade-off between this process and the strength of slabs and the resistance of dipping thrust faults. Since plate velocities and 'plateness' are dynamic outcomes of numerical modeling, we must carefully incorporate both the full buoyancy field and the details of all plate boundaries at a fine scale. The global models were constructed with detailed maps of the age of the plates and a thermal model of the seismicity-defined slabs which grades into the more diffuse buoyancy resolved with tomography. In the regional models, the thermal model consists of plates following a halfspace cooling model, and slabs for which buoyancy is conserved at every depth. A composite formulation of Newtonian and non-Newtonian rheology along with yielding is implemented; plate boundaries are modeled as very narrow weak zones. Plate

  15. Traveltime tomography and nonlinear constrained optimization

    SciTech Connect

    Berryman, J.G.

    1988-10-01

    Fermat's principle of least traveltime states that the first arrivals follow ray paths with the smallest overall traveltime from the point of transmission to the point of reception. This principle determines a definite convex set of feasible slowness models - depending only on the traveltime data - for the fully nonlinear traveltime inversion problem. The existence of such a convex set allows us to transform the inversion problem into a nonlinear constrained optimization problem. Fermat's principle also shows that the standard undamped least-squares solution to the inversion problem always produces a slowness model with many ray paths having traveltime shorter than the measured traveltime (an impossibility even if the trial ray paths are not the true ray paths). In a damped least-squares inversion, the damping parameter may be varied to allow efficient location of a slowness model on the feasibility boundary. 13 refs., 1 fig., 1 tab.
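
    Once linearized about a reference slowness model, the damped least-squares step referred to here is an ordinary Tikhonov-regularized solve; a minimal synthetic sketch follows, with a random sparse path-length matrix standing in for real ray geometry and the damping values chosen only to show the trade-off.

```python
import numpy as np

rng = np.random.default_rng(4)

# Linearized tomography: traveltime residuals d = G @ ds, where G holds the
# path length of each ray in each cell of a reference slowness model.
n_rays, n_cells = 80, 50
G = rng.uniform(0.0, 2.0, size=(n_rays, n_cells)) * (rng.random((n_rays, n_cells)) < 0.3)
ds_true = rng.normal(0.0, 0.05, size=n_cells)             # true slowness perturbation
d = G @ ds_true + rng.normal(0.0, 1e-3, size=n_rays)      # noisy traveltime residuals

def damped_lsq(G, d, damping):
    # Solve (G^T G + damping * I) ds = G^T d.
    return np.linalg.solve(G.T @ G + damping * np.eye(G.shape[1]), G.T @ d)

# The damping parameter trades data fit against model size; sweep it to see the effect.
for damping in (1e-4, 1e-2, 1.0):
    ds = damped_lsq(G, d, damping)
    misfit = np.linalg.norm(G @ ds - d)
    print(f"damping={damping:g}  data misfit={misfit:.4f}  model norm={np.linalg.norm(ds):.4f}")
```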

  16. Sensitivity technologies for large scale simulation.

    SciTech Connect

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias; Wilcox, Lucas C.; Hill, Judith C.; Ghattas, Omar; Berggren, Martin Olof; Akcelik, Volkan; Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard

    2005-01-01

    order approximation of the Euler equations and used as a preconditioner. In comparison to other methods, the AD preconditioner showed better convergence behavior. Our ultimate target is to perform shape optimization and hp adaptivity using adjoint formulations in the Premo compressible fluid flow simulator. A mathematical formulation for mixed-level simulation algorithms has been developed where different physics interact at potentially different spatial resolutions in a single domain. To minimize the implementation effort, explicit solution methods can be considered; however, implicit methods are preferred if computational efficiency is of high priority. We present the use of a partial elimination nonlinear solver technique to solve these mixed-level problems and show how these formulations are closely coupled to intrusive optimization approaches and sensitivity analyses. Production codes are typically not designed for sensitivity analysis or large scale optimization. The implementation of our optimization libraries into multiple production simulation codes, in which each code has its own linear algebra interface, becomes an intractable problem. In an attempt to streamline this task, we have developed a standard interface between the numerical algorithm (such as optimization) and the underlying linear algebra. These interfaces (TSFCore and TSFCoreNonlin) have been adopted by the Trilinos framework, and the goal is to promote the use of these interfaces, especially with new developments. Finally, an adjoint-based a posteriori error estimator has been developed for discontinuous Galerkin discretization of Poisson's equation. The goal is to investigate other ways to leverage the adjoint calculations and we show how the convergence of the forward problem can be improved by adapting the grid using adjoint-based error estimates. Error estimation is usually conducted with continuous adjoints but if discrete adjoints are available it may be possible to reuse the discrete version

  17. Nonlinear simulations to optimize magnetic nanoparticle hyperthermia

    SciTech Connect

    Reeves, Daniel B.; Weaver, John B.

    2014-03-10

    Magnetic nanoparticle hyperthermia is an attractive emerging cancer treatment, but the acting microscopic energy deposition mechanisms are not well understood and optimization suffers. We describe several approximate forms for the characteristic time of Néel rotations with varying properties and external influences. We then present stochastic simulations that show agreement between the approximate expressions and the micromagnetic model. The simulations show nonlinear imaginary responses and associated relaxational hysteresis due to the field and frequency dependencies of the magnetization. This suggests that efficient heating is possible by matching fields to particles instead of resorting to maximizing the power of the applied magnetic fields.

  18. Nonlinear Global Optimization Using Curdling Algorithm

    Energy Science and Technology Software Center (ESTSC)

    1996-03-01

    An algorithm for performing curdling optimization, which is a derivative-free, grid-refinement approach to nonlinear optimization, was developed and implemented in software. This approach overcomes a number of deficiencies in existing approaches. Most notably, it finds extremal regions rather than only single extremal points. The program is interactive and collects information on control parameters and constraints using menus. For up to four dimensions, function convergence is displayed graphically. Because the algorithm does not compute derivatives, gradients, or vectors, it is numerically stable. It can find all the roots of a polynomial in one pass. It is an inherently parallel algorithm. Constraints are handled as being initially fuzzy, but become tighter with each iteration.
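
    The record gives no algorithmic detail, so the following is only a generic sketch of a derivative-free, grid-refinement minimizer in the same spirit (not the curdling code itself): evaluate the objective on a grid of cells, keep the best-scoring cells rather than a single best point, and refine only those, so whole extremal regions survive from one pass to the next.

        import numpy as np

        def f(x, y):                         # assumed test objective with two minima
            return (x**2 - 1.0)**2 + y**2    # minima near (+1, 0) and (-1, 0)

        def refine(cells, h, keep_frac=0.2):
            # score cell centres, keep the best fraction, split each kept cell into four
            vals = np.array([f(cx, cy) for cx, cy in cells])
            cutoff = np.quantile(vals, keep_frac)
            kept = [c for c, v in zip(cells, vals) if v <= cutoff]
            h_new = h / 2.0
            children = [(cx + dx, cy + dy)
                        for cx, cy in kept
                        for dx in (-h_new / 2, h_new / 2)
                        for dy in (-h_new / 2, h_new / 2)]
            return children, h_new

        cells = [(x, y) for x in np.linspace(-2, 2, 9) for y in np.linspace(-2, 2, 9)]
        h = 0.5
        for _ in range(6):
            cells, h = refine(cells, h)
        best = min(cells, key=lambda c: f(*c))
        print("sample point in an extremal region:", best, "value:", f(*best))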

  19. Inverting magnetic meridian data using nonlinear optimization

    NASA Astrophysics Data System (ADS)

    Connors, Martin; Rostoker, Gordon

    2015-09-01

    A nonlinear optimization algorithm coupled with a model of auroral current systems allows derivation of physical parameters from data and is the basis of a new inversion technique. We refer to this technique as automated forward modeling (AFM), with the variant used here being automated meridian modeling (AMM). AFM is applicable on scales from regional to global, yielding simple and easily understood output, and using only magnetic data with no assumptions about electrodynamic parameters. We have found the most useful output parameters to be the total current and the boundaries of the auroral electrojet on a meridian densely populated with magnetometers, as derived by AMM. Here, we describe application of AFM nonlinear optimization to magnetic data and then describe the use of AMM to study substorms with magnetic data from ground meridian chains as input. AMM inversion results are compared to optical data, results from other inversion methods, and field-aligned current data from AMPERE. AMM yields physical parameters meaningful in describing local electrodynamics and is suitable for ongoing monitoring of activity. The relation of AMM model parameters to equivalent currents is discussed, and the two are found to compare well if the field-aligned currents are far from the inversion meridian.

  20. Nonlinearity Analysis and Parameters Optimization for an Inductive Angle Sensor

    PubMed Central

    Ye, Lin; Yang, Ming; Xu, Liang; Zhuang, Xiaoqi; Dong, Zhaopeng; Li, Shiyang

    2014-01-01

    Using the finite element method (FEM) and particle swarm optimization (PSO), a nonlinearity analysis based on parameter optimization is proposed to design an inductive angle sensor. Due to the structural complexity of the sensor, understanding the influences of structure parameters on the nonlinearity errors is a critical step in designing an effective sensor. Key parameters are selected for the design based on the parameters' effects on the nonlinearity errors. The finite element method and particle swarm optimization are combined for the sensor design to get the minimal nonlinearity error. In the simulation, the nonlinearity error of the optimized sensor is 0.053% in the angle range from −60° to 60°. A prototype sensor is manufactured and measured experimentally, and the experimental nonlinearity error is 0.081% in the angle range from −60° to 60°. PMID:24590353
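
    A minimal particle swarm optimization loop of the kind combined with FEM here might look as follows; the objective is an assumed analytic stand-in for the FEM-computed nonlinearity error, since the actual sensor model is not reproduced in the record.

        import numpy as np

        def nonlinearity_error(p):                     # hypothetical surrogate objective
            a, b = p
            return (a - 1.3)**2 + 0.5 * (b + 0.7)**2 + 0.1 * np.sin(5 * a)**2

        rng = np.random.default_rng(42)
        n_particles, n_iter, dim = 20, 100, 2
        lo, hi = np.array([-2.0, -2.0]), np.array([2.0, 2.0])

        x = rng.uniform(lo, hi, (n_particles, dim))
        v = np.zeros_like(x)
        pbest = x.copy()
        pbest_f = np.array([nonlinearity_error(p) for p in x])
        gbest = pbest[np.argmin(pbest_f)]

        for _ in range(n_iter):
            r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
            v = 0.72 * v + 1.49 * r1 * (pbest - x) + 1.49 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)                 # keep designs inside the bounds
            f = np.array([nonlinearity_error(p) for p in x])
            better = f < pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            gbest = pbest[np.argmin(pbest_f)]

        print("optimized parameters:", gbest, "surrogate error:", pbest_f.min())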

  1. Galaxy clustering on large scales.

    PubMed

    Efstathiou, G

    1993-06-01

    I describe some recent observations of large-scale structure in the galaxy distribution. The best constraints come from two-dimensional galaxy surveys and studies of angular correlation functions. Results from galaxy redshift surveys are much less precise but are consistent with the angular correlations, provided the distortions in mapping between real-space and redshift-space are relatively weak. The galaxy two-point correlation function, rich-cluster two-point correlation function, and galaxy-cluster cross-correlation function are all well described on large scales (≳ 20 h⁻¹ Mpc, where the Hubble constant H₀ = 100h km s⁻¹ Mpc⁻¹; 1 pc = 3.09 × 10¹⁶ m) by the power spectrum of an initially scale-invariant, adiabatic, cold-dark-matter Universe with Γ = Ωh ≈ 0.2. I discuss how this fits in with the Cosmic Background Explorer (COBE) satellite detection of large-scale anisotropies in the microwave background radiation and other measures of large-scale structure in the Universe. PMID:11607400

  2. Very Large Scale Integration (VLSI).

    ERIC Educational Resources Information Center

    Yeaman, Andrew R. J.

    Very Large Scale Integration (VLSI), the state-of-the-art production techniques for computer chips, promises such powerful, inexpensive computing that, in the future, people will be able to communicate with computer devices in natural language or even speech. However, before full-scale VLSI implementation can occur, certain salient factors must be…

  3. Matching trajectory optimization and nonlinear tracking control for HALE

    NASA Astrophysics Data System (ADS)

    Lee, Sangjong; Jang, Jieun; Ryu, Hyeok; Lee, Kyun Ho

    2014-11-01

    This paper concerns optimal trajectory generation and nonlinear tracking control for the stratospheric airship platform VIA-200. To compensate for the mismatch between the point-mass model used for trajectory optimization and the 6-DOF model used for the nonlinear tracking problem, a new matching trajectory optimization approach is proposed. The proposed idea reduces the dissimilarity of the two problems and reduces the uncertainties in the nonlinear equations of motion for the stratospheric airship. In addition, its refined optimal trajectories yield better results under jet stream conditions during flight. The resultant optimal trajectories of VIA-200 are full three-dimensional ascent flight trajectories reflecting the realistic constraints of flight conditions and airship performance with and without a jet stream. Finally, 6-DOF nonlinear equations of motion are derived, including a moving wind field, and the vectorial backstepping approach is applied. The tracking results demonstrate that the proposed matching optimization method enables a smooth linkage between trajectory optimization and the tracking control problem.

  4. Large-scale hydropower system optimization using dynamic programming and object-oriented programming: the case of the Northeast China Power Grid.

    PubMed

    Li, Ji-Qing; Zhang, Yu-Shan; Ji, Chang-Ming; Wang, Ai-Jing; Lund, Jay R

    2013-01-01

    This paper examines long-term optimal operation using dynamic programming for a large hydropower system of 10 reservoirs in Northeast China. Besides considering flow and hydraulic head, the optimization explicitly includes time-varying electricity market prices to maximize benefit. Two techniques are used to reduce the 'curse of dimensionality' of dynamic programming with many reservoirs. Discrete differential dynamic programming (DDDP) reduces the search space and the computer memory needed. Object-oriented programming (OOP) and the ability to dynamically allocate and release memory in C++ greatly reduce the cumulative memory requirements of solving multi-dimensional dynamic programming models. The case study shows that the model can reduce the 'curse of dimensionality' and achieve satisfactory results. PMID:24334896

  5. Interpretation of large-scale deviations from the Hubble flow

    NASA Astrophysics Data System (ADS)

    Grinstein, B.; Politzer, H. David; Rey, S.-J.; Wise, Mark B.

    1987-03-01

    The theoretical expectation for large-scale streaming velocities relative to the Hubble flow is expressed in terms of statistical correlation functions. Only for objects that trace the mass would these velocities have a simple cosmological interpretation. If some biasing affects the objects' formation, then nonlinear gravitational evolution is essential to predicting the expected large-scale velocities, which also depend on the nature of the biasing.

  6. Guaranteed robustness properties of multivariable nonlinear stochastic optimal regulators

    NASA Technical Reports Server (NTRS)

    Tsitsiklis, J. N.; Athans, M.

    1984-01-01

    The robustness of optimal regulators for nonlinear, deterministic and stochastic, multi-input dynamical systems is studied under the assumption that all state variables can be measured. It is shown that, under mild assumptions, such nonlinear regulators have a guaranteed infinite gain margin; moreover, they have a guaranteed 50 percent gain reduction margin and a 60 degree phase margin, in each feedback channel, provided that the system is linear in the control and the penalty to the control is quadratic, thus extending the well-known properties of LQ regulators to nonlinear optimal designs. These results are also valid for infinite horizon, average cost, stochastic optimal control problems.

  7. Guaranteed robustness properties of multivariable, nonlinear, stochastic optimal regulators

    NASA Technical Reports Server (NTRS)

    Tsitsiklis, J. N.; Athans, M.

    1983-01-01

    The robustness of optimal regulators for nonlinear, deterministic and stochastic, multi-input dynamical systems is studied under the assumption that all state variables can be measured. It is shown that, under mild assumptions, such nonlinear regulators have a guaranteed infinite gain margin; moreover, they have a guaranteed 50 percent gain reduction margin and a 60 degree phase margin, in each feedback channel, provided that the system is linear in the control and the penalty to the control is quadratic, thus extending the well-known properties of LQ regulators to nonlinear optimal designs. These results are also valid for infinite horizon, average cost, stochastic optimal control problems.

  8. On a Highly Nonlinear Self-Obstacle Optimal Control Problem

    SciTech Connect

    Di Donato, Daniela; Mugnai, Dimitri

    2015-10-15

    We consider a non-quadratic optimal control problem associated to a nonlinear elliptic variational inequality, where the obstacle is the control itself. We show that, for a fixed desired profile, there exists an optimal solution which is not far from it. Detailed characterizations of the optimal solution are given, also in terms of approximating problems.

  9. Microfluidic large-scale integration.

    PubMed

    Thorsen, Todd; Maerkl, Sebastian J; Quake, Stephen R

    2002-10-18

    We developed high-density microfluidic chips that contain plumbing networks with thousands of micromechanical valves and hundreds of individually addressable chambers. These fluidic devices are analogous to electronic integrated circuits fabricated using large-scale integration. A key component of these networks is the fluidic multiplexor, which is a combinatorial array of binary valve patterns that exponentially increases the processing power of a network by allowing complex fluid manipulations with a minimal number of inputs. We used these integrated microfluidic networks to construct the microfluidic analog of a comparator array and a microfluidic memory storage device whose behavior resembles random-access memory. PMID:12351675

  10. Lyapunov optimal feedback control of a nonlinear inverted pendulum

    NASA Technical Reports Server (NTRS)

    Grantham, W. J.; Anderson, M. J.

    1989-01-01

    Lyapunov optimal feedback control is applied to a nonlinear inverted pendulum in which the control torque is constrained to be less than the nonlinear gravity torque in the model. This necessitates a control algorithm that 'rocks' the pendulum out of its potential wells in order to stabilize it at a unique vertical position. Simulation results indicate that a preliminary Lyapunov feedback controller can successfully overcome the nonlinearity and bring almost all trajectories to the target.
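
    The 'rocking' behaviour can be illustrated with a standard energy-shaping swing-up law under a torque bound below the peak gravity torque; this is a generic sketch, not the Lyapunov-optimal controller of the record, and all gains and the switch to a local stabilizing feedback near the top are assumed.

        import numpy as np

        m, l, g = 1.0, 1.0, 9.81
        u_max = 0.5 * m * g * l                     # torque bound below the peak gravity torque
        dt = 1e-3

        def energy(theta, omega):                   # theta measured from the upright position
            return 0.5 * m * l**2 * omega**2 + m * g * l * (np.cos(theta) - 1.0)

        theta, omega = np.pi, 0.1                   # start (almost) hanging straight down
        for _ in range(int(20 / dt)):
            if np.cos(theta) > 0.99 and abs(omega) < 0.5:
                u = -20.0 * np.sin(theta) - 5.0 * omega         # local stabilizing feedback
            else:
                u = 2.0 * (0.0 - energy(theta, omega)) * omega  # energy-pumping "rocking" law
            u = np.clip(u, -u_max, u_max)
            domega = (u + m * g * l * np.sin(theta)) / (m * l**2)
            theta, omega = theta + dt * omega, omega + dt * domega

        wrapped = (theta + np.pi) % (2 * np.pi) - np.pi
        print("final angle from upright (rad):", wrapped)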

  11. Revisiting interferences for measuring and optimizing optical nonlinearities

    NASA Astrophysics Data System (ADS)

    Billard, F.; Béjot, P.; Hertz, E.; Lavorel, B.; Faucher, O.

    2013-07-01

    A method based on optical interferences for measuring optical nonlinearities is presented. In a proof-of-principle experiment, the technique is applied to the experimental determination of the intensity dependence of the photoionization process. It is shown that it can also be used to control and optimize the nonlinear process itself at constant input energy. The presented strategy leads to enhancements that can reach several orders of magnitude for highly nonlinear processes.

  12. Colloquium: Large scale simulations on GPU clusters

    NASA Astrophysics Data System (ADS)

    Bernaschi, Massimo; Bisson, Mauro; Fatica, Massimiliano

    2015-06-01

    Graphics processing units (GPU) are currently used as a cost-effective platform for computer simulations and big-data processing. Large scale applications require that multiple GPUs work together but the efficiency obtained with cluster of GPUs is, at times, sub-optimal because the GPU features are not exploited at their best. We describe how it is possible to achieve an excellent efficiency for applications in statistical mechanics, particle dynamics and networks analysis by using suitable memory access patterns and mechanisms like CUDA streams, profiling tools, etc. Similar concepts and techniques may be applied also to other problems like the solution of Partial Differential Equations.

  13. EINSTEIN'S SIGNATURE IN COSMOLOGICAL LARGE-SCALE STRUCTURE

    SciTech Connect

    Bruni, Marco; Hidalgo, Juan Carlos; Wands, David

    2014-10-10

    We show how the nonlinearity of general relativity generates a characteristic non-Gaussian signal in cosmological large-scale structure that we calculate at all perturbative orders in a large-scale limit. Newtonian gravity and general relativity provide complementary theoretical frameworks for modeling large-scale structure in ΛCDM cosmology; a relativistic approach is essential to determine initial conditions, which can then be used in Newtonian simulations studying the nonlinear evolution of the matter density. Most inflationary models in the very early universe predict an almost Gaussian distribution for the primordial metric perturbation, ζ. However, we argue that it is the Ricci curvature of comoving-orthogonal spatial hypersurfaces, R, that drives structure formation at large scales. We show how the nonlinear relation between the spatial curvature, R, and the metric perturbation, ζ, translates into a specific non-Gaussian contribution to the initial comoving matter density that we calculate for the simple case of an initially Gaussian ζ. Our analysis shows the nonlinear signature of Einstein's gravity in large-scale structure.

  14. Large scale topography of Io

    NASA Technical Reports Server (NTRS)

    Gaskell, R. W.; Synnott, S. P.

    1987-01-01

    To investigate the large scale topography of the Jovian satellite Io, both limb observations and stereographic techniques applied to landmarks are used. The raw data for this study consists of Voyager 1 images of Io, 800x800 arrays of picture elements each of which can take on 256 possible brightness values. In analyzing this data it was necessary to identify and locate landmarks and limb points on the raw images, remove the image distortions caused by the camera electronics and translate the corrected locations into positions relative to a reference geoid. Minimizing the uncertainty in the corrected locations is crucial to the success of this project. In the highest resolution frames, an error of a tenth of a pixel in image space location can lead to a 300 m error in true location. In the lowest resolution frames, the same error can lead to an uncertainty of several km.

  15. Challenges for Large Scale Simulations

    NASA Astrophysics Data System (ADS)

    Troyer, Matthias

    2010-03-01

    With computational approaches becoming ubiquitous, large-scale computing increasingly influences both theoretical and experimental research. I will review a few examples in condensed matter physics and quantum optics, including the impact of computer simulations in the search for supersolidity, thermometry in ultracold quantum gases, and the challenging search for novel phases in strongly correlated electron systems. While only a decade ago such simulations needed the fastest supercomputers, many simulations can now be performed on small workstation clusters or even a laptop: what was previously restricted to a few experts can now potentially be used by many. Only part of the gain in computational capabilities is due to Moore's law and improvement in hardware. Equally impressive is the performance gain due to new algorithms - as I will illustrate using some recently developed algorithms. At the same time modern peta-scale supercomputers offer unprecedented computational power and allow us to tackle new problems and address questions that were impossible to solve numerically only a few years ago. While there is a roadmap for future hardware developments to exascale and beyond, the main challenges are on the algorithmic and software infrastructure side. Among the problems that face the computational physicist are: the development of new algorithms that scale to thousands of cores and beyond, a software infrastructure that lifts code development to a higher level and speeds up the development of new simulation programs for large scale computing machines, tools to analyze the large volume of data obtained from such simulations, and as an emerging field provenance-aware software that aims for reproducibility of the complete computational workflow from model parameters to the final figures. Interdisciplinary collaborations and collective efforts will be required, in contrast to the cottage-industry culture currently present in many areas of computational

  16. Limitations and tradeoffs in synchronization of large-scale networks with uncertain links

    NASA Astrophysics Data System (ADS)

    Diwadkar, Amit; Vaidya, Umesh

    2016-04-01

    The synchronization of nonlinear systems connected over large-scale networks has gained popularity in a variety of applications, such as power grids, sensor networks, and biology. Stochastic uncertainty in the interconnections is a ubiquitous phenomenon observed in these physical and biological networks. We provide a size-independent network sufficient condition for the synchronization of scalar nonlinear systems with stochastic linear interactions over large-scale networks. This sufficient condition, expressed in terms of nonlinear dynamics, the Laplacian eigenvalues of the nominal interconnections, and the variance and location of the stochastic uncertainty, allows us to define a synchronization margin. We provide an analytical characterization of important trade-offs between the internal nonlinear dynamics, network topology, and uncertainty in synchronization. For nearest neighbour networks, the existence of an optimal number of neighbours with a maximum synchronization margin is demonstrated. An analytical formula for the optimal gain that produces the maximum synchronization margin allows us to compare the synchronization properties of various complex network topologies.

  17. Limitations and tradeoffs in synchronization of large-scale networks with uncertain links

    PubMed Central

    Diwadkar, Amit; Vaidya, Umesh

    2016-01-01

    The synchronization of nonlinear systems connected over large-scale networks has gained popularity in a variety of applications, such as power grids, sensor networks, and biology. Stochastic uncertainty in the interconnections is a ubiquitous phenomenon observed in these physical and biological networks. We provide a size-independent network sufficient condition for the synchronization of scalar nonlinear systems with stochastic linear interactions over large-scale networks. This sufficient condition, expressed in terms of nonlinear dynamics, the Laplacian eigenvalues of the nominal interconnections, and the variance and location of the stochastic uncertainty, allows us to define a synchronization margin. We provide an analytical characterization of important trade-offs between the internal nonlinear dynamics, network topology, and uncertainty in synchronization. For nearest neighbour networks, the existence of an optimal number of neighbours with a maximum synchronization margin is demonstrated. An analytical formula for the optimal gain that produces the maximum synchronization margin allows us to compare the synchronization properties of various complex network topologies. PMID:27067994

  18. Limitations and tradeoffs in synchronization of large-scale networks with uncertain links.

    PubMed

    Diwadkar, Amit; Vaidya, Umesh

    2016-01-01

    The synchronization of nonlinear systems connected over large-scale networks has gained popularity in a variety of applications, such as power grids, sensor networks, and biology. Stochastic uncertainty in the interconnections is a ubiquitous phenomenon observed in these physical and biological networks. We provide a size-independent network sufficient condition for the synchronization of scalar nonlinear systems with stochastic linear interactions over large-scale networks. This sufficient condition, expressed in terms of nonlinear dynamics, the Laplacian eigenvalues of the nominal interconnections, and the variance and location of the stochastic uncertainty, allows us to define a synchronization margin. We provide an analytical characterization of important trade-offs between the internal nonlinear dynamics, network topology, and uncertainty in synchronization. For nearest neighbour networks, the existence of an optimal number of neighbours with a maximum synchronization margin is demonstrated. An analytical formula for the optimal gain that produces the maximum synchronization margin allows us to compare the synchronization properties of various complex network topologies. PMID:27067994

  19. Grid sensitivity capability for large scale structures

    NASA Technical Reports Server (NTRS)

    Nagendra, Gopal K.; Wallerstein, David V.

    1989-01-01

    The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.

  20. Genetic Algorithm Based Neural Networks for Nonlinear Optimization

    Energy Science and Technology Software Center (ESTSC)

    1994-09-28

    This software develops a novel approach to nonlinear optimization using genetic algorithm based neural networks. To the best of our knowledge, this approach represents the first attempt at applying both neural network and genetic algorithm techniques to solve a nonlinear optimization problem. The approach constructs a neural network structure and an appropriately shaped energy surface whose minima correspond to optimal solutions of the problem. A genetic algorithm is employed to perform a parallel and powerful search of the energy surface.

  1. Large Scale Homing in Honeybees

    PubMed Central

    Pahl, Mario; Zhu, Hong; Tautz, Jürgen; Zhang, Shaowu

    2011-01-01

    Honeybee foragers frequently fly several kilometres to and from vital resources, and communicate those locations to their nest mates by a symbolic dance language. Research has shown that they achieve this feat by memorizing landmarks and the skyline panorama, using the sun and polarized skylight as compasses and by integrating their outbound flight paths. In order to investigate the capacity of the honeybees' homing abilities, we artificially displaced foragers to novel release spots at various distances up to 13 km in the four cardinal directions. Returning bees were individually registered by a radio frequency identification (RFID) system at the hive entrance. We found that homing rate, homing speed and the maximum homing distance depend on the release direction. Bees released in the east were more likely to find their way back home, and returned faster than bees released in any other direction, due to the familiarity of global landmarks seen from the hive. Our findings suggest that such large scale homing is facilitated by global landmarks acting as beacons, and possibly the entire skyline panorama. PMID:21602920

  2. Implicit solution of large-scale radiation diffusion problems

    SciTech Connect

    Brown, P N; Graziani, F; Otero, I; Woodward, C S

    2001-01-04

    In this paper, we present an efficient solution approach for fully implicit, large-scale, nonlinear radiation diffusion problems. The fully implicit approach is compared to a semi-implicit solution method. Accuracy and efficiency are shown to be better for the fully implicit method on both one- and three-dimensional problems with tabular opacities taken from the LEOS opacity library.

  3. Nonlinear model predictive control based on collective neurodynamic optimization.

    PubMed

    Yan, Zheng; Wang, Jun

    2015-04-01

    In general, nonlinear model predictive control (NMPC) entails solving a sequential global optimization problem with a nonconvex cost function or constraints. This paper presents a novel collective neurodynamic optimization approach to NMPC without linearization. Utilizing a group of recurrent neural networks (RNNs), the proposed collective neurodynamic optimization approach searches for optimal solutions to global optimization problems by emulating brainstorming. Each RNN is guaranteed to converge to a candidate solution by performing constrained local search. By exchanging information and iteratively improving the starting and restarting points of each RNN using the information of local and global best known solutions in a framework of particle swarm optimization, the group of RNNs is able to reach global optimal solutions to global optimization problems. The essence of the proposed collective neurodynamic optimization approach lies in the integration of capabilities of global search and precise local search. The simulation results of many cases are discussed to substantiate the effectiveness and the characteristics of the proposed approach. PMID:25608315
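
    The brainstorming idea can be caricatured with ordinary gradient-based local searches standing in for the recurrent networks, restarted from points that are updated with a particle-swarm rule using personal and global best solutions; the test cost and all coefficients below are assumptions for illustration and do not come from the record.

        import numpy as np
        from scipy.optimize import minimize

        def rastrigin(x):                          # nonconvex test cost (assumed)
            return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

        rng = np.random.default_rng(1)
        dim, agents, rounds = 2, 8, 20
        pos = rng.uniform(-5.0, 5.0, (agents, dim))
        vel = np.zeros_like(pos)
        pbest = pos.copy()
        pbest_f = np.array([rastrigin(p) for p in pos])

        for _ in range(rounds):
            # "local search" phase: each agent converges to a nearby candidate solution
            for i in range(agents):
                res = minimize(rastrigin, pos[i], method="BFGS")
                if res.fun < pbest_f[i]:
                    pbest_f[i], pbest[i] = res.fun, res.x
            gbest = pbest[np.argmin(pbest_f)]
            # "information exchange" phase: particle-swarm update of the restart points
            r1, r2 = rng.random((agents, dim)), rng.random((agents, dim))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = pos + vel

        print("best cost found:", pbest_f.min(), "at", pbest[np.argmin(pbest_f)])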

  4. Large Scale Nanolaminate Deformable Mirror

    SciTech Connect

    Papavasiliou, A; Olivier, S; Barbee, T; Miles, R; Chang, K

    2005-11-30

    This work concerns the development of a technology that uses Nanolaminate foils to form light-weight, deformable mirrors that are scalable over a wide range of mirror sizes. While MEMS-based deformable mirrors and spatial light modulators have considerably reduced the cost and increased the capabilities of adaptive optic systems, there has not been a way to utilize the advantages of lithography and batch-fabrication to produce large-scale deformable mirrors. This technology is made scalable by using fabrication techniques and lithography that are not limited to the sizes of conventional MEMS devices. Like many MEMS devices, these mirrors use parallel plate electrostatic actuators. This technology replicates that functionality by suspending a horizontal piece of nanolaminate foil over an electrode by electroplated nickel posts. This actuator is attached, with another post, to another nanolaminate foil that acts as the mirror surface. Most MEMS devices are produced with integrated circuit lithography techniques that are capable of very small line widths, but are not scalable to large sizes. This technology is very tolerant of lithography errors and can use coarser, printed circuit board lithography techniques that can be scaled to very large sizes. These mirrors use small, lithographically defined actuators and thin nanolaminate foils allowing them to produce deformations over a large area while minimizing weight. This paper will describe a staged program to develop this technology. First-principles models were developed to determine design parameters. Three stages of fabrication will be described starting with a 3 x 3 device using conventional metal foils and epoxy to a 10-across all-metal device with nanolaminate mirror surfaces.

  5. Large-Scale Information Systems

    SciTech Connect

    D. M. Nicol; H. R. Ammerlahn; M. E. Goldsby; M. M. Johnson; D. E. Rhodes; A. S. Yoshimura

    2000-12-01

    Large enterprises are ever more dependent on their Large-Scale Information Systems (LSLS), computer systems that are distinguished architecturally by distributed components--data sources, networks, computing engines, simulations, human-in-the-loop control and remote access stations. These systems provide such capabilities as workflow, data fusion and distributed database access. The Nuclear Weapons Complex (NWC) contains many examples of LSIS components, a fact that motivates this research. However, most LSIS in use grew up from collections of separate subsystems that were not designed to be components of an integrated system. For this reason, they are often difficult to analyze and control. The problem is made more difficult by the size of a typical system, its diversity of information sources, and the institutional complexities associated with its geographic distribution across the enterprise. Moreover, there is no integrated approach for analyzing or managing such systems. Indeed, integrated development of LSIS is an active area of academic research. This work developed such an approach by simulating the various components of the LSIS and allowing the simulated components to interact with real LSIS subsystems. This research demonstrated two benefits. First, applying it to a particular LSIS provided a thorough understanding of the interfaces between the system's components. Second, it demonstrated how more rapid and detailed answers could be obtained to questions significant to the enterprise by interacting with the relevant LSIS subsystems through simulated components designed with those questions in mind. In a final, added phase of the project, investigations were made on extending this research to wireless communication networks in support of telemetry applications.

  6. Decentralized stabilization for a class of continuous-time nonlinear interconnected systems using online learning optimal control approach.

    PubMed

    Liu, Derong; Wang, Ding; Li, Hongliang

    2014-02-01

    In this paper, using a neural-network-based online learning optimal control approach, a novel decentralized control strategy is developed to stabilize a class of continuous-time nonlinear interconnected large-scale systems. First, optimal controllers of the isolated subsystems are designed with cost functions reflecting the bounds of interconnections. Then, it is proven that the decentralized control strategy of the overall system can be established by adding appropriate feedback gains to the optimal control policies of the isolated subsystems. Next, an online policy iteration algorithm is presented to solve the Hamilton-Jacobi-Bellman equations related to the optimal control problem. Through constructing a set of critic neural networks, the cost functions can be obtained approximately, followed by the control policies. Furthermore, the dynamics of the estimation errors of the critic networks are verified to be uniformly and ultimately bounded. Finally, a simulation example is provided to illustrate the effectiveness of the present decentralized control scheme. PMID:24807039

  7. On optimal nonlinear estimation. I - Continuous observation.

    NASA Technical Reports Server (NTRS)

    Lo, J. T.

    1973-01-01

    A generalization of Bucy's (1965) representation theorem is obtained under very weak hypotheses. The generalized theorem is shown to play the same role in the case of general optimal estimation for an arbitrary random process as does the Bucy theorem in the case of optimal filtering for a diffusion process. At least for the models considered, the possibility is pointed out to reduce all sequential estimation problems to the problem of filtering. Hence, filtering theory is seen to represent the core of estimation theory, and is believed to define the direction in which future research should be focused.

  8. Online optimization of storage ring nonlinear beam dynamics

    NASA Astrophysics Data System (ADS)

    Huang, Xiaobiao; Safranek, James

    2015-08-01

    We propose to optimize the nonlinear beam dynamics of existing and future storage rings with direct online optimization techniques. This approach may have crucial importance for the implementation of diffraction limited storage rings. In this paper considerations and algorithms for the online optimization approach are discussed. We have applied this approach to experimentally improve the dynamic aperture of the SPEAR3 storage ring with the robust conjugate direction search method and the particle swarm optimization method. The dynamic aperture was improved by more than 5 mm within a short period of time. Experimental setup and results are presented.

  9. Picosecond laser-driven terahertz radiation from large scale preplasmas of solid targets

    NASA Astrophysics Data System (ADS)

    Liao, G. Q.; Li, Y. T.; Li, C.; Su, L. N.; Zheng, Y.; Liu, M.; Dunn, J.; Nilsen, J.; Hunter, J.; Wang, W. M.; Sheng, Z. M.; Zhang, J.

    2016-05-01

    The terahertz (THz) radiation from the front of solid targets with a large-scale preplasma irradiated by relativistic picosecond laser pulses has been studied. The THz radiation measured in the specular direction increases nonlinearly with laser energy, and an optimal plasma density scale length is observed. Particle-in-cell simulations indicate that this radiation can be attributed to mode conversion. In contrast, the THz radiation near the target normal direction saturates with laser energy and plasma scale length. Unlike the radiation in the specular direction, the transient current formed at the plasma-vacuum interface could be responsible for the radiation near the target normal.

  10. Asynchronous parallel pattern search for nonlinear optimization

    SciTech Connect

    P. D. Hough; T. G. Kolda; V. J. Torczon

    2000-01-01

    Parallel pattern search (PPS) can be quite useful for engineering optimization problems characterized by a small number of variables (say 10--50) and by expensive objective function evaluations such as complex simulations that take from minutes to hours to run. However, PPS, which was originally designed for execution on homogeneous and tightly-coupled parallel machines, is not well suited to the more heterogeneous, loosely-coupled, and even fault-prone parallel systems available today. Specifically, PPS is hindered by synchronization penalties and cannot recover in the event of a failure. The authors introduce a new asynchronous and fault-tolerant parallel pattern search (APPS) method and demonstrate its effectiveness on both simple test problems as well as some engineering optimization problems.
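
    A serial, synchronous compass-search loop illustrates the pattern-search family the record builds on (the asynchrony and fault tolerance that are its actual contribution are not reproduced); the quadratic objective is an assumed cheap stand-in for an expensive simulation.

        import numpy as np

        def objective(x):                               # assumed stand-in for a costly simulation
            return (x[0] - 3.0)**2 + 2.0 * (x[1] + 1.0)**2

        x = np.array([0.0, 0.0])
        f = objective(x)
        step = 1.0
        pattern = np.vstack([np.eye(2), -np.eye(2)])    # compass directions +-e_i

        while step > 1e-6:
            for d in pattern:                           # poll the pattern points
                trial = x + step * d
                f_trial = objective(trial)
                if f_trial < f:
                    x, f = trial, f_trial
                    break
            else:                                       # no improvement: contract the step
                step *= 0.5

        print("pattern-search minimizer:", x, "value:", f)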

  11. Optimal state discrimination and unstructured search in nonlinear quantum mechanics

    NASA Astrophysics Data System (ADS)

    Childs, Andrew M.; Young, Joshua

    2016-02-01

    Nonlinear variants of quantum mechanics can solve tasks that are impossible in standard quantum theory, such as perfectly distinguishing nonorthogonal states. Here we derive the optimal protocol for distinguishing two states of a qubit using the Gross-Pitaevskii equation, a model of nonlinear quantum mechanics that arises as an effective description of Bose-Einstein condensates. Using this protocol, we present an algorithm for unstructured search in the Gross-Pitaevskii model, obtaining an exponential improvement over a previous algorithm of Meyer and Wong. This result establishes a limitation on the effectiveness of the Gross-Pitaevskii approximation. More generally, we demonstrate similar behavior under a family of related nonlinearities, giving evidence that the ability to quickly discriminate nonorthogonal states and thereby solve unstructured search is a generic feature of nonlinear quantum mechanics.

  12. Optimal Variational Asymptotic Method for Nonlinear Fractional Partial Differential Equations

    PubMed Central

    Baranwal, Vipul K.; Pandey, Ram K.

    2014-01-01

    We propose an optimal variational asymptotic method to solve time fractional nonlinear partial differential equations. In the proposed method, an arbitrary number of auxiliary parameters γ0, γ1, γ2,… and auxiliary functions H0(x), H1(x), H2(x),… are introduced in the correction functional of the standard variational iteration method. The optimal values of these parameters are obtained by minimizing the square residual error. To test the method, we apply it to solve two important classes of nonlinear partial differential equations: (1) the fractional advection-diffusion equation with a nonlinear source term and (2) the fractional Swift-Hohenberg equation. Only a few iterations are required to achieve fairly accurate solutions of both the first and second problems.

  13. Nonlinear optimization with linear constraints using a projection method

    NASA Technical Reports Server (NTRS)

    Fox, T.

    1982-01-01

    Nonlinear optimization problems that are encountered in science and industry are examined. A method of projecting the gradient vector onto a set of linear constraints is developed, and a program that uses this method is presented. The algorithm that generates this projection matrix is based on the Gram-Schmidt method and overcomes some of the objections to the Rosen projection method.
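
    The projection idea can be sketched as follows: build an orthonormal basis for the row space of the active linear constraints (the QR factorization below plays the role of Gram-Schmidt) and project the gradient onto the complementary null space so that steps preserve feasibility. The quadratic objective and the single constraint are assumptions for illustration.

        import numpy as np

        c = np.array([3.0, -1.0, 2.0])
        def grad_f(x):                          # gradient of f(x) = 0.5 * ||x - c||^2
            return x - c

        A = np.array([[1.0, 1.0, 1.0]])         # active linear constraint: sum(x) = 1
        Q, _ = np.linalg.qr(A.T)                # orthonormal basis of range(A^T)
        P = np.eye(A.shape[1]) - Q @ Q.T        # projector onto the null space of A

        x = np.array([1.0, 0.0, 0.0])           # feasible starting point (A x = 1)
        for _ in range(200):
            x = x - 0.1 * (P @ grad_f(x))       # projected-gradient step keeps A x = 1

        print("constrained minimizer estimate:", x, "A @ x =", A @ x)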

  14. Route Monopoly and Optimal Nonlinear Pricing

    NASA Technical Reports Server (NTRS)

    Tournut, Jacques

    2003-01-01

    To cope with air traffic growth and congested airports, two solutions are apparent on the supply side: 1) use larger aircraft in the hub and spoke system; or 2) develop new routes through secondary airports. An enlarged route system through secondary airports may increase the proportion of route monopolies in the air transport market. The monopoly optimal nonlinear pricing policy is well known in the case of one dimension (one instrument, one characteristic) but not in the case of several dimensions. This paper explores the robustness of the one-dimensional screening model with respect to increasing the number of instruments and the number of characteristics. The objective of this paper is then to link and fill the gap in both literatures. One of the merits of the screening model has been to show that a great variety of economic questions (nonlinear pricing, product line choice, auction design, income taxation, regulation...) could be handled within the same framework. We study a case of nonlinear pricing (2 instruments (2 routes on which the airline provides customers with services), 2 characteristics (demand of services on these routes) and two values per characteristic (low and high demand of services on these routes)) and we show that none of the conclusions of the one-dimensional analysis remain valid. In particular, the upward incentive compatibility constraint may be binding at the optimum. As a consequence, there may be distortion at the top of the distribution. In addition, we show that the optimal solution often requires some form of bundling; we explain the distortions explicitly and show that it is sometimes optimal for the monopolist to produce only one good (instead of two) or to exclude some buyers from the market. Actually, this means that the monopolist cannot fully apply his monopoly power and is better off selling both goods independently. We then define all the possible solutions in the case of a quadratic cost function for a uniform

  15. Fully localised nonlinear energy growth optimals in pipe flow

    SciTech Connect

    Pringle, Chris C. T.; Willis, Ashley P.; Kerswell, Rich R.

    2015-06-15

    A new, fully localised, energy growth optimal is found over large times and in long pipe domains at a given mass flow rate. This optimal emerges at a threshold disturbance energy below which a nonlinear version of the known (streamwise-independent) linear optimal [P. J. Schmid and D. S. Henningson, “Optimal energy density growth in Hagen-Poiseuille flow,” J. Fluid Mech. 277, 192–225 (1994)] is selected and appears to remain the optimal up until the critical energy at which transition is triggered. The form of this optimal is similar to that found in short pipes [Pringle et al., “Minimal seeds for shear flow turbulence: Using nonlinear transient growth to touch the edge of chaos,” J. Fluid Mech. 702, 415–443 (2012)], but now with full localisation in the streamwise direction. This fully localised optimal perturbation represents the best approximation yet of the minimal seed (the smallest perturbation which is arbitrarily close to states capable of triggering a turbulent episode) for “real” (laboratory) pipe flows. Dependence of the optimal with respect to several parameters has been computed and establishes that the structure is robust.

  16. Fully localised nonlinear energy growth optimals in pipe flow

    NASA Astrophysics Data System (ADS)

    Pringle, Chris C. T.; Willis, Ashley P.; Kerswell, Rich R.

    2015-06-01

    A new, fully localised, energy growth optimal is found over large times and in long pipe domains at a given mass flow rate. This optimal emerges at a threshold disturbance energy below which a nonlinear version of the known (streamwise-independent) linear optimal [P. J. Schmid and D. S. Henningson, "Optimal energy density growth in Hagen-Poiseuille flow," J. Fluid Mech. 277, 192-225 (1994)] is selected and appears to remain the optimal up until the critical energy at which transition is triggered. The form of this optimal is similar to that found in short pipes [Pringle et al., "Minimal seeds for shear flow turbulence: Using nonlinear transient growth to touch the edge of chaos," J. Fluid Mech. 702, 415-443 (2012)], but now with full localisation in the streamwise direction. This fully localised optimal perturbation represents the best approximation yet of the minimal seed (the smallest perturbation which is arbitrarily close to states capable of triggering a turbulent episode) for "real" (laboratory) pipe flows. Dependence of the optimal with respect to several parameters has been computed and establishes that the structure is robust.

  17. Supporting large-scale computational science

    SciTech Connect

    Musick, R., LLNL

    1998-02-19

    Business needs have driven the development of commercial database systems since their inception. As a result, there has been a strong focus on supporting many users, minimizing the potential corruption or loss of data, and maximizing performance metrics like transactions per second, or TPC-C and TPC-D results. It turns out that these optimizations have little to do with the needs of the scientific community, and in particular have little impact on improving the management and use of large-scale high-dimensional data. At the same time, there is an unanswered need in the scientific community for many of the benefits offered by a robust DBMS. For example, tying an ad-hoc query language such as SQL together with a visualization toolkit would be a powerful enhancement to current capabilities. Unfortunately, there has been little emphasis or discussion in the VLDB community on this mismatch over the last decade. The goal of the paper is to identify the specific issues that need to be resolved before large-scale scientific applications can make use of DBMS products. This topic is addressed in the context of an evaluation of commercial DBMS technology applied to the exploration of data generated by the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). The paper describes the data being generated for ASCI as well as current capabilities for interacting with and exploring this data. The attraction of applying standard DBMS technology to this domain is discussed, as well as the technical and business issues that currently make this an infeasible solution.

  18. A relativistic signature in large-scale structure

    NASA Astrophysics Data System (ADS)

    Bartolo, Nicola; Bertacca, Daniele; Bruni, Marco; Koyama, Kazuya; Maartens, Roy; Matarrese, Sabino; Sasaki, Misao; Verde, Licia; Wands, David

    2016-09-01

    In General Relativity, the constraint equation relating metric and density perturbations is inherently nonlinear, leading to an effective non-Gaussianity in the dark matter density field on large scales, even if the primordial metric perturbation is Gaussian. Intrinsic non-Gaussianity in the large-scale dark matter overdensity in GR is real and physical. However, the variance smoothed on a local physical scale is not correlated with the large-scale curvature perturbation, so that there is no relativistic signature in the galaxy bias when using the simplest model of bias. It is an open question whether the observable mass proxies such as luminosity or weak lensing correspond directly to the physical mass in the simple halo bias model. If not, there may be observables that encode this relativistic signature.

  19. Economic dispatch control for large scale thermal power systems

    SciTech Connect

    Not Available

    1986-01-01

    A realistic model for economic dispatch control (EDC) which is valid for large-scale thermal power systems is described. This model properly accounts for the nonlinearities of the generation cost curves introduced by the operation constraints of thermal units. The methodology of this model computes the optimal readjustments of generation schedules such that their total generation output meets the system demand, including the Area Control Error (ACE). The objective function to be minimized is the instantaneous operating cost of a power system subjected to several equality and inequality constraints, which represent the performance characteristics and operation limitations of the various units in the system as well as the active power loss in the transmission network. The generation cost curves and the active losses are represented using one of two models. The first model includes the exact piecewise-linear curve formulation and the well-known loss formula, while the second one considers a second-order polynomial approximation of the generation cost curves and assumes that the active network losses are independent of the generation configuration and have a constant percentage value of the total system demand. Each of these models has its merits for EDC strategies. 10 references, 7 figures, 3 tables.
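
    In the notation of the second model (quadratic cost curves, losses folded into the demand), the dispatch problem being solved has the standard form below; the symbols are generic and not taken from the report:

        \min_{P_1,\dots,P_N} \sum_{i=1}^{N} \left( a_i + b_i P_i + c_i P_i^2 \right)
        \quad \text{subject to} \quad \sum_{i=1}^{N} P_i = D + P_{\mathrm{loss}},
        \qquad P_i^{\min} \le P_i \le P_i^{\max},

    where D is the system demand (including the area control error) and P_loss is either evaluated from the loss formula or taken as a fixed percentage of D.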

  20. An active set algorithm for nonlinear optimization with polyhedral constraints

    NASA Astrophysics Data System (ADS)

    Hager, William W.; Zhang, Hongchao

    2016-08-01

    A polyhedral active set algorithm PASA is developed for solving a nonlinear optimization problem whose feasible set is a polyhedron. Phase one of the algorithm is the gradient projection method, while phase two is any algorithm for solving a linearly constrained optimization problem. Rules are provided for branching between the two phases. Global convergence to a stationary point is established, while asymptotically PASA performs only phase two when either a nondegeneracy assumption holds, or the active constraints are linearly independent and a strong second-order sufficient optimality condition holds.

  1. Nonlinear Inertia Weighted Teaching-Learning-Based Optimization for Solving Global Optimization Problem

    PubMed Central

    Wu, Zong-Sheng; Fu, Wei-Ping; Xue, Ru

    2015-01-01

    The teaching-learning-based optimization (TLBO) algorithm, proposed in recent years, simulates the teaching-learning phenomenon of a classroom to effectively solve global optimization of multidimensional, linear, and nonlinear problems over continuous spaces. In this paper, an improved teaching-learning-based optimization algorithm is presented, which is called the nonlinear inertia weighted teaching-learning-based optimization (NIWTLBO) algorithm. This algorithm introduces a nonlinear inertia weighted factor into the basic TLBO to control the memory rate of learners and uses a dynamic inertia weighted factor to replace the original random number in the teacher phase and learner phase. The proposed algorithm is tested on a number of benchmark functions, and its performance comparisons are provided against the basic TLBO and some other well-known optimization algorithms. The experimental results show that the proposed algorithm has a faster convergence rate and better performance than the basic TLBO and some other algorithms as well. PMID:26421005
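
    For concreteness, a compact TLBO loop is sketched below with an assumed nonlinear inertia weight w(t) replacing the random scaling of the teacher- and learner-phase moves; the exact weighting used in the NIWTLBO paper is not reproduced, and the benchmark function is arbitrary.

        import numpy as np

        def cost(x):                                   # assumed benchmark objective
            return np.sum((x - 1.5)**2)

        rng = np.random.default_rng(7)
        n, dim, iters = 20, 5, 200
        lo, hi = -5.0, 5.0
        X = rng.uniform(lo, hi, (n, dim))
        F = np.array([cost(x) for x in X])

        for t in range(iters):
            w = 1.0 - (t / iters)**2                   # assumed nonlinear inertia weight
            # teacher phase: move learners toward the best solution
            teacher, mean = X[np.argmin(F)], X.mean(axis=0)
            TF = rng.integers(1, 3)                    # teaching factor in {1, 2}
            for i in range(n):
                new = np.clip(X[i] + w * (teacher - TF * mean), lo, hi)
                f_new = cost(new)
                if f_new < F[i]:
                    X[i], F[i] = new, f_new
            # learner phase: pairwise interaction between learners
            for i in range(n):
                j = rng.integers(n)
                if j == i:
                    continue
                d = X[i] - X[j] if F[i] < F[j] else X[j] - X[i]
                new = np.clip(X[i] + w * d, lo, hi)
                f_new = cost(new)
                if f_new < F[i]:
                    X[i], F[i] = new, f_new

        print("best value found:", F.min())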

  2. Parallel-vector computation for structural analysis and nonlinear unconstrained optimization problems

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.

    1990-01-01

    Practical engineering applications can often be formulated in the form of a constrained optimization problem. There are several solution algorithms for solving a constrained optimization problem. One approach is to convert a constrained problem into a series of unconstrained problems. Furthermore, unconstrained solution algorithms can be used as part of the constrained solution algorithms. Structural optimization is an iterative process where one starts with an initial design, and a finite element structural analysis is then performed to calculate the response of the system (such as displacements, stresses, eigenvalues, etc.). Based upon the sensitivity information on the objective and constraint functions, an optimizer such as ADS or IDESIGN can be used to find the new, improved design. For the structural analysis phase, the equation solver for the system of simultaneous, linear equations plays a key role since it is needed for either static, or eigenvalue, or dynamic analysis. For practical, large-scale structural analysis-synthesis applications, computational time can be excessively large. Thus, it is necessary to have a new structural analysis-synthesis code which employs new solution algorithms to exploit both parallel and vector capabilities offered by modern, high performance computers such as the Convex, Cray-2 and Cray-YMP computers. The objective of this research project is, therefore, to incorporate the latest development in the parallel-vector equation solver, PVSOLVE, into the widely popular finite-element production code, such as the SAP-4. Furthermore, several nonlinear unconstrained optimization subroutines have also been developed and tested under a parallel computer environment. The unconstrained optimization subroutines are not only useful in their own right, but they can also be incorporated into a more popular constrained optimization code, such as ADS.

  3. Optimal bipedal interactions with dynamic terrain: synthesis and analysis via nonlinear programming

    NASA Astrophysics Data System (ADS)

    Hubicki, Christian; Goldman, Daniel; Ames, Aaron

    In terrestrial locomotion, gait dynamics and motor control behaviors are tuned to interact efficiently and stably with the dynamics of the terrain (i.e. terradynamics). This controlled interaction must be particularly thoughtful in bipeds, as their reduced contact points render them highly susceptible to falls. While bipedalism under rigid terrain assumptions is well-studied, insights for two-legged locomotion on soft terrain, such as sand and dirt, are comparatively sparse. We seek an understanding of how biological bipeds stably and economically negotiate granular media, with an eye toward imbuing those abilities in bipedal robots. We present a trajectory optimization method for controlled systems subject to granular intrusion. By formulating a large-scale nonlinear program (NLP) with reduced-order resistive force theory (RFT) models and jamming cone dynamics, the optimized motions are informed and shaped by the dynamics of the terrain. Using a variant of direct collocation methods, we can express all optimization objectives and constraints in closed-form, resulting in rapid solving by standard NLP solvers, such as IPOPT. We employ this tool to analyze emergent features of bipedal locomotion in granular media, with an eye toward robotic implementation.
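
    The trajectory-optimization machinery described above can be miniaturized to show the structure of a direct-collocation NLP: the decision variables are states and controls at the nodes, the dynamics enter as closed-form trapezoidal defect constraints, and the whole problem is handed to an off-the-shelf NLP solver (SciPy's SLSQP below stands in for IPOPT). The double-integrator model and rest-to-rest task are assumptions for illustration, not the legged/granular model of the work.

        import numpy as np
        from scipy.optimize import minimize

        N, T = 20, 2.0                              # collocation nodes and horizon (s)
        h = T / (N - 1)

        def unpack(z):
            return z[:N], z[N:2*N], z[2*N:]         # positions x, velocities v, controls u

        def effort(z):
            _, _, u = unpack(z)
            return h * np.sum(u**2)                 # control-effort objective

        def defects(z):                             # trapezoidal defects for x' = v, v' = u
            x, v, u = unpack(z)
            dx = x[1:] - x[:-1] - 0.5 * h * (v[1:] + v[:-1])
            dv = v[1:] - v[:-1] - 0.5 * h * (u[1:] + u[:-1])
            return np.concatenate([dx, dv])

        def boundary(z):                            # rest-to-rest transfer over one metre
            x, v, _ = unpack(z)
            return np.array([x[0], v[0], x[-1] - 1.0, v[-1]])

        res = minimize(effort, np.zeros(3 * N), method="SLSQP",
                       constraints=[{"type": "eq", "fun": defects},
                                    {"type": "eq", "fun": boundary}])
        x_opt, _, _ = unpack(res.x)
        print("final position:", x_opt[-1], "objective:", res.fun)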

  4. Gravity and large-scale nonlocal bias

    NASA Astrophysics Data System (ADS)

    Chan, Kwan Chuen; Scoccimarro, Román; Sheth, Ravi K.

    2012-04-01

    For Gaussian primordial fluctuations the relationship between galaxy and matter overdensities, bias, is most often assumed to be local at the time of observation in the large-scale limit. This hypothesis is, however, unstable under time evolution; we provide proofs under several (increasingly realistic) sets of assumptions. In the simplest toy model galaxies are created locally and linearly biased at a single formation time, and subsequently move with the dark matter (no velocity bias) conserving their comoving number density (no merging). We show that, after this formation time, the bias becomes unavoidably nonlocal and nonlinear at large scales. We identify the nonlocal gravitationally induced fields in which the galaxy overdensity can be expanded, showing that they can be constructed out of the invariants of the deformation tensor (Galileons), the main signature of which is a quadrupole field in second-order perturbation theory. In addition, we show that this result persists if we include an arbitrary evolution of the comoving number density of tracers. We then include velocity bias, and show that new contributions appear; these are related to the breaking of Galilean invariance of the bias relation, a dipole field being the signature at second order. We test these predictions by studying the dependence of halo overdensities in cells of fixed dark matter density: measurements in simulations show that departures from the mean bias relation are strongly correlated with the nonlocal gravitationally induced fields identified by our formalism, suggesting that the halo distribution at the present time is indeed more closely related to the mass distribution at an earlier rather than present time. However, the nonlocality seen in the simulations is not fully captured by assuming local bias in Lagrangian space. The effects on nonlocal bias seen in the simulations are most important for the most biased halos, as expected from our predictions. Accounting for these

  5. Nonlinear Global Optimization Using Curdling Algorithm in Mathematica Environment

    Energy Science and Technology Software Center (ESTSC)

    1997-08-05

    An algorithm for performing optimization, which is a derivative-free, grid-refinement approach to nonlinear optimization, was developed and implemented in software as OPTIMIZE. This approach overcomes a number of deficiencies in existing approaches. Most notably, it finds extremal regions rather than only single extremal points. The program is interactive and collects information on control parameters and constraints using menus. For up to two (and potentially three) dimensions, function convergence is displayed graphically. Because the algorithm does not compute derivatives, gradients, or vectors, it is numerically stable. It can find all the roots of a polynomial in one pass. It is an inherently parallel algorithm. OPTIMIZE-M is a modification of OPTIMIZE designed for use within the Mathematica environment created by Wolfram Research.

  6. Optimal spacecraft attitude control using collocation and nonlinear programming

    NASA Astrophysics Data System (ADS)

    Herman, A. L.; Conway, B. A.

    1992-10-01

    Direct collocation with nonlinear programming (DCNLP) is employed to find the optimal open-loop control histories for detumbling a disabled satellite. The controls are torques and forces applied to the docking arm and joint and torques applied about the body axes of the OMV. Solutions are obtained for cases in which various constraints are placed on the controls and in which the number of controls is reduced or increased from that considered in Conway and Widhalm (1986). DCNLP works well when applied to the optimal control problem of satellite attitude control. The formulation is straightforward and produces good results in a relatively small amount of time on a Cray X/MP with no a priori information about the optimal solution. The addition of joint acceleration to the controls significantly reduces the control magnitudes and optimal cost. In all cases, the torques and accelerations are modest and the optimal cost is very modest.

  7. Lagrangian space consistency relation for large scale structure

    NASA Astrophysics Data System (ADS)

    Horn, Bart; Hui, Lam; Xiao, Xiao

    2015-09-01

    Consistency relations, which relate the squeezed limit of an (N+1)-point correlation function to an N-point function, are non-perturbative symmetry statements that hold even if the associated high momentum modes are deep in the nonlinear regime and astrophysically complex. Recently, Kehagias & Riotto and Peloso & Pietroni discovered a consistency relation applicable to large scale structure. We show that this can be recast into a simple physical statement in Lagrangian space: that the squeezed correlation function (suitably normalized) vanishes. This holds regardless of whether the correlation observables are at the same time or not, and regardless of whether multiple-streaming is present. The simplicity of this statement suggests that an analytic understanding of large scale structure in the nonlinear regime may be particularly promising in Lagrangian space.

  8. Large-Scale Reform Comes of Age

    ERIC Educational Resources Information Center

    Fullan, Michael

    2009-01-01

    This article reviews the history of large-scale education reform and makes the case that large-scale or whole system reform policies and strategies are becoming increasingly evident. The review briefly addresses the pre-1997 period, concluding that, while the pressure for reform was mounting, there were very few examples of deliberate or…

  9. Large-scale infrared scene projectors

    NASA Astrophysics Data System (ADS)

    Murray, Darin A.

    1999-07-01

    Large-scale infrared scene projectors typically have unique opto-mechanical characteristics associated with their application. This paper outlines two large-scale zoom lens assemblies with different environmental and package constraints. Various challenges and their respective solutions are discussed and presented.

  10. Design of Life Extending Controls Using Nonlinear Parameter Optimization

    NASA Technical Reports Server (NTRS)

    Lorenzo, Carl F.; Holmes, Michael S.; Ray, Asok

    1998-01-01

    This report presents the conceptual development of a life extending control system where the objective is to achieve high performance and structural durability of the plant. A life extending controller is designed for a reusable rocket engine via damage mitigation in both the fuel and oxidizer turbines while achieving high performance for transient responses of the combustion chamber pressure and the O2/H2 mixture ratio. This design approach makes use of a combination of linear and nonlinear controller synthesis techniques and also allows adaptation of the life extending controller module to augment a conventional performance controller of a rocket engine. The nonlinear aspect of the design is achieved using nonlinear parameter optimization of a prescribed control structure.

  11. OPT++: An object-oriented class library for nonlinear optimization

    SciTech Connect

    Meza, J.C.

    1994-03-01

    Object-oriented programming is becoming a popular way of developing new software. The promise of this new programming paradigm is that software developed through these concepts will be more reliable and easier to re-use, thereby decreasing the time and cost of the software development cycle. This report describes the development of a C++ class library for nonlinear optimization. Using object-oriented techniques, this new library was designed so that the interface is easy to use while being general enough so that new optimization algorithms can be added easily to the existing framework.
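
    OPT++ itself is a C++ library and its actual interface is not reproduced in the record, but the design idea, a common problem/solver interface behind which new algorithms can be added without changing calling code, can be sketched in a language-neutral way. The class and method names below are invented for illustration and are not the OPT++ API.

```python
from abc import ABC, abstractmethod
import numpy as np

class Problem:
    """Bundles an objective with its starting point (illustrative; not the OPT++ API)."""
    def __init__(self, fun, x0):
        self.fun, self.x0 = fun, np.asarray(x0, float)

class Optimizer(ABC):
    """Common solver interface; new algorithms subclass this without touching callers."""
    @abstractmethod
    def solve(self, problem: Problem) -> np.ndarray: ...

class CoordinateSearch(Optimizer):
    """A deliberately simple derivative-free method standing in for a real algorithm."""
    def solve(self, problem, step=0.5, iters=200):
        x = problem.x0.copy()
        for _ in range(iters):
            for i in range(x.size):
                for s in (+step, -step):
                    trial = x.copy()
                    trial[i] += s
                    if problem.fun(trial) < problem.fun(x):
                        x = trial
            step *= 0.95
        return x

# calling code depends only on the Optimizer interface, so algorithms are interchangeable
solver: Optimizer = CoordinateSearch()
print(solver.solve(Problem(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2, [0.0, 0.0])))
```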

  12. Combining flux and energy balance analysis to model large-scale biochemical networks.

    PubMed

    Heuett, William J; Qian, Hong

    2006-12-01

    Stoichiometric Network Theory is a constraints-based, optimization approach for quantitative analysis of the phenotypes of large-scale biochemical networks that avoids the use of detailed kinetics. This approach uses the reaction stoichiometric matrix in conjunction with constraints provided by flux balance and energy balance to guarantee mass conserved and thermodynamically allowable predictions. However, the flux and energy balance constraints have not been effectively applied simultaneously on the genome scale because optimization under the combined constraints is non-linear. In this paper, a sequential quadratic programming algorithm that solves the non-linear optimization problem is introduced. A simple example and the system of fermentation in Saccharomyces cerevisiae are used to illustrate the new method. The algorithm allows the use of non-linear objective functions. As a result, we suggest a novel optimization with respect to the heat dissipation rate of a system. We also emphasize the importance of incorporating interactions between a model network and its surroundings. PMID:17245812
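
    As a toy illustration of the kind of problem the paper solves, the sketch below maximizes a flux objective subject to linear mass-balance constraints (S v = 0) and a nonlinear, energy-like inequality, handing the whole thing to an SQP-type solver. The stoichiometric matrix, bounds, and the quadratic "dissipation" constraint are invented placeholders, not the fermentation network studied in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# toy stoichiometric matrix (3 metabolites x 5 reactions), not a real network
S = np.array([[ 1, -1,  0, -1,  0],
              [ 0,  1, -1,  0,  0],
              [ 0,  0,  1,  1, -1]], float)
lb, ub = np.zeros(5), np.full(5, 10.0)

objective = lambda v: -v[4]                      # maximize the "biomass" flux v5
mass_balance = {"type": "eq", "fun": lambda v: S @ v}
# toy nonlinear (energy-like) constraint keeping total dissipation bounded
energy = {"type": "ineq", "fun": lambda v: 50.0 - np.sum(v ** 2)}

res = minimize(objective, x0=np.ones(5),
               bounds=list(zip(lb, ub)),
               constraints=[mass_balance, energy],
               method="SLSQP")                   # an SQP-type solver
print(res.x, -res.fun)
```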

  13. Continuation and bifurcation analysis of large-scale dynamical systems with LOCA.

    SciTech Connect

    Salinger, Andrew Gerhard; Phipps, Eric Todd; Pawlowski, Roger Patrick

    2010-06-01

    Dynamical systems theory provides a powerful framework for understanding the behavior of complex evolving systems. However, applying these ideas to large-scale dynamical systems such as discretizations of multi-dimensional PDEs is challenging. Such systems can easily give rise to problems with billions of dynamical variables, requiring specialized numerical algorithms implemented on high performance computing architectures with thousands of processors. This talk will describe LOCA, the Library of Continuation Algorithms, a suite of scalable continuation and bifurcation tools optimized for these types of systems that is part of the Trilinos software collection. In particular, we will describe continuation and bifurcation analysis techniques designed for large-scale dynamical systems that are based on specialized parallel linear algebra methods for solving augmented linear systems. We will also discuss several other Trilinos tools providing nonlinear solvers (NOX), eigensolvers (Anasazi), iterative linear solvers (AztecOO and Belos), preconditioners (Ifpack, ML, Amesos) and parallel linear algebra data structures (Epetra and Tpetra) that LOCA can leverage for efficient and scalable analysis of large-scale dynamical systems.

  14. Passive and Active Vibrations Allow Self-Organization in Large-Scale Electromechanical Systems

    NASA Astrophysics Data System (ADS)

    Buscarino, Arturo; Famoso, Carlo; Fortuna, Luigi; Frasca, Mattia

    2016-06-01

    In this paper, the role of passive and active vibrations for the control of nonlinear large-scale electromechanical systems is investigated. The mathematical model of the system is discussed and detailed experimental results are shown in order to prove that coupling the effects of feedback and vibrations elicited by proper control signals makes it possible to regularize imperfect uncertain large-scale systems.

  15. Synthesis of small and large scale dynamos

    NASA Astrophysics Data System (ADS)

    Subramanian, Kandaswamy

    Using a closure model for the evolution of magnetic correlations, we uncover an interesting, plausible saturated state of the small-scale fluctuation dynamo (SSD) and a novel analogy between quantum mechanical tunnelling and the generation of large-scale fields. Large-scale fields develop via the α-effect, but as magnetic helicity can only change on a resistive timescale, the time it takes to organize the field into large scales increases with magnetic Reynolds number. This is very similar to the results obtained from simulations using the full MHD equations.

  16. Structural Optimization for Reliability Using Nonlinear Goal Programming

    NASA Technical Reports Server (NTRS)

    El-Sayed, Mohamed E.

    1999-01-01

    This report details the development of a reliability-based multi-objective design tool for solving structural optimization problems. Based on two different optimization techniques, namely sequential unconstrained minimization and nonlinear goal programming, the developed design method has the capability to take into account the effects of variability on the proposed design through a user-specified reliability design criterion. In its sequential unconstrained minimization mode, the developed design tool uses a composite objective function, in conjunction with weight-ordered design objectives, in order to take into account conflicting and multiple design criteria. Design criteria of interest include structural weight, load-induced stress and deflection, and mechanical reliability. The nonlinear goal programming mode, on the other hand, provides for a design method that eliminates the difficulty of having to define an objective function and constraints, while at the same time having the capability of handling rank-ordered design objectives or goals. For simulation purposes the design of a pressure vessel cover plate was undertaken as a test bed for the newly developed design tool. The formulation of this structural optimization problem into sequential unconstrained minimization and goal programming form is presented. The resulting optimization problem was solved using: (i) the linear extended interior penalty function method algorithm; and (ii) Powell's conjugate directions method. Both single and multi-objective numerical test cases are included demonstrating the design tool's capabilities as it applies to this design problem.
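
    As a compact illustration of the sequential unconstrained minimization idea, in which a penalty on constraint violation is increased over a sequence of unconstrained solves, the sketch below uses a plain exterior quadratic penalty on a two-variable toy problem rather than the linear extended interior penalty function of the report; the objective, constraint, and penalty schedule are illustrative assumptions, not the cover-plate model.

```python
import numpy as np
from scipy.optimize import minimize

# toy stand-in for a sizing problem: minimize an objective subject to g(x) <= 0
weight = lambda x: x[0] ** 2 + x[1] ** 2
g = lambda x: 2.0 - x[0] - x[1]                      # "stress" constraint: x1 + x2 >= 2

def penalized(x, r):
    return weight(x) + r * max(g(x), 0.0) ** 2       # exterior quadratic penalty

x = np.array([0.0, 0.0])
for r in [1.0, 10.0, 100.0, 1000.0]:                 # sequential unconstrained solves,
    x = minimize(lambda z: penalized(z, r), x).x     # penalty tightened each pass
print(x, weight(x), g(x))                            # -> approaches (1, 1), g -> 0
```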

  17. Large-scale regions of antimatter

    SciTech Connect

    Grobov, A. V. Rubin, S. G.

    2015-07-15

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  18. Spin glasses and nonlinear constraints in portfolio optimization

    NASA Astrophysics Data System (ADS)

    Andrecut, M.

    2014-01-01

    We discuss the portfolio optimization problem with the obligatory deposits constraint. Recently it has been shown that as a consequence of this nonlinear constraint, the solution consists of an exponentially large number of optimal portfolios, completely different from each other, and extremely sensitive to any changes in the input parameters of the problem, making the concept of rational decision making questionable. Here we reformulate the problem using a quadratic obligatory deposits constraint, and we show that from the physics point of view, finding an optimal portfolio amounts to calculating the mean-field magnetizations of a random Ising model with the constraint of a constant magnetization norm. We show that the model reduces to an eigenproblem, with 2N solutions, where N is the number of assets defining the portfolio. Also, in order to illustrate our results, we present a detailed numerical example of a portfolio of several risky common stocks traded on the Nasdaq Market.
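
    The reduction described above, in which stationary points of a quadratic form under a fixed-norm constraint are eigenvectors of the coupling matrix, giving 2N candidate portfolios once both signs are counted, can be checked numerically in a few lines; the random symmetric matrix below is a placeholder, not market data.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5
A = rng.normal(size=(N, N))
C = (A + A.T) / 2                     # random symmetric "coupling" matrix (placeholder)

# Stationary points of w^T C w subject to ||w||^2 = const are eigenvectors of C
# (a Lagrange-multiplier calculation), so counting both signs there are 2N candidates.
vals, vecs = np.linalg.eigh(C)
candidates = [s * vecs[:, k] for k in range(N) for s in (+1.0, -1.0)]
print(len(candidates), "candidate portfolios; objective values:", np.round(vals, 3))
```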

  19. Topology optimization for nonlinear dynamic problems: Considerations for automotive crashworthiness

    NASA Astrophysics Data System (ADS)

    Kaushik, Anshul; Ramani, Anand

    2014-04-01

    Crashworthiness of automotive structures is most often engineered after an optimal topology has been arrived at using other design considerations. This study is an attempt to incorporate crashworthiness requirements upfront in the topology synthesis process using a mathematically consistent framework. It proposes the use of equivalent linear systems from the nonlinear dynamic simulation in conjunction with a discrete-material topology optimizer. Velocity and acceleration constraints are consistently incorporated in the optimization set-up. Issues specific to crash problems due to the explicit solution methodology employed, nature of the boundary conditions imposed on the structure, etc. are discussed and possible resolutions are proposed. A demonstration of the methodology on two-dimensional problems that address some of the structural requirements and the types of loading typical of frontal and side impact is provided in order to show that this methodology has the potential for topology synthesis incorporating crashworthiness requirements.

  20. A hybrid nonlinear programming method for design optimization

    NASA Technical Reports Server (NTRS)

    Rajan, S. D.

    1986-01-01

    Solutions to engineering design problems formulated as nonlinear programming (NLP) problems usually require the use of more than one optimization technique. Moreover, the interaction between the user (analysis/synthesis) program and the NLP system can lead to interface, scaling, or convergence problems. An NLP solution system is presented that seeks to solve these problems by providing a programming system to ease the user-system interface. A simple set of rules is used to select an optimization technique or to switch from one technique to another in an attempt to detect, diagnose, and solve some potential problems. Numerical examples involving finite element based optimal design of space trusses and rotor bearing systems are used to illustrate the applicability of the proposed methodology.

  1. Optimal analytic method for the nonlinear Hasegawa-Mima equation

    NASA Astrophysics Data System (ADS)

    Baxter, Mathew; Van Gorder, Robert A.; Vajravelu, Kuppalapalle

    2014-05-01

    The Hasegawa-Mima equation is a nonlinear partial differential equation that describes the electric potential due to a drift wave in a plasma. In the present paper, we apply the method of homotopy analysis to a slightly more general Hasegawa-Mima equation, which accounts for hyper-viscous damping or viscous dissipation. First, we outline the method for the general initial/boundary value problem over a compact rectangular spatial domain. We use a two-stage method, where both the convergence control parameter and the auxiliary linear operator are optimally selected to minimize the residual error due to the approximation. To do the latter, we consider a family of operators parameterized by a constant which gives the decay rate of the solutions. After outlining the general method, we consider a number of concrete examples in order to demonstrate the utility of this approach. The results enable us to study properties of the initial/boundary value problem for the generalized Hasegawa-Mima equation. In several cases considered, we are able to obtain solutions with extremely small residual errors after relatively few iterations are computed (residual errors on the order of 10^-15 are found in multiple cases after only three iterations). The results demonstrate that selecting a parameterized auxiliary linear operator can be extremely useful for minimizing residual errors when used concurrently with the optimal homotopy analysis method, suggesting that this approach can prove useful for a number of nonlinear partial differential equations arising in physics and nonlinear mechanics.

  2. Photorealistic large-scale urban city model reconstruction.

    PubMed

    Poullis, Charalambos; You, Suya

    2009-01-01

    The rapid and efficient creation of virtual environments has become a crucial part of virtual reality applications. In particular, civil and defense applications often require and employ detailed models of operations areas for training, simulations of different scenarios, planning for natural or man-made events, monitoring, surveillance, games, and films. A realistic representation of the large-scale environments is therefore imperative for the success of such applications since it increases the immersive experience of its users and helps reduce the difference between physical and virtual reality. However, the task of creating such large-scale virtual environments still remains a time-consuming and manual work. In this work, we propose a novel method for the rapid reconstruction of photorealistic large-scale virtual environments. First, a novel, extendible, parameterized geometric primitive is presented for the automatic building identification and reconstruction of building structures. In addition, buildings with complex roofs containing complex linear and nonlinear surfaces are reconstructed interactively using a linear polygonal and a nonlinear primitive, respectively. Second, we present a rendering pipeline for the composition of photorealistic textures, which unlike existing techniques, can recover missing or occluded texture information by integrating multiple information captured from different optical sensors (ground, aerial, and satellite). PMID:19423889

  3. Fitting Nonlinear Curves by use of Optimization Techniques

    NASA Technical Reports Server (NTRS)

    Hill, Scott A.

    2005-01-01

    MULTIVAR is a FORTRAN 77 computer program that fits one of the members of a set of six multivariable mathematical models (five of which are nonlinear) to a multivariable set of data. The inputs to MULTIVAR include the data for the independent and dependent variables plus the user's choice of one of the models, one of the three optimization engines, and convergence criteria. By use of the chosen optimization engine, MULTIVAR finds values for the parameters of the chosen model so as to minimize the sum of squares of the residuals. One of the optimization engines implements a routine, developed in 1982, that utilizes the Broyden-Fletcher-Goldfarb-Shanno (BFGS) variable-metric method for unconstrained minimization in conjunction with a one-dimensional search technique that finds the minimum of an unconstrained function by polynomial interpolation and extrapolation without first finding bounds on the solution. The second optimization engine is a faster and more robust commercially available code, denoted Design Optimization Tool, that also uses the BFGS method. The third optimization engine is a robust and relatively fast routine that implements the Levenberg-Marquardt algorithm.
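
    MULTIVAR itself is FORTRAN 77 and its six built-in models are not listed in the record; the core recipe, forming the sum of squared residuals of a chosen nonlinear model and minimizing it with a BFGS variable-metric method, can nevertheless be sketched as follows, with a two-parameter exponential model and synthetic data used purely as stand-ins.

```python
import numpy as np
from scipy.optimize import minimize

# synthetic data from an assumed two-parameter exponential model y = a * exp(b * x)
x = np.linspace(0.0, 2.0, 25)
rng = np.random.default_rng(1)
y = 3.0 * np.exp(-1.2 * x) + 0.05 * rng.normal(size=x.size)

model = lambda p, x: p[0] * np.exp(p[1] * x)
sse = lambda p: np.sum((y - model(p, x)) ** 2)       # sum of squared residuals

res = minimize(sse, x0=[1.0, 0.0], method="BFGS")    # variable-metric (BFGS) minimization
print(res.x, res.fun)                                # -> roughly [3.0, -1.2]
```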

  4. Simulation-based optimal Bayesian experimental design for nonlinear systems

    SciTech Connect

    Huan, Xun; Marzouk, Youssef M.

    2013-01-01

    The optimal selection of experimental conditions is essential to maximizing the value of data for inference and prediction, particularly in situations where experiments are time-consuming and expensive to conduct. We propose a general mathematical framework and an algorithmic approach for optimal experimental design with nonlinear simulation-based models; in particular, we focus on finding sets of experiments that provide the most information about targeted sets of parameters. Our framework employs a Bayesian statistical setting, which provides a foundation for inference from noisy, indirect, and incomplete data, and a natural mechanism for incorporating heterogeneous sources of information. An objective function is constructed from information theoretic measures, reflecting expected information gain from proposed combinations of experiments. Polynomial chaos approximations and a two-stage Monte Carlo sampling method are used to evaluate the expected information gain. Stochastic approximation algorithms are then used to make optimization feasible in computationally intensive and high-dimensional settings. These algorithms are demonstrated on model problems and on nonlinear parameter inference problems arising in detailed combustion kinetics.
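
    The two-stage (nested) Monte Carlo estimator of expected information gain that underlies this kind of design can be written down in a few lines. The scalar forward model, Gaussian prior, and noise level below are illustrative assumptions, and neither the polynomial chaos surrogate nor the stochastic approximation optimizer of the paper is included; a brute-force grid stands in for the design search.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.1                                            # observation noise (assumed)
G = lambda theta, d: np.sin(theta * d)                 # toy nonlinear forward model

def expected_information_gain(d, n_outer=500, n_inner=500):
    """Nested Monte Carlo estimate of the expected KL divergence from prior to posterior."""
    theta = rng.normal(size=n_outer)                   # draws from a standard-normal prior
    y = G(theta, d) + sigma * rng.normal(size=n_outer) # simulated data for each draw
    log_like = -0.5 * ((y - G(theta, d)) / sigma) ** 2 # log p(y_i | theta_i, d) + const
    theta_in = rng.normal(size=n_inner)                # inner draws for the evidence term
    log_evid = np.array([
        np.log(np.mean(np.exp(-0.5 * ((yi - G(theta_in, d)) / sigma) ** 2)))
        for yi in y
    ])                                                 # log p(y_i | d) + the same const
    return np.mean(log_like - log_evid)                # constants cancel in the difference

# crude search over a 1-D design variable (a stand-in for the paper's optimizer)
designs = np.linspace(0.1, 3.0, 15)
eigs = [expected_information_gain(d) for d in designs]
print("best design:", designs[int(np.argmax(eigs))])
```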

  5. Optimal Complexity of Nonlinear Rainfall-Runoff Models

    NASA Astrophysics Data System (ADS)

    Schoups, G.; Vrugt, J.; van de Giesen, N.; Fenicia, F.

    2008-12-01

    Identification of an appropriate level of model complexity to accurately translate rainfall into runoff remains an unresolved issue. The model has to be complex enough to generate accurate predictions, but not too complex such that its parameters cannot be reliably estimated from the data. Earlier work with linear models (Jakeman and Hornberger, 1993) concluded that a model with 4 to 5 parameters is sufficient. However, more recent results with a nonlinear model (Vrugt et al., 2006) suggest that 10 or more parameters may be identified from daily rainfall-runoff time-series. The goal here is to systematically investigate optimal complexity of nonlinear rainfall-runoff models, yielding accurate models with identifiable parameters. Our methodology consists of four steps: (i) a priori specification of a family of model structures from which to pick an optimal one, (ii) parameter optimization of each model structure to estimate empirical or calibration error, (iii) estimation of parameter uncertainty of each calibrated model structure, and (iv) estimation of prediction error of each calibrated model structure. For the first step we formulate a flexible model structure that allows us to systematically vary the complexity with which physical processes are simulated. The second and third steps are achieved using a recently developed Markov chain Monte Carlo algorithm (DREAM), which minimizes calibration error yielding optimal parameter values and their underlying posterior probability density function. Finally, we compare several methods for estimating prediction error of each model structure, including statistical methods based on information criteria and split-sample calibration-validation. Estimates of parameter uncertainty and prediction error are then used to identify optimal complexity for rainfall-runoff modeling, using data from dry and wet MOPEX catchments as case studies.

  6. A forward method for optimal stochastic nonlinear and adaptive control

    NASA Technical Reports Server (NTRS)

    Bayard, David S.

    1988-01-01

    A computational approach is taken to solve the optimal nonlinear stochastic control problem. The approach is to systematically solve the stochastic dynamic programming equations forward in time, using a nested stochastic approximation technique. Although computationally intensive, this provides a straightforward numerical solution for this class of problems and provides an alternative to the usual dimensionality problem associated with solving the dynamic programming equations backward in time. It is shown that the cost degrades monotonically as the complexity of the algorithm is reduced. This provides a strategy for suboptimal control with clear performance/computation tradeoffs. A numerical study focusing on a generic optimal stochastic adaptive control example is included to demonstrate the feasibility of the method.

  7. Survey on large scale system control methods

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1987-01-01

    The problems inherent to large scale systems such as power networks, communication networks, and economic or ecological systems were studied. The increase in size and flexibility of future spacecraft has put those dynamical systems into the category of large scale systems, and tools specific to the class of large systems are being sought to design control systems that can guarantee more stability and better performance. Among several survey papers, reference was found to a thorough investigation of decentralized control methods. Especially helpful was the classification made of the different existing approaches to deal with large scale systems. A very similar classification is used, even though the papers surveyed are somewhat different from the ones reviewed in other papers. Special attention is given to the applicability of the existing methods to controlling large mechanical systems like large space structures. Some recent developments are added to this survey.

  8. Nonlinear optimization of acoustic energy harvesting using piezoelectric devices.

    PubMed

    Lallart, Mickaël; Guyomar, Daniel; Richard, Claude; Petit, Lionel

    2010-11-01

    In the first part of the paper, a single degree-of-freedom model of a vibrating membrane with piezoelectric inserts is introduced and is initially applied to the case when a plane wave is incident with frequency close to one of the resonance frequencies. The model is a prototype of a device which converts ambient acoustical energy to electrical energy with the use of piezoelectric devices. The paper then proposes an enhancement of the energy harvesting process using a nonlinear processing of the output voltage of piezoelectric actuators, and suggests that this improves the energy conversion and reduces the sensitivity to frequency drifts. A theoretical discussion is given for the electrical power that can be expected making use of various models. This and supporting experimental results suggest that a nonlinear optimization approach allows a gain of up to 10 in harvested energy and a doubling of the bandwidth. A model is introduced in the latter part of the paper for predicting the behavior of the energy-harvesting device with changes in acoustic frequency, this model taking into account the damping effect and the frequency changes introduced by the nonlinear processes in the device. PMID:21110569

  9. The large-scale distribution of galaxies

    NASA Technical Reports Server (NTRS)

    Geller, Margaret J.

    1989-01-01

    The spatial distribution of galaxies in the universe is characterized on the basis of the six completed strips of the Harvard-Smithsonian Center for Astrophysics redshift-survey extension. The design of the survey is briefly reviewed, and the results are presented graphically. Vast low-density voids similar to the void in Bootes are found, almost completely surrounded by thin sheets of galaxies. Also discussed are the implications of the results for the survey sampling problem, the two-point correlation function of the galaxy distribution, the possibility of detecting large-scale coherent flows, theoretical models of large-scale structure, and the identification of groups and clusters of galaxies.

  10. Moon-based Earth Observation for Large Scale Geoscience Phenomena

    NASA Astrophysics Data System (ADS)

    Guo, Huadong; Liu, Guang; Ding, Yixing

    2016-07-01

    The capability of Earth observation for large, global-scale natural phenomena needs to be improved, and new observing platforms are needed. In recent years we have studied the concept of the Moon as an Earth observation platform. Compared with man-made satellite platforms, Moon-based Earth observation can obtain multi-spherical, full-band, active and passive information, and offers the following advantages: large observation range, variable view angle, long-term continuous observation, and an extra-long life cycle, with the characteristics of longevity, consistency, integrity, stability, and uniqueness. Moon-based Earth observation is suitable for monitoring large-scale geoscience phenomena, including large-scale atmospheric change, large-scale ocean change, large-scale land-surface dynamic change, and solid-Earth dynamic change. For the purpose of establishing a Moon-based Earth observation platform, we plan to study the following five aspects: mechanisms and models of Moon-based observation of macroscopic Earth-science phenomena; sensor parameter optimization and methods for Moon-based Earth observation; site selection and environment of Moon-based Earth observation; the Moon-based Earth observation platform; and the fundamental scientific framework of Moon-based Earth observation.

  11. Solving Large-scale Eigenvalue Problems in SciDACApplications

    SciTech Connect

    Yang, Chao

    2005-06-29

    Large-scale eigenvalue problems arise in a number of DOE applications. This paper provides an overview of the recent development of eigenvalue computation in the context of two SciDAC applications. We emphasize the importance of Krylov subspace methods, and point out their limitations. We discuss the value of alternative approaches that are more amenable to the use of preconditioners, and report on progress in using multi-level algebraic sub-structuring techniques to speed up eigenvalue calculations. In addition to methods for linear eigenvalue problems, we also examine new approaches to solving two types of non-linear eigenvalue problems arising from SciDAC applications.

  12. Management of large-scale technology

    NASA Technical Reports Server (NTRS)

    Levine, A.

    1985-01-01

    Two major themes are addressed in this assessment of the management of large-scale NASA programs: (1) how a high-technology agency fared during a decade marked by a rapid expansion of funds and manpower in the first half and an almost equally rapid contraction in the second; and (2) how NASA combined central planning and control with decentralized project execution.

  13. A Large Scale Computer Terminal Output Controller.

    ERIC Educational Resources Information Center

    Tucker, Paul Thomas

    This paper describes the design and implementation of a large scale computer terminal output controller which supervises the transfer of information from a Control Data 6400 Computer to a PLATO IV data network. It discusses the cost considerations leading to the selection of educational television channels rather than telephone lines for…

  14. Large Scale Commodity Clusters for Lattice QCD

    SciTech Connect

    A. Pochinsky; W. Akers; R. Brower; J. Chen; P. Dreher; R. Edwards; S. Gottlieb; D. Holmgren; P. Mackenzie; J. Negele; D. Richards; J. Simone; W. Watson

    2002-06-01

    We describe the construction of large scale clusters for lattice QCD computing being developed under the umbrella of the U.S. DoE SciDAC initiative. We discuss the study of floating point and network performance that drove the design of the cluster, and present our plans for future multi-Terascale facilities.

  15. Evaluating Large-Scale Interactive Radio Programmes

    ERIC Educational Resources Information Center

    Potter, Charles; Naidoo, Gordon

    2009-01-01

    This article focuses on the challenges involved in conducting evaluations of interactive radio programmes in South Africa with large numbers of schools, teachers, and learners. It focuses on the role such large-scale evaluation has played during the South African radio learning programme's development stage, as well as during its subsequent…

  16. ARPACK: Solving large scale eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Lehoucq, Rich; Maschhoff, Kristi; Sorensen, Danny; Yang, Chao

    2013-11-01

    ARPACK is a collection of Fortran77 subroutines designed to solve large scale eigenvalue problems. The package is designed to compute a few eigenvalues and corresponding eigenvectors of a general n by n matrix A. It is most appropriate for large sparse or structured matrices A where structured means that a matrix-vector product w
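
    The record is truncated above; for context, the same implicitly restarted Arnoldi machinery is what SciPy's sparse eigensolver wraps, so a minimal usage illustration can go through that interface rather than the Fortran77 reverse-communication API. The random sparse matrix below is only a placeholder.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigs   # SciPy's wrapper around ARPACK

n = 2000
A = sp.random(n, n, density=1e-3, random_state=0, format="csr") + sp.eye(n)

# compute the 6 eigenvalues of largest magnitude and their eigenvectors;
# only matrix-vector products with A are required, as in ARPACK proper
vals, vecs = eigs(A, k=6, which="LM")
print(np.sort(vals.real)[::-1])
```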

  17. Large-scale CFB combustion demonstration project

    SciTech Connect

    Nielsen, P.T.; Hebb, J.L.; Aquino, R.

    1998-07-01

    The Jacksonville Electric Authority's large-scale CFB demonstration project is described. Given the early stage of project development, the paper focuses on the project organizational structure, its role within the Department of Energy's Clean Coal Technology Demonstration Program, and the projected environmental performance. A description of the CFB combustion process is included.

  18. Large-scale CFB combustion demonstration project

    SciTech Connect

    Nielsen, P.T.; Hebb, J.L.; Aquino, R.

    1998-04-01

    The Jacksonville Electric Authority's large-scale CFB demonstration project is described. Given the early stage of project development, the paper focuses on the project organizational structure, its role within the Department of Energy's Clean Coal Technology Demonstration Program, and the projected environmental performance. A description of the CFB combustion process is included.

  19. Large scale structure in universes dominated by cold dark matter

    NASA Technical Reports Server (NTRS)

    Bond, J. Richard

    1986-01-01

    The theory of Gaussian random density field peaks is applied to a numerical study of the large-scale structure developing from adiabatic fluctuations in models of biased galaxy formation in universes with Omega = 1, h = 0.5 dominated by cold dark matter (CDM). The angular anisotropy of the cross-correlation function demonstrates that the far-field regions of cluster-scale peaks are asymmetric, as recent observations indicate. These regions will generate pancakes or filaments upon collapse. One-dimensional singularities in the large-scale bulk flow should arise in these CDM models, appearing as pancakes in position space. They are too rare to explain the CfA bubble walls, but pancakes that are just turning around now are sufficiently abundant and would appear to be thin walls normal to the line of sight in redshift space. Large scale streaming velocities are significantly smaller than recent observations indicate. To explain the reported 700 km/s coherent motions, mass must be significantly more clustered than galaxies with a biasing factor of less than 0.4 and a nonlinear redshift at cluster scales greater than one for both massive neutrino and cold models.

  20. Nonlinear Burn Control and Operating Point Optimization in ITER

    NASA Astrophysics Data System (ADS)

    Boyer, Mark; Schuster, Eugenio

    2013-10-01

    Control of the fusion power through regulation of the plasma density and temperature will be essential for achieving and maintaining desired operating points in fusion reactors and burning plasma experiments like ITER. In this work, a volume averaged model for the evolution of the density of energy, deuterium and tritium fuel ions, alpha-particles, and impurity ions is used to synthesize a multi-input multi-output nonlinear feedback controller for stabilizing and modulating the burn condition. Adaptive control techniques are used to account for uncertainty in model parameters, including particle confinement times and recycling rates. The control approach makes use of the different possible methods for altering the fusion power, including adjusting the temperature through auxiliary heating, modulating the density and isotopic mix through fueling, and altering the impurity density through impurity injection. Furthermore, a model-based optimization scheme is proposed to drive the system as close as possible to desired fusion power and temperature references. Constraints are considered in the optimization scheme to ensure that, for example, density and beta limits are avoided, and that optimal operation is achieved even when actuators reach saturation. Supported by the NSF CAREER award program (ECCS-0645086).

  1. Optimal spatiotemporal reduced order modeling for nonlinear dynamical systems

    NASA Astrophysics Data System (ADS)

    LaBryer, Allen

    Proposed in this dissertation is a novel reduced order modeling (ROM) framework called optimal spatiotemporal reduced order modeling (OPSTROM) for nonlinear dynamical systems. The OPSTROM approach is a data-driven methodology for the synthesis of multiscale reduced order models (ROMs) which can be used to enhance the efficiency and reliability of under-resolved simulations for nonlinear dynamical systems. In the context of nonlinear continuum dynamics, the OPSTROM approach relies on the concept of embedding subgrid-scale models into the governing equations in order to account for the effects due to unresolved spatial and temporal scales. Traditional ROMs neglect these effects, whereas most other multiscale ROMs account for these effects in ways that are inconsistent with the underlying spatiotemporal statistical structure of the nonlinear dynamical system. The OPSTROM framework presented in this dissertation begins with a general system of partial differential equations, which are modified for an under-resolved simulation in space and time with an arbitrary discretization scheme. Basic filtering concepts are used to demonstrate the manner in which residual terms, representing subgrid-scale dynamics, arise with a coarse computational grid. Models for these residual terms are then developed by accounting for the underlying spatiotemporal statistical structure in a consistent manner. These subgrid-scale models are designed to provide closure by accounting for the dynamic interactions between spatiotemporal macroscales and microscales which are otherwise neglected in a ROM. For a given resolution, the predictions obtained with the modified system of equations are optimal (in a mean-square sense) as the subgrid-scale models are based upon principles of mean-square error minimization, conditional expectations and stochastic estimation. Methods are suggested for efficient model construction, appraisal, error measure, and implementation with a couple of well-known time

  2. Optimization of microscopic and macroscopic second order optical nonlinearities

    NASA Technical Reports Server (NTRS)

    Marder, Seth R.; Perry, Joseph W.

    1993-01-01

    Nonlinear optical materials (NLO) can be used to extend the useful frequency range of lasers. Frequency generation is important for laser-based remote sensing and optical data storage. Another NLO effect, the electro-optic effect, can be used to modulate the amplitude, phase, or polarization state of an optical beam. Applications of this effect in telecommunications and in integrated optics include the impression of information on an optical carrier signal or routing of optical signals between fiber optic channels. In order to utilize these effects most effectively, it is necessary to synthesize materials which respond to applied fields very efficiently. In this talk, it will be shown how the development of a fundamental understanding of the science of nonlinear optics can lead to a rational approach to organic molecules and materials with optimized properties. In some cases, figures of merit for newly developed materials are more than an order of magnitude higher than those of currently employed materials. Some of these materials are being examined for phased-array radar and other electro-optic switching applications.

  3. Fractals and cosmological large-scale structure

    NASA Technical Reports Server (NTRS)

    Luo, Xiaochun; Schramm, David N.

    1992-01-01

    Observations of galaxy-galaxy and cluster-cluster correlations as well as other large-scale structure can be fit with a 'limited' fractal with dimension D of about 1.2. This is not a 'pure' fractal out to the horizon: the distribution shifts from power law to random behavior at some large scale. If the observed patterns and structures are formed through an aggregation growth process, the fractal dimension D can serve as an interesting constraint on the properties of the stochastic motion responsible for limiting the fractal structure. In particular, it is found that the observed fractal should have grown from two-dimensional sheetlike objects such as pancakes, domain walls, or string wakes. This result is generic and does not depend on the details of the growth process.

  4. Condition Monitoring of Large-Scale Facilities

    NASA Technical Reports Server (NTRS)

    Hall, David L.

    1999-01-01

    This document provides a summary of the research conducted for the NASA Ames Research Center under grant NAG2-1182 (Condition-Based Monitoring of Large-Scale Facilities). The information includes copies of view graphs presented at NASA Ames in the final Workshop (held during December of 1998), as well as a copy of a technical report provided to the COTR (Dr. Anne Patterson-Hine) subsequent to the workshop. The material describes the experimental design, collection of data, and analysis results associated with monitoring the health of large-scale facilities. In addition to this material, a copy of the Pennsylvania State University Applied Research Laboratory data fusion visual programming tool kit was also provided to NASA Ames researchers.

  5. Nonlinearly-constrained optimization using asynchronous parallel generating set search.

    SciTech Connect

    Griffin, Joshua D.; Kolda, Tamara Gibson

    2007-05-01

    Many optimization problems in computational science and engineering (CS&E) are characterized by expensive objective and/or constraint function evaluations paired with a lack of derivative information. Direct search methods such as generating set search (GSS) are well understood and efficient for derivative-free optimization of unconstrained and linearly-constrained problems. This paper addresses the more difficult problem of general nonlinear programming where derivatives for objective or constraint functions are unavailable, which is the case for many CS&E applications. We focus on penalty methods that use GSS to solve the linearly-constrained problems, comparing different penalty functions. A classical choice for penalizing constraint violations is the squared ℓ2 norm, ℓ2², which has advantages for derivative-based optimization methods. In our numerical tests, however, we show that exact penalty functions based on the ℓ1, ℓ2, and ℓ∞ norms converge to good approximate solutions more quickly and thus are attractive alternatives. Unfortunately, exact penalty functions are discontinuous and consequently introduce theoretical problems that degrade the final solution accuracy, so we also consider smoothed variants. Smoothed-exact penalty functions are theoretically attractive because they retain the differentiability of the original problem. Numerically, they are a compromise between the exact and ℓ2² penalties, i.e., they converge to a good solution somewhat quickly without sacrificing much solution accuracy. Moreover, the smoothing is parameterized and can potentially be adjusted to balance the two considerations. Since many CS&E optimization problems are characterized by expensive function evaluations, reducing the number of function evaluations is paramount, and the results of this paper show that exact and smoothed-exact penalty functions are well-suited to this task.
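
    The comparison in the abstract can be reproduced in miniature: wrap a constrained problem in either a smooth squared-ℓ2 penalty or an exact ℓ1 penalty and hand it to a derivative-free search. Nelder-Mead is used below only as a readily available stand-in for GSS, and the test problem and penalty schedule are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# toy problem: minimize (x-2)^2 + (y-1)^2 subject to c(x, y) = x^2 + y^2 - 1 <= 0
f = lambda z: (z[0] - 2.0) ** 2 + (z[1] - 1.0) ** 2
c = lambda z: z[0] ** 2 + z[1] ** 2 - 1.0
viol = lambda z: max(c(z), 0.0)

penalties = {
    "squared l2": lambda z, r: f(z) + r * viol(z) ** 2,   # smooth, but inexact
    "exact l1":   lambda z, r: f(z) + r * viol(z),        # nondifferentiable, exact
}

for name, pen in penalties.items():
    z = np.array([0.0, 0.0])
    for r in [1.0, 10.0, 100.0]:                          # modest penalty continuation
        z = minimize(lambda w: pen(w, r), z, method="Nelder-Mead").x  # derivative-free
    print(f"{name:10s} -> x = {z.round(3)}, constraint = {c(z):+.3e}")
```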

  6. Slow, large scales from fast, small ones in dispersive wave turbulence

    NASA Astrophysics Data System (ADS)

    Smith, Leslie; Waleffe, Fabian

    2000-11-01

    Dispersive wave turbulence in systems of geophysical interest (beta-plane, rotating, stratified and rotating-stratified flows) has been simulated with random, isotropic small scale forcing and hyper-viscosity. This can be thought of as a Langevin model of the small space-time scales only, with potential implications for climate modeling. In all cases, slow, coherent large scales are generated after long times of second order in the nonlinear time scale. These slow, large scales ultimately dominate the flows. Beta-plane and rotating flow results were reported earlier [PoF 11, 1608]. In stratified flows, the energy accumulates in a 1D vertically sheared flow at selected large scales. As the rotation rate is increased, a progressive transition toward generation of all large scale vortical zero modes (quasi-geostrophic 3D flow) is observed. For yet higher rotation rates, energy accumulates primarily in a 2D quasi-geostrophic flow (cyclonic vortices) at all large scales.

  7. Large-scale extraction of proteins.

    PubMed

    Cunha, Teresa; Aires-Barros, Raquel

    2002-01-01

    The production of foreign proteins using a selected host with the necessary posttranslational modifications is one of the key successes in modern biotechnology. This methodology allows the industrial production of proteins that otherwise are produced in small quantities. However, the separation and purification of these proteins from the fermentation media constitutes a major bottleneck for the widespread commercialization of recombinant proteins. The major production costs (50-90%) for a typical biological product reside in the purification strategy. There is a need for efficient, effective, and economic large-scale bioseparation techniques, to achieve high purity and high recovery, while maintaining the biological activity of the molecule. Aqueous two-phase systems (ATPS) allow process integration, as separation and concentration of the target protein are achieved simultaneously, with subsequent removal and recycling of the polymer. The ease of scale-up combined with the high partition coefficients obtained allow their potential application in large-scale downstream processing of proteins produced by fermentation. The equipment and the methodology for aqueous two-phase extraction of proteins on a large scale using mixer-settler and column contactors are described. The operation of the columns, either stagewise or differential, is summarized. A brief description of the methods used to account for mass transfer coefficients, hydrodynamics parameters of hold-up, drop size, and velocity, back mixing in the phases, and flooding performance, required for column design, is also provided. PMID:11876297

  8. Large scale processes in the solar nebula.

    NASA Astrophysics Data System (ADS)

    Boss, A. P.

    Most proposed chondrule formation mechanisms involve processes occurring inside the solar nebula, so the large scale (roughly 1 to 10 AU) structure of the nebula is of general interest for any chondrule-forming mechanism. Chondrules and Ca, Al-rich inclusions (CAIs) might also have been formed as a direct result of the large scale structure of the nebula, such as passage of material through high temperature regions. While recent nebula models do predict the existence of relatively hot regions, the maximum temperatures in the inner planet region may not be high enough to account for chondrule or CAI thermal processing, unless the disk mass is considerably greater than the minimum mass necessary to restore the planets to solar composition. Furthermore, it does not seem to be possible to achieve both rapid heating and rapid cooling of grain assemblages in such a large scale furnace. However, if the accretion flow onto the nebula surface is clumpy, as suggested by observations of variability in young stars, then clump-disk impacts might be energetic enough to launch shock waves which could propagate through the nebula to the midplane, thermally processing any grain aggregates they encounter, and leaving behind a trail of chondrules.

  9. Automatic Construction of Predictive Neuron Models through Large Scale Assimilation of Electrophysiological Data.

    PubMed

    Nogaret, Alain; Meliza, C Daniel; Margoliash, Daniel; Abarbanel, Henry D I

    2016-01-01

    We report on the construction of neuron models by assimilating electrophysiological data with large-scale constrained nonlinear optimization. The method implements interior point line parameter search to determine parameters from the responses to intracellular current injections of zebra finch HVC neurons. We incorporated these parameters into a nine ionic channel conductance model to obtain completed models which we then use to predict the state of the neuron under arbitrary current stimulation. Each model was validated by successfully predicting the dynamics of the membrane potential induced by 20-50 different current protocols. The dispersion of parameters extracted from different assimilation windows was studied. Differences in constraints from current protocols, stochastic variability in neuron output, and noise behave as a residual temperature which broadens the global minimum of the objective function to an ellipsoid domain whose principal axes follow an exponentially decaying distribution. The maximum likelihood expectation of extracted parameters was found to provide an excellent approximation of the global minimum and yields highly consistent kinetics for both neurons studied. Large scale assimilation absorbs the intrinsic variability of electrophysiological data over wide assimilation windows. It builds models in an automatic manner treating all data as equal quantities and requiring minimal additional insight. PMID:27605157

  10. Automatic Construction of Predictive Neuron Models through Large Scale Assimilation of Electrophysiological Data

    PubMed Central

    Nogaret, Alain; Meliza, C. Daniel; Margoliash, Daniel; Abarbanel, Henry D. I.

    2016-01-01

    We report on the construction of neuron models by assimilating electrophysiological data with large-scale constrained nonlinear optimization. The method implements interior point line parameter search to determine parameters from the responses to intracellular current injections of zebra finch HVC neurons. We incorporated these parameters into a nine ionic channel conductance model to obtain completed models which we then use to predict the state of the neuron under arbitrary current stimulation. Each model was validated by successfully predicting the dynamics of the membrane potential induced by 20–50 different current protocols. The dispersion of parameters extracted from different assimilation windows was studied. Differences in constraints from current protocols, stochastic variability in neuron output, and noise behave as a residual temperature which broadens the global minimum of the objective function to an ellipsoid domain whose principal axes follow an exponentially decaying distribution. The maximum likelihood expectation of extracted parameters was found to provide an excellent approximation of the global minimum and yields highly consistent kinetics for both neurons studied. Large scale assimilation absorbs the intrinsic variability of electrophysiological data over wide assimilation windows. It builds models in an automatic manner treating all data as equal quantities and requiring minimal additional insight. PMID:27605157