Science.gov

Sample records for large-scale nonlinear optimization

  1. Large scale nonlinear programming for the optimization of spacecraft trajectories

    NASA Astrophysics Data System (ADS)

    Arrieta-Camacho, Juan Jose

    Future research directions are identified, involving the automatic scheduling and optimization of trajectory correction maneuvers. The sensitivity information provided by the methodology is expected to be invaluable in such research pursuits. The collocation scheme and nonlinear programming algorithm presented in this work complement other existing methodologies by providing reliable and efficient numerical methods able to handle large-scale, nonlinear dynamic models.

  2. Large scale nonlinear numerical optimal control for finite element models of flexible structures

    NASA Technical Reports Server (NTRS)

    Shoemaker, Christine A.; Liao, Li-Zhi

    1990-01-01

    This paper discusses the development of large-scale numerical optimal control algorithms for nonlinear systems and their application to finite element models of structures. This work is based on our expansion of the differential dynamic programming (DDP) optimal control algorithm in the following steps: improvement of convergence for initial policies in non-convex regions, development of a numerically accurate penalty function approach for constrained DDP problems, and parallel processing on supercomputers. The expanded constrained DDP algorithm was applied to the control of a four-bay, two-dimensional truss with 12 soft members, which generates geometric nonlinearities. Using an explicit finite element model to describe the structural system requires 32 state variables and 10,000 time steps. Our numerical results indicate that for constrained or unconstrained structural problems with nonlinear dynamics, the results obtained by our expanded constrained DDP are significantly better than those obtained using linear-quadratic feedback control.

  3. On large-scale nonlinear programming techniques for solving optimal control problems

    SciTech Connect

    Faco, J.L.D.

    1994-12-31

    The formulation of decision problems via Optimal Control Theory allows their dynamic structure to be taken into account and their parameters to be estimated. This paper deals with techniques for choosing search directions in the iterative solution of discrete-time optimal control problems. A unified formulation incorporates nonlinear performance criteria and dynamic equations, time delays, bounded state and control variables, a free planning horizon, and a variable initial state vector. Such problems are generally characterized by a large number of variables, most notably when they arise from the discretization of continuous-time optimal control or calculus of variations problems. In a GRG context, the staircase structure of the Jacobian matrix of the dynamic equations is exploited in the choice of basic and superbasic variables and when changes of basis occur along the process. The search directions of the bound-constrained nonlinear programming problem in the reduced space of the superbasic variables are computed by large-scale NLP techniques. A modified Polak-Ribiere conjugate gradient method and a limited-storage quasi-Newton BFGS method are analyzed, and modifications to deal with the bounds on the variables are suggested, based on projected gradient devices with specific linesearches. Some practical models are presented for electric generation planning and fishery management, and the application of the code GRECO (Gradient REduit pour la Commande Optimale, i.e., reduced gradient for optimal control) is discussed.
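
    As a concrete illustration of the projected-gradient devices with linesearches mentioned in this record, the sketch below takes a gradient step on a small bound-constrained quadratic and projects it back onto the bounds with backtracking. The problem data and tolerances are invented; this illustrates the mechanics, not the GRECO code.

```python
# Minimal projected-gradient step with backtracking for min f(x), lo <= x <= hi.
# Illustrative stand-in for the bound-handling devices described above.
import numpy as np

Q = np.diag([1.0, 10.0, 100.0])          # toy convex quadratic
b = np.array([1.0, 1.0, 1.0])
lo, hi = np.zeros(3), np.full(3, 0.05)   # simple bounds

def f(x):
    return 0.5 * x @ Q @ x - b @ x

def grad(x):
    return Q @ x - b

x = np.full(3, 0.02)
for _ in range(100):
    g, t = grad(x), 1.0
    while True:                           # backtrack on the projected step
        x_new = np.clip(x - t * g, lo, hi)
        if f(x_new) <= f(x) - 1e-4 / t * np.sum((x_new - x) ** 2) or t < 1e-12:
            break
        t *= 0.5
    if np.linalg.norm(x_new - x) < 1e-12:
        break
    x = x_new
print(x)  # variables that hit a bound leave the free ("superbasic") set
```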

  4. Very Large Scale Optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)

    2002-01-01

    The purpose of this research, under the NASA Small Business Innovation Research program, was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s exterior penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem in excess of 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.
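
    The exterior penalty idea at the heart of this record is compact enough to sketch. The toy problem and penalty schedule below are invented for illustration; this is the textbook method, not the BIGDOT implementation.

```python
# Exterior penalty sketch: solve a sequence of unconstrained problems whose
# penalty on constraint violation grows, driving iterates toward feasibility.
import numpy as np
from scipy.optimize import minimize

def f(x):                       # invented objective
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

def g(x):                       # inequality constraints, g_i(x) <= 0
    return np.array([x[0] + x[1] - 2.0])

def penalized(x, r):            # feasible points pay no penalty
    return f(x) + r * np.sum(np.maximum(0.0, g(x)) ** 2)

x, r = np.zeros(2), 1.0
for _ in range(8):
    x = minimize(lambda z: penalized(z, r), x, method="BFGS").x
    r *= 10.0                   # tighten the penalty each outer iteration
print(x)                        # approaches the constrained optimum (0.5, 1.5)
```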

  5. Breaking Computational Barriers: Real-time Analysis and Optimization with Large-scale Nonlinear Models via Model Reduction

    SciTech Connect

    Carlberg, Kevin Thomas; Drohmann, Martin; Tuminaro, Raymond S.; Boggs, Paul T.; Ray, Jaideep; van Bloemen Waanders, Bart Gustaaf

    2014-10-01

    Model reduction for dynamical systems is a promising approach for reducing the computational cost of large-scale physics-based simulations to enable high-fidelity models to be used in many-query (e.g., Bayesian inference) and near-real-time (e.g., fast-turnaround simulation) contexts. While model reduction works well for specialized problems such as linear time-invariant systems, it is much more difficult to obtain accurate, stable, and efficient reduced-order models (ROMs) for systems with general nonlinearities. This report describes several advances that enable nonlinear ROMs to be deployed in a variety of time-critical settings. First, we present an error bound for the Gauss-Newton with Approximated Tensors (GNAT) nonlinear model reduction technique. This bound allows the state-space error for the GNAT method to be quantified when applied with the backward Euler time-integration scheme. Second, we present a methodology for preserving classical Lagrangian structure in nonlinear model reduction. This technique guarantees that important properties, such as energy conservation and symplectic time-evolution maps, are preserved when performing model reduction for models described by a Lagrangian formalism (e.g., molecular dynamics, structural dynamics). Third, we present a novel technique for decreasing the temporal complexity (defined as the number of Newton-like iterations performed over the course of the simulation) by exploiting time-domain data. Fourth, we describe a novel method for refining projection-based reduced-order models a posteriori using a goal-oriented framework similar to mesh-adaptive h-refinement in finite elements. The technique allows the ROM to generate arbitrarily accurate solutions, thereby providing the ROM with a 'failsafe' mechanism in the event of insufficient training data. Finally, we present the reduced-order-model error surrogate (ROMES) method for statistically quantifying reduced-order-model errors.

  6. Robust large-scale parallel nonlinear solvers for simulations.

    SciTech Connect

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using models other than Newton's: a lower-order model, Broyden's method, and a higher-order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian, or that have an inaccurate Jacobian, to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, the Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions under which Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and compute each step from a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any existing linear solver, which makes it simple to write
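
    A bare-bones rendering of the Broyden update discussed above, with a finite-difference initial Jacobian. The test system is invented and the loop has no globalization, so this is a sketch of the update formula rather than a robust solver.

```python
# Broyden's (good) method for F(x) = 0: the Jacobian approximation B is
# corrected by a rank-one secant update instead of being recomputed.
import numpy as np

def F(x):
    return np.array([x[0] + 2.0 * x[1] - 2.0,
                     x[0] ** 2 + 4.0 * x[1] ** 2 - 4.0])

def fd_jacobian(F, x, eps=1e-7):          # one-time finite-difference Jacobian
    f0, n = F(x), len(x)
    J = np.empty((n, n))
    for j in range(n):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (F(xp) - f0) / eps
    return J

x = np.array([1.0, 2.0])
B = fd_jacobian(F, x)
for _ in range(50):
    s = np.linalg.solve(B, -F(x))          # quasi-Newton step
    x_new = x + s
    y = F(x_new) - F(x)
    B += np.outer(y - B @ s, s) / (s @ s)  # Broyden rank-one update
    x = x_new
    if np.linalg.norm(F(x)) < 1e-12:
        break
print(x)                                   # approaches the root near (0, 1)
```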

  7. New methods for large scale local and global optimization

    NASA Astrophysics Data System (ADS)

    Byrd, Richard; Schnabel, Robert

    1994-07-01

    We have pursued all three topics described in the proposal during this research period. A large amount of effort has gone into the development of large-scale global optimization methods for molecular configuration problems. We have developed new general-purpose methods that combine efficient stochastic global optimization techniques with several new, more deterministic techniques that account for most of the computational effort, and the success, of the methods. We have applied our methods to Lennard-Jones problems with up to 75 atoms, to water clusters with up to 31 molecules, and to polymers with up to 58 amino acids. The results appear to be the best so far by general-purpose optimization methods, and appear to be leading to some interesting chemistry issues. Our research on the second topic, tensor methods, has addressed several areas. We have designed and implemented tensor methods for large sparse systems of nonlinear equations and nonlinear least squares, and have obtained excellent test results on a wide range of problems. We have also developed new tensor methods for nonlinearly constrained optimization problems, and have obtained promising theoretical and preliminary computational results. Finally, on the third topic, limited-memory methods for large-scale optimization, we have developed and implemented new, extremely efficient limited-memory methods for bound-constrained problems, and new limited-memory trust region methods, both using our recently developed compact representations for quasi-Newton matrices. Computational test results for both methods are promising.
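
    The limited-memory machinery mentioned above can be stated in a few lines. The following is the standard L-BFGS two-loop recursion (a close cousin of the compact representations referenced in this record), which applies the inverse-Hessian approximation implied by the stored curvature pairs without ever forming a matrix.

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Return -H_k @ grad, where H_k is the L-BFGS inverse-Hessian
    approximation built from the stored curvature pairs (s_i, y_i)."""
    q = grad.copy()
    stack = []
    for s, y in zip(reversed(s_list), reversed(y_list)):   # first loop
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        q -= a * y
        stack.append((a, rho, s, y))
    if s_list:                                             # initial H_0 scaling
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for a, rho, s, y in reversed(stack):                   # second loop
        b = rho * (y @ q)
        q += (a - b) * s
    return -q                                              # search direction
```

    Wrapped in a line search that appends each new (s, y) pair and drops the oldest, this yields the usual cost of O(mn) per iteration for m stored pairs and n variables.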

  8. Implicit solvers for large-scale nonlinear problems

    SciTech Connect

    Keyes, D E; Reynolds, D; Woodward, C S

    2006-07-13

    Computational scientists are grappling with increasingly complex, multi-rate applications that couple such physical phenomena as fluid dynamics, electromagnetics, radiation transport, chemical and nuclear reactions, and wave and material propagation in inhomogeneous media. Parallel computers with large storage capacities are paving the way for high-resolution simulations of coupled problems; however, hardware improvements alone will not prove enough to enable simulations based on brute-force algorithmic approaches. To accurately capture nonlinear couplings between dynamically relevant phenomena, often while stepping over rapid adjustments to quasi-equilibria, simulation scientists are increasingly turning to implicit formulations that require a discrete nonlinear system to be solved for each time step or steady state solution. Recent advances in iterative methods have made fully implicit formulations a viable option for solution of these large-scale problems. In this paper, we overview one of the most effective iterative methods, Newton-Krylov, for nonlinear systems and point to software packages with its implementation. We illustrate the method with an example from magnetically confined plasma fusion and briefly survey other areas in which implicit methods have bestowed important advantages, such as allowing high-order temporal integration and providing a pathway to sensitivity analyses and optimization. Lastly, we overview algorithm extensions under development motivated by current SciDAC applications.
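
    The Newton-Krylov idea surveyed in this record fits in a dozen lines: each Newton step solves J(x)s = -F(x) with GMRES, and the Jacobian-vector product is approximated by a finite difference, so the Jacobian is never formed. The two-equation system below is an invented toy example.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def F(x):                                   # invented two-equation system
    return np.array([x[0] ** 2 + x[1] - 3.0,
                     x[0] + x[1] ** 2 - 5.0])

x = np.ones(2)
for _ in range(20):
    fx = F(x)
    if np.linalg.norm(fx) < 1e-10:
        break
    def Jv(v, x=x, fx=fx, eps=1e-7):        # matrix-free Jacobian action
        return (F(x + eps * v) - fx) / eps
    J = LinearOperator((2, 2), matvec=Jv)
    s, info = gmres(J, -fx)                 # inner Krylov solve
    x = x + s
print(x)                                    # converges to the root (1, 2)
```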

  9. A competitive swarm optimizer for large scale optimization.

    PubMed

    Cheng, Ran; Jin, Yaochu

    2015-02-01

    In this paper, a novel competitive swarm optimizer (CSO) for large-scale optimization is proposed. The algorithm is fundamentally inspired by particle swarm optimization but is conceptually very different. In the proposed CSO, neither the personal best position of each particle nor the global best position (or neighborhood best positions) is involved in updating the particles. Instead, a pairwise competition mechanism is introduced, in which the particle that loses the competition updates its position by learning from the winner. To understand the search behavior of the proposed CSO, a theoretical proof of convergence is provided, together with an empirical analysis of its exploration and exploitation abilities, showing that the proposed CSO achieves a good balance between exploration and exploitation. Despite its algorithmic simplicity, our empirical results demonstrate that the proposed CSO exhibits better overall performance than five state-of-the-art metaheuristic algorithms on a set of widely used large-scale optimization problems, and is able to effectively solve problems of dimensionality up to 5000. PMID:24860047
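
    The update rule described above is simple enough to state in full. The sketch below follows the loser/winner scheme of the paper (loser velocities are nudged toward the winner and, with weight phi, toward the swarm mean); the parameter values and test function are illustrative.

```python
import numpy as np

def cso(f, dim=50, swarm=100, iters=500, phi=0.1, lo=-100.0, hi=100.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (swarm, dim))          # positions (swarm must be even)
    V = np.zeros((swarm, dim))                     # velocities
    for _ in range(iters):
        fit = np.array([f(x) for x in X])
        pairs = rng.permutation(swarm).reshape(-1, 2)   # random pairwise duels
        first_wins = fit[pairs[:, 0]] <= fit[pairs[:, 1]]
        winners = np.where(first_wins, pairs[:, 0], pairs[:, 1])
        losers = np.where(first_wins, pairs[:, 1], pairs[:, 0])
        xbar = X.mean(axis=0)                      # swarm mean position
        r1, r2, r3 = rng.random((3, len(losers), dim))
        V[losers] = (r1 * V[losers]
                     + r2 * (X[winners] - X[losers])    # learn from the winner
                     + phi * r3 * (xbar - X[losers]))   # weak pull to the mean
        X[losers] = np.clip(X[losers] + V[losers], lo, hi)
    return X[np.argmin([f(x) for x in X])]

best = cso(lambda x: float(np.sum(x ** 2)))        # sphere test function
print(np.sum(best ** 2))                           # decreases as iters grows
```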

  10. Quantum Noise in Large-Scale Coherent Nonlinear Photonic Circuits

    NASA Astrophysics Data System (ADS)

    Santori, Charles; Pelc, Jason S.; Beausoleil, Raymond G.; Tezak, Nikolas; Hamerly, Ryan; Mabuchi, Hideo

    2014-06-01

    A semiclassical simulation approach is presented for studying quantum noise in large-scale photonic circuits incorporating an ideal Kerr nonlinearity. A circuit solver is used to generate matrices defining a set of stochastic differential equations, in which the resonator field variables represent random samplings of the Wigner quasiprobability distributions. Although the semiclassical approach involves making a large-photon-number approximation, tests on one- and two-resonator circuits indicate satisfactory agreement between the semiclassical and full-quantum simulation results in the parameter regime of interest. The semiclassical model is used to simulate random errors in a large-scale circuit that contains 88 resonators and hundreds of components in total and functions as a four-bit ripple counter. The error rate as a function of on-state photon number is examined, and it is observed that the quantum fluctuation amplitudes do not increase as signals propagate through the circuit, an important property for scalability.

  11. The workshop on iterative methods for large scale nonlinear problems

    SciTech Connect

    Walker, H.F.; Pernice, M.

    1995-12-01

    The aim of the workshop was to bring together researchers working on large-scale applications with numerical specialists of various kinds. Applications that were addressed included reactive flows (combustion and other chemically reacting flows, tokamak modeling), porous media flows, cardiac modeling, chemical vapor deposition, image restoration, macromolecular modeling, and population dynamics. Numerical areas included Newton iterative (truncated Newton) methods, Krylov subspace methods, domain decomposition and other preconditioning methods, large-scale optimization and optimal control, and parallel implementations and software. This report offers a brief summary of workshop activities and information about the participants. Interested readers are encouraged to look at the online proceedings available at http://www.usi.utah.edu/logan.proceedings, where the material offered here is augmented with hypertext abstracts that include links to locations such as speakers' home pages, PostScript copies of talks and papers, cross-references to related talks, and other information about topics addressed at the workshop.

  12. Global smoothing and continuation for large-scale molecular optimization

    SciTech Connect

    More, J.J.; Wu, Zhijun

    1995-10-01

    We discuss the formulation of optimization problems that arise in the study of distance geometry, ionic systems, and molecular clusters. We show that continuation techniques based on global smoothing are applicable to these molecular optimization problems, and we outline the issues that must be resolved in the solution of large-scale molecular optimization problems.

  13. Decomposition and coordination of large-scale operations optimization

    NASA Astrophysics Data System (ADS)

    Cheng, Ruoyu

    Nowadays, highly integrated manufacturing has resulted in more and more large-scale industrial operations. As one of the most effective strategies to ensure high-level operations in modern industry, large-scale engineering optimization has garnered a great amount of interest from academic scholars and industrial practitioners. Large-scale optimization problems frequently occur in industrial applications, and many of them naturally present special structure or can be transformed to take special structure. Decomposition and coordination methods have the potential to solve these problems at a reasonable speed. This thesis focuses on three classes of large-scale optimization problems: linear programming, quadratic programming, and mixed-integer programming problems. The main contributions include the design of a structural complexity analysis for investigating the scaling behavior and computational efficiency of decomposition strategies; novel coordination techniques and algorithms to improve the convergence behavior of decomposition and coordination methods; and the development of a decentralized optimization framework that embeds the decomposition strategies in a distributed computing environment. The complexity study can provide fundamental guidelines for practical applications of the decomposition and coordination methods. In this thesis, several case studies indicate the viability of the proposed decentralized optimization techniques for real industrial applications. A pulp mill benchmark problem is used to investigate the applicability of the LP/QP decentralized optimization strategies, while a truck allocation problem in the decision support of mining operations is used to study the MILP decentralized optimization strategies.

  14. Nonlinear density fluctuation field theory for large scale structure

    NASA Astrophysics Data System (ADS)

    Zhang, Yang; Miao, Hai-Xing

    2009-05-01

    We develop an effective field theory of density fluctuations for a Newtonian self-gravitating N-body system in quasi-equilibrium and apply it to a homogeneous universe with small density fluctuations. Keeping the density fluctuations up to second order, we obtain the nonlinear field equation of the two-point correlation ξ(r), which contains the three-point correlation and formal ultraviolet divergences. By the Groth-Peebles hierarchical ansatz and mass renormalization, the equation becomes closed with two new terms beyond the Gaussian approximation, and their coefficients are taken as parameters. The analytic solution is obtained in terms of hypergeometric functions and checked numerically. With a single set of two fixed parameters, the correlation ξ(r) and the corresponding power spectrum P(k) simultaneously match the results from all the major surveys, such as APM, SDSS, 2dFGRS, and REFLEX. The model gives a unifying understanding of several seemingly unrelated features of large-scale structure from a field-theoretical perspective. The theory is worth extending to study evolution effects in an expanding universe.

  15. A multilevel optimization of large-scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Siljak, D. D.; Sundareshan, M. K.

    1976-01-01

    A multilevel feedback control scheme is proposed for optimization of large-scale systems composed of a number of (not necessarily weakly coupled) subsystems. Local controllers are used to optimize each subsystem, ignoring the interconnections. Then, a global controller may be applied to minimize the effect of interconnections and improve the performance of the overall system. At the cost of suboptimal performance, this optimization strategy ensures invariance of suboptimality and stability of the systems under structural perturbations whereby subsystems are disconnected and again connected during operation.

  16. Large-Scale Optimization for Bayesian Inference in Complex Systems

    SciTech Connect

    Willcox, Karen; Marzouk, Youssef

    2013-11-12

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower-dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their

  17. Geospatial Optimization of Siting Large-Scale Solar Projects

    SciTech Connect

    Macknick, J.; Quinby, T.; Caulfield, E.; Gerritsen, M.; Diffendorfer, J.; Haines, S.

    2014-03-01

    Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.

  18. Newton iterative methods for large scale nonlinear systems

    SciTech Connect

    Walker, H.F.; Turner, K.

    1993-01-01

    Objective is to develop robust, efficient Newton iterative methods for general large scale problems well suited for discretizations of partial differential equations, integral equations, and other continuous problems. A concomitant objective is to develop improved iterative linear algebra methods. We first outline research on Newton iterative methods and then review work on iterative linear algebra methods. (DLC)

  19. Efficient multiobjective optimization scheme for large scale structures

    NASA Astrophysics Data System (ADS)

    Grandhi, Ramana V.; Bharatram, Geetha; Venkayya, V. B.

    1992-09-01

    This paper presents a multiobjective optimization algorithm for the efficient design of large-scale structures. The algorithm is based on generalized compound scaling techniques to reach the intersection of multiple functions. Multiple objective functions are treated similarly to behavior constraints; thus, any number of objectives can be handled in the formulation. Pseudo-targets on the objectives are generated at each iteration in computing the scale factors. The algorithm develops a partial Pareto set. The method is computationally efficient because it does not solve many single-objective optimization problems in reaching the Pareto set. Its computational efficiency is compared with that of other multiobjective optimization methods, such as the weighting method and the global criterion method. Truss, plate, and wing structure design cases with stress and frequency considerations are presented to demonstrate the effectiveness of the method.

  20. Optimal Wind Energy Integration in Large-Scale Electric Grids

    NASA Astrophysics Data System (ADS)

    Albaijat, Mohammad H.

    The major concern in electric grid operation is operating in the most economical and reliable fashion, to ensure affordability and continuity of electricity supply. This dissertation investigates challenges that affect electric grid reliability and economic operations: 1. congestion of transmission lines, 2. transmission line expansion, 3. large-scale wind energy integration, and 4. optimal placement of Phasor Measurement Units (PMUs) for highest electric grid observability. Congestion analysis aids in evaluating the required increase of transmission line capacity in electric grids. However, it is necessary to evaluate the expansion of transmission line capacity with methods that ensure optimal electric grid operation. The expansion of transmission line capacity must therefore enable grid operators to provide low-cost electricity while maintaining reliable operation of the electric grid. Because congestion affects the reliability of delivering power and increases its cost, congestion analysis in electric grid networks is an important subject. Consequently, next-generation electric grids require novel methodologies for studying and managing congestion. We suggest a novel method of long-term congestion management in large-scale electric grids. Owing to the complexity and size of transmission line systems and the competitive nature of current grid operation, it is important for electric grid operators to determine how much transmission line capacity to add. The traditional questions requiring answers are "where" to add capacity, "how much" to add, and at "which" voltage level. Because of electric grid deregulation, transmission line expansion is more complicated, as it is now open to investors, whose main interest is to generate revenue, to build new transmission lines. Adding new transmission capacity will help the system to relieve transmission congestion, create

  1. The GRG approach for large-scale optimization

    SciTech Connect

    Drud, A.

    1994-12-31

    The Generalized Reduced Gradient (GRG) algorithm for general Nonlinear Programming (NLP) has been used successfully for over 25 years. The ideas of the original GRG algorithm have been modified and have absorbed developments in unconstrained optimization, linear programming, sparse matrix techniques, etc. The talk will review the essential aspects of the GRG approach and will discuss current development trends, especially related to very large models. Examples will be based on the CONOPT implementation.

  2. Cloud-based large-scale air traffic flow optimization

    NASA Astrophysics Data System (ADS)

    Cao, Yi

    The ever-increasing traffic demand makes the efficient use of airspace an imperative mission, and this paper presents an effort in response to this call. First, a new aggregate model, called the Link Transmission Model (LTM), is proposed, which models the nationwide traffic as a network of flight routes identified by origin-destination pairs. The traversal time of a flight route is taken to be the mode of the distribution of historical flight records, estimated using kernel density estimation. As this simplification abstracts away physical trajectory details, the complexity of modeling is drastically decreased, resulting in efficient traffic forecasting. The predictive capability of the LTM is validated against recorded traffic data. Second, a nationwide traffic flow optimization problem with airport and en-route capacity constraints is formulated based on the LTM. The optimization problem aims at alleviating traffic congestion with minimal global delays. This problem is intractable due to millions of variables. A dual decomposition method is applied to decompose the large-scale problem so that the subproblems are solvable. However, the whole problem is still computationally expensive to solve, since each subproblem is a smaller integer programming problem that pursues integer solutions. Solving an integer programming problem is known to be far more time-consuming than solving its linear relaxation. In addition, sequential execution on a standalone computer leads to a linear runtime increase as the problem size increases. To address the computational efficiency problem, a parallel computing framework is designed which accommodates concurrent executions via multithreaded programming. The multithreaded version is compared with its monolithic version to show decreased runtime. Finally, an open-source cloud computing framework, Hadoop MapReduce, is employed for better scalability and reliability. This framework is an "off-the-shelf" parallel computing model
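
    A toy rendering of the dual decomposition step described above (invented data, quadratic route costs instead of the paper's integer model): the coupling capacity constraint is priced by a Lagrange multiplier updated by subgradient ascent, and the route subproblems then decouple and solve in closed form.

```python
import numpy as np

cap = 10.0                          # shared en-route capacity (invented)
demand = np.array([8.0, 7.0])       # preferred flows of two route subproblems

lam, step = 0.0, 0.1                # congestion price and subgradient step
for _ in range(200):
    # Given lam, the subproblems decouple:
    # min_x (x - d)^2 + lam * x on [0, d]  =>  x = clip(d - lam/2, 0, d).
    x = np.clip(demand - lam / 2.0, 0.0, demand)
    lam = max(0.0, lam + step * (x.sum() - cap))    # price update
print(x, lam)                       # flows shrink until the capacity binds
```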

  3. Nonlinear Generation of shear flows and large scale magnetic fields by small scale turbulence in the ionosphere

    NASA Astrophysics Data System (ADS)

    Aburjania, G.

    2009-04-01

    EGU2009-233. Contact: George Aburjania, g.aburjania@gmail.com, aburj@mymail.ge

  4. Large-scale optimal sensor array management for target tracking

    NASA Astrophysics Data System (ADS)

    Tharmarasa, Ratnasingham; Kirubarajan, Thiagalingam; Hernandez, Marcel L.

    2004-01-01

    Large-scale sensor array management has applications in a number of target tracking problems. For example, in ground target tracking, hundreds or even thousands of unattended ground sensors (UGS) may be dropped over a large surveillance area. At any one time it may then only be possible to utilize a very small number of the available sensors at the fusion center because of bandwidth limitations. A similar situation may arise in tracking sea surface or underwater targets using a large number of sonobuoys. The general problem is then to select a subset of the available sensors in order to optimize tracking performance. The Posterior Cramer-Rao Lower Bound (PCRLB), which quantifies the obtainable accuracy of target state estimation, is used as the basis for network management. In a practical scenario with even hundreds of sensors, the number of possible sensor combinations would make it impossible to enumerate all possibilities in real-time. Efficient local (or greedy) search techniques must then be used to make the computational load manageable. In this paper we introduce an efficient search strategy for selecting a subset of the sensor array for use during each sensor change interval in multi-target tracking. Simulation results illustrating the performance of the sensor array manager are also presented.
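
    A minimal sketch of the greedy selection loop this record argues for, with a log-determinant information measure standing in for the PCRLB computation; the per-sensor information matrices are randomly invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors, dim, budget = 200, 4, 5
# One invented rank-one information matrix per candidate sensor.
H = rng.normal(size=(n_sensors, dim))
J = [np.outer(h, h) for h in H]

chosen, total = [], 1e-6 * np.eye(dim)    # small prior keeps log-det finite
for _ in range(budget):
    scores = [np.linalg.slogdet(total + J[i])[1] if i not in chosen
              else -np.inf for i in range(n_sensors)]
    best = int(np.argmax(scores))         # sensor with the largest info gain
    chosen.append(best)
    total += J[best]
print(sorted(chosen))
```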

  5. Operational optimization of large-scale parallel-unit SWRO desalination plant using differential evolution algorithm.

    PubMed

    Wang, Jian; Wang, Xiaolong; Jiang, Aipeng; Jiangzhou, Shu; Li, Ping

    2014-01-01

    A large-scale parallel-unit seawater reverse osmosis desalination plant contains many reverse osmosis (RO) units. If the operating conditions change, these RO units will not work at the optimal design points which are computed before the plant is built. The operational optimization problem (OOP) of the plant is to find out a scheduling of operation to minimize the total running cost when the change happens. In this paper, the OOP is modelled as a mixed-integer nonlinear programming problem. A two-stage differential evolution algorithm is proposed to solve this OOP. Experimental results show that the proposed method is satisfactory in solution quality. PMID:24701180

  6. Adaptive Fault-Tolerant Control of Uncertain Nonlinear Large-Scale Systems With Unknown Dead Zone.

    PubMed

    Chen, Mou; Tao, Gang

    2016-08-01

    In this paper, an adaptive neural fault-tolerant control scheme is proposed and analyzed for a class of uncertain nonlinear large-scale systems with unknown dead zone and external disturbances. To tackle the unknown nonlinear interaction functions in the large-scale system, a radial basis function neural network (RBFNN) is employed to approximate them. To further handle the unknown approximation errors and the effects of the unknown dead zone and external disturbances, integrated as compounded disturbances, corresponding disturbance observers are developed for their estimation. Based on the outputs of the RBFNN and the disturbance observers, the adaptive neural fault-tolerant control scheme is designed for uncertain nonlinear large-scale systems using a decentralized backstepping technique. The closed-loop stability of the adaptive control system is rigorously proved via Lyapunov analysis, and satisfactory tracking performance is achieved under the integrated effects of unknown dead zone, actuator fault, and unknown external disturbances. Simulation results of a mass-spring-damper system are given to illustrate the effectiveness of the proposed adaptive neural fault-tolerant control scheme for uncertain nonlinear large-scale systems. PMID:26340792

  7. Iterative methods for large scale nonlinear and linear systems. Final report, 1994-1996

    SciTech Connect

    Walker, H.F.

    1997-09-01

    The major goal of this research has been to develop improved numerical methods for the solution of large-scale systems of linear and nonlinear equations, such as occur almost ubiquitously in the computational modeling of physical phenomena. The numerical methods of central interest have been Krylov subspace methods for linear systems, which have enjoyed great success in many large-scale applications, and Newton-Krylov methods for nonlinear problems, which use Krylov subspace methods to solve approximately the linear systems that characterize Newton steps. Krylov subspace methods have undergone a remarkable development over the last decade or so and are now very widely used for the iterative solution of large-scale linear systems, particularly those that arise in the discretization of partial differential equations (PDEs) that occur in computational modeling. Newton-Krylov methods have enjoyed parallel success and are currently used in many nonlinear applications of great scientific and industrial importance. In addition to their effectiveness on important problems, Newton-Krylov methods also offer a nonlinear framework within which to transfer to the nonlinear setting any advances in Krylov subspace methods or preconditioning techniques, or new algorithms that exploit advanced machine architectures. This research has resulted in a number of improved Krylov and Newton-Krylov algorithms together with applications of these to important linear and nonlinear problems.

  8. Recent developments in large-scale structural optimization

    NASA Technical Reports Server (NTRS)

    Venkayya, Vipperla B.

    1989-01-01

    A brief discussion is given of mathematical optimization and the motivation for the development of more recent numerical search procedures. A review of recent developments and issues in multidisciplinary optimization is also presented. These developments are discussed in the context of the preliminary design of aircraft structures. A capability description of programs FASTOP, TSO, STARS, LAGRANGE, ELFINI and ASTROS is included.

  9. Solving Large Scale Nonlinear Eigenvalue Problem in Next-Generation Accelerator Design

    SciTech Connect

    Liao, Ben-Shan; Bai, Zhaojun; Lee, Lie-Quan; Ko, Kwok

    2006-09-28

    A number of numerical methods, including inverse iteration, the method of successive linear problems, and the nonlinear Arnoldi algorithm, are studied in this paper to solve a large-scale nonlinear eigenvalue problem arising from the finite element analysis of resonant frequencies and external Q_e values of a waveguide-loaded cavity in the next-generation accelerator design. The authors present a nonlinear Rayleigh-Ritz iterative projection algorithm, NRRIT for short, and demonstrate that it is the most promising approach for a model-scale cavity design. The NRRIT algorithm is an extension of the nonlinear Arnoldi algorithm due to Voss. Computational challenges of solving such a nonlinear eigenvalue problem for a full-scale cavity design are outlined.
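
    The method of successive linear problems mentioned above can be sketched directly: at each iterate s, solve the linear generalized eigenproblem T(s)x = mu T'(s)x and update s by the smallest correction mu. The small delay-type T(lam) below is invented; the cavity problem itself is far larger.

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
A0 = rng.normal(size=(4, 4))                 # invented small matrices
A1 = -np.eye(4)
A2 = 0.1 * rng.normal(size=(4, 4))

T = lambda lam: A0 + lam * A1 + np.exp(-lam) * A2    # seek T(lam) x = 0
dT = lambda lam: A1 - np.exp(-lam) * A2              # dT/dlam

s = 0.5 + 0.0j
for _ in range(30):
    # Linearize T about s: T(s + d) x ~ (T(s) + d T'(s)) x = 0, so the
    # generalized eigenvalue mu of T(s) x = mu T'(s) x gives d = -mu.
    mu_all, _ = eig(T(s), dT(s))
    mu = mu_all[np.argmin(np.abs(mu_all))]   # smallest correction
    s -= mu
    if abs(mu) < 1e-12:
        break
print(s, abs(np.linalg.det(T(s))))           # det(T(s)) ~ 0 at convergence
```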

  10. Small parametric model for nonlinear dynamics of large scale cyclogenesis with wind speed variations

    NASA Astrophysics Data System (ADS)

    Erokhin, Nikolay; Shkevov, Rumen; Zolnikova, Nadezhda; Mikhailovskaya, Ludmila

    2016-07-01

    We perform a numerical investigation of a self-consistent small parametric model (SPM) for regional large-scale cyclogenesis (RLSC), using coupled nonlinear equations for the mean wind speed and the ocean surface temperature in a tropical cyclone (TC). These equations can describe different scenarios for the temporal dynamics of a powerful atmospheric vortex during its full life cycle. The numerical calculations show that a suitable choice of the SPM's input parameters allows the seasonal behavior of regional large-scale cyclogenesis dynamics to be described for a given number of TCs during the active season. It is shown that the SPM can also describe the wind speed variations inside the TC. Thus, using the nonlinear small parametric model, it is possible to study the features of the RLSC's temporal dynamics during the active season in a given region and to analyze the relationship between regional cyclogenesis parameters and external factors such as space weather, including the solar activity level and cosmic ray variations.

  11. Analysis of some large-scale nonlinear stochastic dynamic systems with subspace-EPC method

    NASA Astrophysics Data System (ADS)

    Er, GuoKang; Iu, VaiPan

    2011-09-01

    The probabilistic solutions to some nonlinear stochastic dynamic (NSD) systems with various polynomial types of nonlinearities in displacements are analyzed with the subspace-exponential polynomial closure (subspace-EPC) method. The space of the state variables of the large-scale nonlinear stochastic dynamic system excited by Gaussian white noises is separated into two subspaces. Both sides of the Fokker-Planck-Kolmogorov (FPK) equation corresponding to the NSD system are then integrated over one of the subspaces. The FPK equation for the joint probability density function of the state variables in the other subspace is formulated. Therefore, FPK equations in low dimensions are obtained from the original high-dimensional FPK equation, and these low-dimensional FPK equations are solvable with the exponential polynomial closure method. Examples of multi-degree-of-freedom NSD systems with various polynomial types of nonlinearities in displacements are given to show the effectiveness of the subspace-EPC method in these cases.

  12. On the importance of nonlinear couplings in large-scale neutrino streams

    NASA Astrophysics Data System (ADS)

    Dupuy, Hélène; Bernardeau, Francis

    2015-08-01

    We propose a procedure to evaluate the impact of nonlinear couplings on the evolution of massive neutrino streams in the context of large-scale structure growth. Such streams can be described by general nonlinear conservation equations, derived from a multiple-flow perspective, which generalize the conservation equations of non-relativistic pressureless fluids. The relevance of the nonlinear couplings is quantified with the help of the eikonal approximation applied to the subhorizon limit of this system. It highlights the role played by the relative displacements of different cosmic streams and it specifies, for each flow, the spatial scales at which the growth of structure is affected by nonlinear couplings. We found that, at redshift zero, such couplings can be significant for wavenumbers as small as k=0.2 h/Mpc for most of the neutrino streams.

  13. Tensor-Krylov methods for solving large-scale systems of nonlinear equations.

    SciTech Connect

    Bader, Brett William

    2004-08-01

    This paper develops and investigates iterative tensor methods for solving large-scale systems of nonlinear equations. Direct tensor methods for nonlinear equations have performed especially well on small, dense problems where the Jacobian matrix at the solution is singular or ill-conditioned, which may occur when approaching turning points, for example. This research extends direct tensor methods to large-scale problems by developing three tensor-Krylov methods that base each iteration upon a linear model augmented with a limited second-order term, which provides information lacking in a (nearly) singular Jacobian. The advantage of the new tensor-Krylov methods over existing large-scale tensor methods is their ability to solve the local tensor model to a specified accuracy, which produces a more accurate tensor step. The performance of these methods in comparison to Newton-GMRES and tensor-GMRES is explored on three Navier-Stokes fluid flow problems. The numerical results provide evidence that tensor-Krylov methods are generally more robust and more efficient than Newton-GMRES on some important and difficult problems. In addition, the results show that the new tensor-Krylov methods and tensor-GMRES each perform better in certain situations.

  14. The compressed state Kalman filter for nonlinear state estimation: Application to large-scale reservoir monitoring

    NASA Astrophysics Data System (ADS)

    Li, Judith Yue; Kokkinaki, Amalia; Ghorbanidehno, Hojat; Darve, Eric F.; Kitanidis, Peter K.

    2015-12-01

    Reservoir monitoring aims to provide snapshots of reservoir conditions and their uncertainties to assist operation management and risk analysis. These snapshots may contain millions of state variables, e.g., pressures and saturations, which can be estimated by assimilating data in real time using the Kalman filter (KF). However, the KF has a computational cost that scales quadratically with the number of unknowns, m, due to the cost of computing and storing the covariance and Jacobian matrices, along with their products. The compressed state Kalman filter (CSKF) adapts the KF for solving large-scale monitoring problems. The CSKF uses N preselected orthogonal bases to compute an accurate rank-N approximation of the covariance that is close to the optimal spectral approximation given by SVD. The CSKF has a computational cost that scales linearly in m and uses an efficient matrix-free approach that propagates uncertainties using N + 1 forward model evaluations, where N≪m. Here we present a generalized CSKF algorithm for nonlinear state estimation problems such as CO2 monitoring. For simultaneous estimation of multiple types of state variables, the algorithm allows selecting bases that represent the variability of each state type. Through synthetic numerical experiments of CO2 monitoring, we show that the CSKF can reproduce the Kalman gain accurately even for large compression ratios (m/N). For a given computational cost, the CSKF uses a robust and flexible compression scheme that gives more reliable uncertainty estimates than the ensemble Kalman filter, which may display loss of ensemble variability leading to suboptimal uncertainty estimates.
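
    In outline, the compressed-state update can be written down for a fixed orthonormal basis V (m x N): the covariance is carried as V C V^T, the forecast uses N + 1 forward-model runs, and the Kalman update happens in the N-dimensional subspace. The sketch below is heavily simplified (fixed basis, no model noise, hypothetical f and h) and omits most of the paper's machinery.

```python
import numpy as np

def cskf_step(f, h, x, C, V, R, y, eps=1e-4):
    """One simplified forecast/update cycle.
    f: forward model (m -> m), h: observation model (m -> n_obs),
    x: state mean (m,), C: reduced covariance (N, N),
    V: fixed orthonormal basis (m, N), R: obs-noise covariance, y: data."""
    fx = f(x)
    # Forecast: propagate each basis direction, N + 1 model runs in total.
    FV = np.column_stack([(f(x + eps * v) - fx) / eps for v in V.T])  # ~ F @ V
    Fr = V.T @ FV                        # dynamics projected onto span(V)
    x, C = fx, Fr @ C @ Fr.T             # (model-error term omitted here)
    # Update: linearize h the same way and apply the Kalman formulas
    # with the covariance carried as V C V^T.
    hx = h(x)
    HV = np.column_stack([(h(x + eps * v) - hx) / eps for v in V.T])  # ~ H @ V
    S = HV @ C @ HV.T + R                # innovation covariance
    K = V @ C @ HV.T @ np.linalg.inv(S)  # Kalman gain (m, n_obs)
    x = x + K @ (y - hx)
    C = C - C @ HV.T @ np.linalg.solve(S, HV @ C)
    return x, C
```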

  15. Computation of Large-Scale Structure Jet Noise Sources With Weak Nonlinear Effects Using Linear Euler

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.; Hixon, Ray; Mankbadi, Reda R.

    2003-01-01

    An approximate technique is presented for the prediction of the large-scale turbulent structure sound source in a supersonic jet. A linearized Euler equations code is used to solve for the flow disturbances within and near a jet with a given mean flow. Assuming a normal mode composition for the wave-like disturbances, the linear radial profiles are used in an integration of the Navier-Stokes equations. This results in a set of ordinary differential equations representing the weakly nonlinear self-interactions of the modes along with their interaction with the mean flow. Solutions are then used to correct the amplitude of the disturbances that represent the source of large-scale turbulent structure sound in the jet.

  16. Performance of hybrid methods for large-scale unconstrained optimization as applied to models of proteins.

    PubMed

    Das, B; Meirovitch, H; Navon, I M

    2003-07-30

    Energy minimization plays an important role in structure determination and analysis of proteins, peptides, and other organic molecules; therefore, development of efficient minimization algorithms is important. Recently, Morales and Nocedal developed hybrid methods for large-scale unconstrained optimization that interlace iterations of the limited-memory BFGS method (L-BFGS) and the Hessian-free Newton method (Computat Opt Appl 2002, 21, 143-154). We test the performance of this approach as compared to those of the L-BFGS algorithm of Liu and Nocedal and the truncated Newton (TN) method with automatic preconditioner of Nash, as applied to the protein bovine pancreatic trypsin inhibitor (BPTI) and a loop of the protein ribonuclease A. These systems are described by the all-atom AMBER force field with a dielectric constant epsilon = 1 and a distance-dependent dielectric function epsilon = 2r, where r is the distance between two atoms. It is shown that for the optimal parameters the hybrid approach is typically two times more efficient in terms of CPU time and function/gradient calculations than the two other methods. The advantage of the hybrid approach increases as the electrostatic interactions become stronger, that is, in going from epsilon = 2r to epsilon = 1, which leads to a more rugged and probably more nonlinear potential energy surface. However, no general rule that defines the optimal parameters has been found, and their determination requires a relatively large number of trial-and-error calculations for each problem. PMID:12820130
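
    The interlacing idea is easy to mimic with library optimizers: alternate a cheap batch of L-BFGS iterations with a few Hessian-free truncated-Newton iterations, warm-starting each from the other's iterate. Here scipy and the Rosenbrock function stand in for the paper's minimizers and force fields.

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

x = np.full(50, -1.0)                         # stand-in "energy" landscape
for _ in range(10):
    # A cheap sweep of limited-memory BFGS iterations...
    x = minimize(rosen, x, jac=rosen_der, method="L-BFGS-B",
                 options={"maxiter": 30}).x
    # ...interlaced with a few Hessian-free truncated-Newton iterations.
    x = minimize(rosen, x, jac=rosen_der, method="Newton-CG",
                 options={"maxiter": 5}).x
print(rosen(x))                               # decreases toward 0
```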

  17. Simulation and Optimization of Large Scale Subsurface Environmental Impacts; Investigations, Remedial Design and Long Term Monitoring

    SciTech Connect

    Deschaine, L.M.

    2008-07-01

    The global impact on human health and the environment from large-scale chemical and radionuclide releases is well documented. Examples are the widespread release of radionuclides from the Chernobyl nuclear reactors, the mobilization of arsenic in Bangladesh, and the formation of Environmental Protection Agencies in the United States, Canada, and Europe. The fiscal costs of addressing and remediating these issues on a global scale are astronomical, but then so are the fiscal and human health costs of ignoring them. An integrated methodology for optimizing the response to these issues is needed. This work addresses the development of optimal policy design for large-scale, complex environmental issues. It discusses the development, capabilities, and application of a hybrid system of algorithms that optimizes the environmental response. It is important to note that 'optimization' does not refer solely to cost minimization, but to the effective and efficient balance of cost, performance, risk, management, and societal priorities, along with uncertainty analysis. This tool integrates all of these elements into a single decision framework. It provides a consistent approach to designing optimal solutions that are tractable, traceable, and defensible. The system is modular and scalable. It can be applied either as individual components or in total. By developing the approach in a complex-systems framework, the solution methodology represents a significant improvement over the non-optimal 'trial and error' approach to environmental response. Subsurface environmental processes are represented by linear and nonlinear, elliptic and parabolic equations. The state equations solved using numerical methods include multi-phase flow (water, soil gas, NAPL) and multicomponent transport (radionuclides, heavy metals, volatile organics, explosives, etc.). Genetic programming is used to generate the simulators either when simulation models do not exist, or to extend the

  18. Parallel supercomputing: Advanced methods, algorithms, and software for large-scale linear and nonlinear problems

    SciTech Connect

    Carey, G.F.; Young, D.M.

    1993-12-31

    The program outlined here is directed to research on methods, algorithms, and software for distributed parallel supercomputers. Of particular interest are finite element methods and finite difference methods together with sparse iterative solution schemes for scientific and engineering computations of very large-scale systems. Both linear and nonlinear problems will be investigated. In the nonlinear case, applications with bifurcation to multiple solutions will be considered using continuation strategies. The parallelizable numerical methods of particular interest are a family of partitioning schemes embracing domain decomposition, element-by-element strategies, and multi-level techniques. The methods will be further developed by incorporating parallel iterative solution algorithms with associated preconditioners into parallel computer software. The schemes will be implemented on distributed memory parallel architectures such as the CRAY MPP, Intel Paragon, the NCUBE3, and the Connection Machine. We will also consider other new architectures such as the Kendall-Square (KSQ) and proposed machines such as the TERA. The applications will focus on large-scale three-dimensional nonlinear flow and reservoir problems with strong convective transport contributions. These are legitimate grand-challenge-class computational fluid dynamics (CFD) problems of significant practical interest to DOE. The methods and algorithms developed will, however, be of wider interest.

  1. CMB lensing bispectrum from nonlinear growth of the large scale structure

    NASA Astrophysics Data System (ADS)

    Namikawa, Toshiya

    2016-06-01

    We discuss the detectability of the nonlinear growth of the large-scale structure in cosmic microwave background (CMB) lensing. The lensing signals involved in the CMB fluctuations have been measured by multiple CMB experiments, such as the Atacama Cosmology Telescope (ACT), Planck, POLARBEAR, and the South Pole Telescope (SPT). The reconstructed CMB lensing signals are useful for constraining cosmology via their angular power spectrum, while the detectability and cosmological application of their bispectrum, induced by the nonlinear evolution, are not well studied. Extending the analytic estimate of the galaxy lensing bispectrum presented by Takada and Jain (2004) to the CMB case, we show that even near-term CMB experiments such as Advanced ACT, Simons Array, and SPT3G could detect the CMB lensing bispectrum induced by the nonlinear growth of the large-scale structure. In the case of CMB Stage-IV, we find that the lensing bispectrum is detectable at ≳50σ statistical significance. This precisely measured lensing bispectrum has rich cosmological information and could be used to constrain cosmology, e.g., the sum of the neutrino masses and the dark-energy properties.

  2. Classification of large-scale stellar spectra based on the non-linearly assembling learning machine

    NASA Astrophysics Data System (ADS)

    Liu, Zhongbao; Song, Lipeng; Zhao, Wenjuan

    2016-02-01

    An important shortcoming of traditional classification methods is that they cannot deal with large-scale classification because of their very high time complexity. To solve this problem, and inspired by the idea of collaborative management, the non-linearly assembling learning machine (NALM) is proposed and applied to large-scale stellar spectral classification. In NALM, the large-scale dataset is first divided into several subsets; a traditional classifier such as the support vector machine (SVM) is then run on each subset; finally, the classification results on the subsets are assembled and the overall classification decision is obtained. In comparative experiments, we investigate the performance of NALM in stellar spectral subclass classification compared with SVM. We apply SVM and NALM respectively to classify four subclasses of K-type spectra, three subclasses of F-type spectra, and three subclasses of G-type spectra from the Sloan Digital Sky Survey (SDSS). The results show that the performance of NALM is much better than that of SVM in terms of classification accuracy and computation time.
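
    The divide/run/assemble loop described above reduces, in its simplest form, to training one SVM per subset and combining predictions by majority vote, as sketched below with synthetic data (NALM's actual non-linear assembling rule is more sophisticated).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=6000, n_features=20, n_classes=3,
                           n_informative=6, random_state=0)
Xtr, ytr, Xte, yte = X[:5000], y[:5000], X[5000:], y[5000:]

chunks = np.array_split(np.arange(len(Xtr)), 10)        # divide
models = [SVC(kernel="rbf").fit(Xtr[c], ytr[c]) for c in chunks]
votes = np.stack([m.predict(Xte) for m in models])      # run per subset
pred = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
print("accuracy:", (pred == yte).mean())                # assemble via vote
```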

  3. Towards a self-consistent halo model for the nonlinear large-scale structure

    NASA Astrophysics Data System (ADS)

    Schmidt, Fabian

    2016-03-01

    The halo model is a theoretically and empirically well-motivated framework for predicting the statistics of the nonlinear matter distribution in the Universe. However, current incarnations of the halo model suffer from two major deficiencies: (i) they do not enforce the stress-energy conservation of matter; (ii) they are not guaranteed to recover exact perturbation theory results on large scales. Here, we provide a formulation of the halo model (EHM) that remedies both drawbacks in a consistent way, while attempting to maintain the predictivity of the approach. In the formulation presented here, mass and momentum conservation are guaranteed on large scales, and results of the perturbation theory and the effective field theory can, in principle, be matched to any desired order on large scales. We find that a key ingredient in the halo model power spectrum is the halo stochasticity covariance, which has been studied to a much lesser extent than other ingredients such as mass function, bias, and profiles of halos. As written here, this approach still does not describe the transition regime between perturbation theory and halo scales realistically, which is left as an open problem. We also show explicitly that, when implemented consistently, halo model predictions do not depend on any properties of low-mass halos that are smaller than the scales of interest.
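
    For orientation, the conventional halo model from which such reformulations start writes the matter power spectrum as one- and two-halo terms built from the mass function n(m), the linear bias b(m) and the halo profiles u(k|m); the EHM modifies this structure, so the following is the standard starting point rather than the paper's final result:

      P(k) = P_{1h}(k) + P_{2h}(k),
      P_{1h}(k) = \int dm\, n(m) \left(\frac{m}{\bar{\rho}}\right)^2 |u(k|m)|^2,
      P_{2h}(k) = \left[ \int dm\, n(m)\, \frac{m}{\bar{\rho}}\, b(m)\, u(k|m) \right]^2 P_{\rm lin}(k).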

  4. Destruction of large-scale magnetic field in non-linear simulations of the shear dynamo

    NASA Astrophysics Data System (ADS)

    Teed, Robert J.; Proctor, Michael R. E.

    2016-05-01

    The Sun's magnetic field exhibits coherence in space and time on much larger scales than the turbulent convection that ultimately powers the dynamo. In the past the α-effect (mean-field) concept has been used to model the solar cycle, but recent work has cast doubt on the validity of the mean-field ansatz under solar conditions. This indicates that one should seek an alternative mechanism for generating large-scale structure. One possibility is the recently proposed `shear dynamo' mechanism, where large-scale magnetic fields are generated in the presence of a simple shear. Further investigation of this proposition is required, however, because work thus far has focused on the linear regime with a uniform shear profile. In this paper we report results of the extension of the original shear dynamo model into the non-linear regime. We find that whilst large-scale structure can initially persist into the saturated regime, in several of our simulations it is destroyed via a large increase in kinetic energy. This result casts doubt on the ability of the simple uniform shear dynamo mechanism to act as an alternative to the α-effect under solar conditions.

  5. Toward Optimal and Scalable Dimension Reduction Methods for large-scale Bayesian Inversions

    NASA Astrophysics Data System (ADS)

    Bousserez, N.; Henze, D. K.

    2015-12-01

    Many inverse problems in geophysics are solved within the Bayesian framework, in which a prior probability density function of a quantity of interest is optimally updated using newly available observations. The maximum of the posterior probability density function is estimated using a model of the physics that relates the variables to be optimized to the observations. However, in many practical situations the number of observations is much smaller than the number of variables estimated, which leads to an ill-posed problem. In practice, this means that the data are informative only in a subspace of the initial space. It is of both theoretical and practical interest to characterize this "data-informed" subspace, since it allows a simple interpretation of the inverse solution and its uncertainty, and can also dramatically reduce the computational cost of the optimization by reducing the size of the problem. In this presentation the formalism of dimension reduction in Bayesian methods will be introduced, and different optimality criteria will be discussed (e.g., minimum error variances, maximum degrees of freedom for signal). For each criterion, an optimal design for the reduced Bayesian problem will be proposed and compared with other suboptimal approaches. A significant advantage of our method is its high scalability owing to an efficient parallel implementation, making it very attractive for large-scale inverse problems. Numerical results from an Observing System Simulation Experiment (OSSE), consisting of a high spatial resolution (0.5° × 0.7°) source inversion of methane over North America using observations from the Greenhouse gases Observing SATellite (GOSAT) instrument and the GEOS-Chem chemistry-transport model, will illustrate the computational efficiency of our approach. Although only linear models are considered in this study, possible extensions to the non-linear case will also be discussed.
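
    A small dense-matrix sketch of the dimension-reduction idea: with a Gaussian prior N(x_pr, B) and linear observations y = Hx + e, e ~ N(0, R), the data-informed subspace is spanned by the leading eigenvectors of the prior-preconditioned Hessian, and truncating to rank r yields a reduced posterior update. This is illustrative only; the specific optimality criteria and the parallel implementation of the abstract are not reproduced.

      import numpy as np

      def reduced_map_update(H, R, B, x_pr, y, r):
          """Rank-r MAP update restricted to the data-informed subspace."""
          L = np.linalg.cholesky(B)                  # B = L L^T
          A = L.T @ H.T @ np.linalg.solve(R, H) @ L  # preconditioned Hessian
          w, V = np.linalg.eigh(A)
          keep = np.argsort(w)[::-1][:r]             # r most data-informed modes
          w, V = w[keep], V[:, keep]
          misfit = L.T @ H.T @ np.linalg.solve(R, y - H @ x_pr)
          # (I + A)^{-1} ~ I - V diag(w/(1+w)) V^T, exact if A has rank r.
          z = misfit - V @ ((w / (1.0 + w)) * (V.T @ misfit))
          return x_pr + L @ z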

  6. The topology of large-scale structure. II - Nonlinear evolution of Gaussian models

    NASA Technical Reports Server (NTRS)

    Melott, Adrian L.; Weinberg, David H.; Gott, J. Richard, III

    1988-01-01

    The evolution of non-Gaussian behavior in the large-scale universe from Gaussian initial conditions is studied. Topology measures developed in previous papers are applied to the smoothed initial, final, and biased matter distributions of cold dark matter, white noise, and massive neutrino simulations. When the smoothing length is approximately twice the mass correlation length or larger, the evolved models look like the initial conditions, suggesting that random phase hypotheses in cosmology can be tested with adequate data sets. When a smaller smoothing length is used, nonlinear effects are recovered, so nonlinear effects on topology can be detected in redshift surveys after smoothing at the mean intergalaxy separation. Hot dark matter models develop manifestly non-Gaussian behavior attributable to phase correlations, with a topology reminiscent of bubble or sheet distributions. Cold dark matter models remain Gaussian, and biasing does not disguise this.

  7. Galilean invariance and the consistency relation for the nonlinear squeezed bispectrum of large scale structure

    SciTech Connect

    Peloso, Marco; Pietroni, Massimo

    2013-05-01

    We discuss the constraints imposed on the nonlinear evolution of the Large Scale Structure (LSS) of the universe by Galilean invariance, the symmetry relevant on subhorizon scales. Using Ward identities associated with the invariance, we derive fully nonlinear consistency relations between statistical correlators of the density and velocity perturbations, such as the power spectrum and the bispectrum. These relations are valid up to O(f_NL^2) corrections. We then show that most of the semi-analytic methods proposed so far to resum the perturbative expansion of the LSS dynamics fail to fulfill the constraints imposed by Galilean invariance, and are therefore susceptible to non-physical infrared effects. Finally, we identify and discuss a nonperturbative semi-analytical scheme which is manifestly Galilean invariant at any order of its expansion.

  8. Nonlinear Seismic Correlation Analysis of the JNES/NUPEC Large-Scale Piping System Tests.

    SciTech Connect

    Nie, J.; DeGrassi, G.; Hofmayer, C.; Ali, S.

    2008-06-01

    The Japan Nuclear Energy Safety Organization/Nuclear Power Engineering Corporation (JNES/NUPEC) large-scale piping test program has provided valuable new test data on high-level seismic elasto-plastic behavior and failure modes for typical nuclear power plant piping systems. The component and piping system tests demonstrated the strain ratcheting behavior that is expected to occur when a pressurized pipe is subjected to cyclic seismic loading. Under a collaboration agreement between the US and Japan on seismic issues, the US Nuclear Regulatory Commission (NRC)/Brookhaven National Laboratory (BNL) performed a correlation analysis of the large-scale piping system tests using detailed state-of-the-art nonlinear finite element models. Techniques are introduced to develop material models that can closely match the test data. The shaking table motions are examined. The analytical results are assessed in terms of the overall system responses and the strain ratcheting behavior at an elbow. The paper concludes with insights about the accuracy of the analytical methods for use in performance assessments of highly nonlinear piping systems under large seismic motions.

  9. From Self-consistency to SOAR: Solving Large Scale Nonlinear Eigenvalue Problems

    SciTech Connect

    Bai, Zhaojun; Yang, Chao

    2006-02-01

    What is common among electronic structure calculation, design of MEMS devices, vibrational analysis of high-speed railways, and simulation of the electromagnetic field of a particle accelerator? The answer: they all require solving large scale nonlinear eigenvalue problems. In fact, these are just a handful of examples in which solving nonlinear eigenvalue problems accurately and efficiently is becoming increasingly important. Recognizing the importance of this class of problems, an invited minisymposium dedicated to nonlinear eigenvalue problems was held at the 2005 SIAM Annual Meeting. The purpose of the minisymposium was to bring together numerical analysts and application scientists to showcase some of the cutting edge results from both communities and to discuss the challenges they are still facing. The minisymposium consisted of eight talks divided into two sessions. The first three talks focused on a type of nonlinear eigenvalue problem arising from electronic structure calculations. In this type of problem, the matrix Hamiltonian H depends, in a non-trivial way, on the set of eigenvectors X to be computed. The invariant subspace spanned by these eigenvectors also minimizes a total energy function that is highly nonlinear with respect to X on a manifold defined by a set of orthonormality constraints. In other applications, the nonlinearity of the matrix eigenvalue problem is restricted to the dependency of the matrix on the eigenvalues to be computed. These problems are often called polynomial or rational eigenvalue problems. In the second session, Christian Mehl from Technical University of Berlin described numerical techniques for solving a special type of polynomial eigenvalue problem arising from vibration analysis of rail tracks excited by high-speed trains.
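
    As a small concrete instance of the polynomial class mentioned above, a quadratic eigenvalue problem (lambda^2 M + lambda C + K) x = 0, typical of vibration analysis, is routinely linearized into a generalized eigenproblem of twice the size. A sketch with invented matrices, using SciPy's dense solver:

      import numpy as np
      from scipy.linalg import eig

      def quadratic_eig(M, C, K):
          """Solve (lam^2 M + lam C + K) x = 0 by companion linearization."""
          n = K.shape[0]
          Z, I = np.zeros((n, n)), np.eye(n)
          # First companion form: A z = lam B z with z = [x; lam x].
          A = np.block([[Z, I], [-K, -C]])
          B = np.block([[I, Z], [Z, M]])
          lam, V = eig(A, B)
          return lam, V[:n, :]   # eigenvalues and the x-part of eigenvectors

      # Tiny mass/damping/stiffness example:
      M = np.eye(2)
      C = 0.1 * np.eye(2)
      K = np.array([[2.0, -1.0], [-1.0, 2.0]])
      lam, X = quadratic_eig(M, C, K)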

  10. Large-scale computational simulation for optimal design of curved piezoelectric actuator using composite material

    NASA Astrophysics Data System (ADS)

    Chung, Soon Wan; Hwang, In Seong; Kim, Seung Jo

    2004-07-01

    In this paper, the electromechanical displacements of curved piezoelectric actuators with laminated composite material are calculated using high-performance computing technology, and the optimal configuration of the composite curved actuator is proposed. To predict the pre-stress in the device due to the mismatch in coefficients of thermal expansion, carbon-epoxy and glass-epoxy as well as PZT ceramic are numerically modeled using hexahedral solid elements. Because modeling these thin layers causes the number of degrees of freedom to increase, large-scale structural analyses are performed on the PEGASUS supercomputer, which is composed of 400 Intel Xeon CPUs. In the first stage, the curved shape of the actuator and the internal stress in each layer are obtained by the cured curvature analysis. Subsequently, the displacement due to the piezoelectric force under an applied voltage is also calculated, and the performance of the composite curved actuator is investigated by comparing the displacements for different configurations of the actuator. To consider the finite deformation in the first stage and include the pre-stress of each layer in the second analysis stage, nonlinear finite element analyses will be carried out. The thickness and the elastic constants of the laminated composite are chosen as design factors.

  11. Missing link: A nonlinear post-Friedmann framework for small and large scales

    NASA Astrophysics Data System (ADS)

    Milillo, Irene; Bertacca, Daniele; Bruni, Marco; Maselli, Andrea

    2015-07-01

    We present a nonlinear post-Friedmann framework for structure formation, generalizing to cosmology the weak-field (post-Minkowskian) approximation, unifying the treatment of small and large scales. We consider a universe filled with a pressureless fluid and a cosmological constant Λ; the theory of gravity is Einstein's general relativity and the background is the standard flat ΛCDM cosmological model. We expand the metric and the energy-momentum tensor in powers of 1/c, keeping the matter density and peculiar velocity as exact fundamental variables. We assume the Poisson gauge, including scalar and tensor modes up to order 1/c^4 and vector modes up to 1/c^5 terms. Through a redefinition of the scalar potentials as a resummation of the metric contributions at different orders, we obtain a complete set of nonlinear equations, providing a unified framework to study structure formation from small to superhorizon scales, from the nonlinear Newtonian to the linear relativistic regime. We explicitly show the validity of our scheme in the two limits: at leading order we recover the fully nonlinear equations of Newtonian cosmology; when linearized, our equations become those for scalar and vector modes of first-order relativistic perturbation theory in the Poisson gauge. Tensor modes are nondynamical at the 1/c^4 order we consider (gravitational waves only appear at higher order): they are purely nonlinear and describe a distortion of the spatial slices determined at this order by a constraint, quadratic in the scalar and vector variables. The main results of our analysis are as follows: (a) at leading order a purely Newtonian nonlinear energy current sources a frame-dragging gravitomagnetic vector potential, and (b) in the leading-order Newtonian regime and in the linear relativistic regime the two scalar metric potentials are the same, while the nonlinearity of general relativity makes them different. Possible applications of our formalism include the calculations

  12. Solving Large-scale Spatial Optimization Problems in Water Resources Management through Spatial Evolutionary Algorithms

    NASA Astrophysics Data System (ADS)

    Wang, J.; Cai, X.

    2007-12-01

    A water resources system can be defined as a large-scale spatial system within which a distributed ecological system interacts with the stream network and the groundwater system. In water resources management, the causative factors and hence the solutions to be developed have a significant spatial dimension. This motivates a modeling analysis of water resources management within a spatial analytical framework, where data are usually geo-referenced and in the form of a map. One of the important functions of geographic information systems (GIS) is to identify spatial patterns of environmental variables. The role of spatial patterns in water resources management has been well established in the literature, particularly regarding how to design better spatial patterns for satisfying the designated objectives of water resources management. Evolutionary algorithms (EA) have been demonstrated to be successful in solving complex optimization models for water resources management due to their flexibility in incorporating complex simulation models in the optimal search procedure. The idea of combining GIS and EA motivates the development and application of spatial evolutionary algorithms (SEA). SEA assimilates spatial information into EA, and even changes the representation and operators of EA. In an EA used for water resources management, the mathematical optimization model should be modified to account for spatial patterns; however, spatial patterns are usually implicit, and it is difficult to impose appropriate patterns on spatial data. It is also difficult to express complex spatial patterns through explicit constraints in the EA. GIS can help identify spatial linkages and correlations based on spatial knowledge of the problem. These linkages are incorporated in the fitness function as a preference for compatible vegetation distributions. Unlike a regular GA for spatial models, the SEA employs a special hierarchical hyper-population and spatial genetic operators

  13. Large-Scale Structure Formation: From the First Non-linear Objects to Massive Galaxy Clusters

    NASA Astrophysics Data System (ADS)

    Planelles, S.; Schleicher, D. R. G.; Bykov, A. M.

    2015-05-01

    The large-scale structure of the Universe formed from initially small perturbations in the cosmic density field, leading to galaxy clusters with up to 10^15 M⊙ at the present day. Here, we review the formation of structures in the Universe, considering the first primordial galaxies and the most massive galaxy clusters as extreme cases of structure formation where fundamental processes such as gravity, turbulence, cooling and feedback are particularly relevant. The first non-linear objects in the Universe formed in dark matter halos with 10^5-10^8 M⊙ at redshifts 10-30, leading to the first stars and massive black holes. At later stages, larger scales became non-linear, leading to the formation of galaxy clusters, the most massive objects in the Universe. We describe here their formation via gravitational processes, including the self-similar scaling relations, as well as the observed deviations from such self-similarity and the related non-gravitational physics (cooling, stellar feedback, AGN). While on intermediate cluster scales the self-similar model is in good agreement with the observations, deviations from such self-similarity are apparent in the core regions, where numerical simulations do not reproduce the current observational results. The latter indicates that the interaction of different feedback processes may not be correctly accounted for in current simulations. Both in the most massive clusters of galaxies as well as during the formation of the first objects in the Universe, turbulent structures and shock waves appear to be common, suggesting them to be ubiquitous in the non-linear regime.

  14. A Limited-Memory BFGS Algorithm Based on a Trust-Region Quadratic Model for Large-Scale Nonlinear Equations

    PubMed Central

    Li, Yong; Yuan, Gonglin; Wei, Zengxin

    2015-01-01

    In this paper, a trust-region algorithm is proposed for large-scale nonlinear equations, where the limited-memory BFGS (L-M-BFGS) update matrix is used in the trust-region subproblem to improve the effectiveness of the algorithm for large-scale problems. The global convergence of the presented method is established under suitable conditions. The numerical results of the test problems show that the method is competitive with the norm method. PMID:25950725
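
    The L-M-BFGS update at the core of such methods is usually applied through the classic two-loop recursion, which produces the action of the inverse Hessian approximation on a vector without ever forming a matrix. The sketch below is the generic recursion, not the paper's trust-region variant:

      import numpy as np

      def lbfgs_apply(g, s_list, y_list):
          """Return H g, where H is the L-BFGS inverse Hessian implied by
          the stored pairs s_i = x_{i+1} - x_i, y_i = g_{i+1} - g_i."""
          q = g.copy()
          alphas = []
          for s, y in zip(reversed(s_list), reversed(y_list)):
              a = s.dot(q) / y.dot(s)
              alphas.append(a)
              q -= a * y
          if s_list:  # scale the initial Hessian approximation
              s, y = s_list[-1], y_list[-1]
              q *= s.dot(y) / y.dot(y)
          for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):
              b = y.dot(q) / y.dot(s)
              q += (a - b) * s
          return q   # the quasi-Newton direction is -q for gradient g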

  15. Optimization of large-scale heterogeneous system-of-systems models.

    SciTech Connect

    Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane; Lee, Herbert K. H.; Hart, William Eugene; Gray, Genetha Anne; Woodruff, David L.

    2012-01-01

    Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.

  16. Mathematical methods in material science and large scale optimization workshops: Final report, June 1, 1995-November 30, 1996

    SciTech Connect

    Friedman, A.

    1996-12-01

    The summer program in Large Scale Optimization concentrated largely on process engineering, aerospace engineering, inverse problems and optimal design, and molecular structure and protein folding. The program brought together application people, optimizers, and mathematicians with interest in learning about these topics. Three proceedings volumes are being prepared. The year in Materials Sciences deals with disordered media and percolation, phase transformations, composite materials, microstructure; topological and geometric methods as well as statistical mechanics approach to polymers (included were Monte Carlo simulation for polymers); miscellaneous other topics such as nonlinear optical material, particulate flow, and thin film. All these activities saw strong interaction among material scientists, mathematicians, physicists, and engineers. About 8 proceedings volumes are being prepared.

  17. Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems

    SciTech Connect

    Ghattas, Omar

    2013-10-15

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.

  18. Using Agent Base Models to Optimize Large Scale Network for Large System Inventories

    NASA Technical Reports Server (NTRS)

    Shameldin, Ramez Ahmed; Bowling, Shannon R.

    2010-01-01

    The aim of this paper is to use agent-based models (ABM) to optimize the network handling capabilities of large system inventories and to implement strategies for the purpose of reducing capital expenses. The models used in this paper either use computational algorithms or procedure implementations developed in Matlab to simulate agent-based models in a principal programming language and mathematical theory; these run on clusters that provide high-performance computation for executing the program in parallel. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.

  19. Large scale test simulations using the Virtual Environment for Test Optimization (VETO)

    SciTech Connect

    Klenke, S.E.; Heffelfinger, S.R.; Bell, H.J.; Shierling, C.L.

    1997-10-01

    The Virtual Environment for Test Optimization (VETO) is a set of simulation tools under development at Sandia to enable test engineers to do computer simulations of tests. The tool set utilizes analysis codes and test information to optimize design parameters and to provide an accurate model of the test environment, which aids in the maximization of test performance, training, and safety. Previous VETO effort has included the development of two structural dynamics simulation modules that provide design and optimization tools for modal and vibration testing. These modules have allowed test engineers to model and simulate complex laboratory testing, to evaluate dynamic response behavior, and to investigate system testability. Further development of the VETO tool set will address the accurate modeling of large-scale field test environments at Sandia. These field test environments provide weapon system certification capabilities and have different simulation requirements than those of laboratory testing.

  20. Integration of Large-Scale Optimization and Game Theory for Sustainable Water Quality Management

    NASA Astrophysics Data System (ADS)

    Tsao, J.; Li, J.; Chou, C.; Tung, C.

    2009-12-01

    Sustainable water quality management requires total mass control of pollutant discharge, based both on the principle of not exceeding the assimilative capacity of a river and on equity among generations. The stream assimilative capacity is the carrying capacity of a river for the maximum waste load that does not violate the water quality standard, and the spirit of total mass control is to optimize the waste load allocation among subregions. Toward the goal of sustainable watershed development, this study uses large-scale optimization theory to optimize profit and to find the marginal values of loadings as a reference for a fair price; the best way to reach equilibrium through water quality trading across the whole watershed is then determined. Game theory, in turn, plays an important role in maximizing both individual and overall profits. This study shows that a water quality trading market is viable in some situations and enables all participants to obtain a better outcome.

  1. A new approach for optimal VAR sources planning in large scale electric power systems

    SciTech Connect

    Yingtung Hsiao; Chunchang Liu; Yuanlin Chen; Hsiodong Chiang

    1993-08-01

    This paper presents a new approach for contingency-constrained optimal reactive volt-ampere (VAR) source planning in large-scale power systems. Features distinguishing the proposed approach from many of the existing methods include a more realistic problem formulation and the ability to find the (global) optimal solution. The new problem formulation takes into consideration practical aspects of VAR sources, the load constraints, and operational constraints at different load levels. The solution methodology, based on simulated annealing, determines (1) the locations at which to install VAR sources; (2) the types and sizes of VAR sources to be installed; and (3) the settings of the VAR sources at different loading conditions. To speed up the solution algorithm, this paper makes a slight modification to the fast decoupled load flow and incorporates it into the solution algorithm. The method is suitable for large-scale power systems and has been tested on several power systems with promising results. Simulation results for the IEEE 30-bus system and the Tai-power 358-bus system are presented.
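
    A generic simulated annealing loop of the kind this methodology builds on, minimizing an abstract planning cost over discrete installation decisions; cost() and neighbor() are placeholders for the paper's load-flow-based evaluation and its move set (locations, types/sizes, settings):

      import math, random

      def simulated_annealing(x0, cost, neighbor, T0=1.0, alpha=0.95,
                              n_outer=100, n_inner=50, seed=0):
          rng = random.Random(seed)
          x, fx = x0, cost(x0)
          best, fbest = x, fx
          T = T0
          for _ in range(n_outer):          # geometric cooling schedule
              for _ in range(n_inner):
                  y = neighbor(x, rng)      # e.g. move or resize one VAR source
                  fy = cost(y)
                  # Always accept improvements; accept uphill moves with
                  # Boltzmann probability exp(-(fy - fx)/T).
                  if fy < fx or rng.random() < math.exp((fx - fy) / T):
                      x, fx = y, fy
                      if fx < fbest:
                          best, fbest = x, fx
              T *= alpha
          return best, fbest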

  2. A modular approach to large-scale design optimization of aerospace systems

    NASA Astrophysics Data System (ADS)

    Hwang, John T.

    Gradient-based optimization and the adjoint method form a synergistic combination that enables the efficient solution of large-scale optimization problems. Though the gradient-based approach struggles with non-smooth or multi-modal problems, the capability to efficiently optimize up to tens of thousands of design variables provides a valuable design tool for exploring complex tradeoffs and finding unintuitive designs. However, the widespread adoption of gradient-based optimization is limited by the implementation challenges for computing derivatives efficiently and accurately, particularly in multidisciplinary and shape design problems. This thesis addresses these difficulties in two ways. First, to deal with the heterogeneity and integration challenges of multidisciplinary problems, this thesis presents a computational modeling framework that solves multidisciplinary systems and computes their derivatives in a semi-automated fashion. This framework is built upon a new mathematical formulation developed in this thesis that expresses any computational model as a system of algebraic equations and unifies all methods for computing derivatives using a single equation. The framework is applied to two engineering problems: the optimization of a nanosatellite with 7 disciplines and over 25,000 design variables; and simultaneous allocation and mission optimization for commercial aircraft involving 330 design variables, 12 of which are integer variables handled using the branch-and-bound method. In both cases, the framework makes large-scale optimization possible by reducing the implementation effort and code complexity. The second half of this thesis presents a differentiable parametrization of aircraft geometries and structures for high-fidelity shape optimization. Existing geometry parametrizations are not differentiable, or they are limited in the types of shape changes they allow. This is addressed by a novel parametrization that smoothly interpolates aircraft
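
    The efficiency that makes tens of thousands of design variables tractable comes from the adjoint method: for a state u defined implicitly by residuals R(x, u) = 0 and an objective f(x, u), the total derivative is assembled as below at a cost independent of the number of design variables (the generic statement of the method, not the thesis's unified notation):

      \frac{df}{dx} = \frac{\partial f}{\partial x} - \psi^T \frac{\partial R}{\partial x},
      \qquad \left(\frac{\partial R}{\partial u}\right)^T \psi = \left(\frac{\partial f}{\partial u}\right)^T.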

  3. Newton iterative methods for large scale nonlinear systems. Progress report, 1992--1993

    SciTech Connect

    Walker, H.F.; Turner, K.

    1993-06-01

    The objective is to develop robust, efficient Newton iterative methods for general large scale problems well suited for discretizations of partial differential equations, integral equations, and other continuous problems. A concomitant objective is to develop improved iterative linear algebra methods. We first outline research on Newton iterative methods and then review work on iterative linear algebra methods. (DLC)
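
    SciPy now ships a matrix-free Newton-Krylov solver in exactly this spirit; a minimal example on a discretized nonlinear boundary-value problem (illustrative only, unrelated to the report's own codes):

      import numpy as np
      from scipy.optimize import newton_krylov

      def residual(u):
          # 1-D Bratu-type problem u'' + exp(u) = 0 with u = 0 at both ends.
          n = u.size
          h = 1.0 / (n + 1)
          lap = np.roll(u, 1) + np.roll(u, -1) - 2.0 * u
          lap[0] = u[1] - 2.0 * u[0]      # boundary rows (u = 0 outside)
          lap[-1] = u[-2] - 2.0 * u[-1]
          return lap / h**2 + np.exp(u)

      u0 = np.zeros(100)
      u = newton_krylov(residual, u0, method="lgmres", f_tol=1e-10)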

  4. Fast Bound Methods for Large Scale Simulation with Application for Engineering Optimization

    NASA Technical Reports Server (NTRS)

    Patera, Anthony T.; Peraire, Jaime; Zang, Thomas A. (Technical Monitor)

    2002-01-01

    In this work, we have focused on fast bound methods for large scale simulation with application for engineering optimization. The emphasis is on the development of techniques that provide both very fast turnaround and a certificate of fidelity; these attributes ensure that the results are indeed relevant to - and trustworthy within - the engineering context. The bound methodology which underlies this work has many different instantiations: finite element approximation; iterative solution techniques; and reduced-basis (parameter) approximation. In this grant we have, in fact, treated all three, but most of our effort has been concentrated on the first and third. We describe these below briefly - but with a pointer to an Appendix which describes, in some detail, the current "state of the art."

  5. Asymptotically Optimal Transmission Policies for Large-Scale Low-Power Wireless Sensor Networks

    SciTech Connect

    I. Ch. Paschalidis; W. Lai; D. Starobinski

    2007-02-01

    We consider wireless sensor networks with multiple gateways and multiple classes of traffic carrying data generated by different sensory inputs. The objective is to devise joint routing, power control and transmission scheduling policies in order to gather data in the most efficient manner while respecting the needs of different sensing tasks (fairness). We formulate the problem as maximizing the utility of transmissions subject to explicit fairness constraints and propose an efficient decomposition algorithm drawing upon large-scale decomposition ideas in mathematical programming. We show that our algorithm terminates in a finite number of iterations and produces a policy that is asymptotically optimal at low transmission power levels. Furthermore, we establish that the utility maximization problem we consider can, in principle, be solved in polynomial time. Numerical results show that our policy is near-optimal, even at high power levels, and far superior to the best known heuristics at low power levels. We also demonstrate how to adapt our algorithm to accommodate energy constraints and node failures. The approach we introduce can efficiently determine near-optimal transmission policies for dramatically larger problem instances than an alternative enumeration approach.

  6. Optimal short-term scheduling for a large-scale cascaded hydro system

    SciTech Connect

    Piekutowski, M.R.; Litwinowicz, T.; Frowd, R.J.

    1994-05-01

    This paper describes a short-term hydro generation optimization program that has been developed by the Hydro Electric Commission (HEC) to determine optimal generation schedules and to investigate export and import capabilities of the Tasmanian system under a proposed DC interconnection with mainland Australia. The optimal hydro scheduling problem is formulated as a large-scale linear programming problem and is solved using a commercially available linear programming package. The selected objective function requires minimization of the value of energy used by turbines and spilled during the study period. Alternative formulations of the objective function are also discussed. The system model incorporates the following elements: hydro station (turbine efficiency, turbine flow limits, penstock head losses, tailrace elevation and generator losses), hydro system (reservoirs and hydro network: active volume, spillway flow, flow between reservoirs and travel time), and other models including thermal plant and the DC link. A valuable by-product of the linear programming solution is system and unit incremental costs, which may be used for interchange scheduling and short-term generation dispatch.
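
    A toy single-reservoir version of such a formulation, with SciPy's linprog: the decision variables are hourly turbine releases, the objective prices the energy produced, and the storage limits become linear inequalities via cumulative sums. All numbers are invented for illustration; spill, head effects and network coupling are omitted.

      import numpy as np
      from scipy.optimize import linprog

      T = 24
      inflow = np.full(T, 50.0)                     # volume per hour
      price = 30.0 + 10.0 * np.sin(2 * np.pi * np.arange(T) / T)
      eta = 0.9                                     # energy per unit of release
      s0, s_min, s_max = 500.0, 200.0, 1000.0       # storage limits

      # Storage s_t = s0 + cumsum(inflow - q)_t; eliminate it with the
      # lower-triangular cumulative-sum operator Lmat.
      Lmat = np.tril(np.ones((T, T)))
      base = s0 + Lmat @ inflow
      A_ub = np.vstack([Lmat, -Lmat])               # enforce s_min <= s <= s_max
      b_ub = np.concatenate([base - s_min, s_max - base])

      res = linprog(c=-eta * price,                 # maximize energy value
                    A_ub=A_ub, b_ub=b_ub,
                    bounds=[(0.0, 80.0)] * T)       # turbine flow limits
      releases = res.x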

  7. A computer package for optimal multi-objective VAR planning in large scale power systems

    SciTech Connect

    Chiang, H.D.; Liu, C.C.; Chen, Y.L.; Hsiao, Y.T.

    1994-05-01

    This paper presents a simulated annealing based computer package for multi-objective VAR planning in large scale power systems - SAMVAR. This computer package has three distinct features. First, the optimal VAR planning is reformulated as a constrained, multi-objective, non-differentiable optimization problem. The new formulation considers four different objective functions related to system investment, system operational efficiency, system security and system service quality. The new formulation also takes into consideration load, operation and contingency constraints. Second, it allows both the objective functions and the equality and inequality constraints to be non-differentiable, making the problem formulation more realistic. Third, the package employs a two-stage solution algorithm based on an extended simulated annealing technique and the ε-constraint method. The first stage of the solution algorithm uses an extended simulated annealing technique to find a global, non-inferior solution. The results obtained from the first stage provide a basis for planners to prioritize the objective functions such that a primary objective function is chosen and trade-off tolerances for the other objective functions are set. The primary objective function and the trade-off tolerances are then used to transform the constrained multi-objective optimization problem into a single-objective optimization problem with more constraints by employing the ε-constraint method. The second stage uses the simulated annealing technique to find the global optimal solution. A salient feature of SAMVAR is that it allows planners to find an acceptable, global non-inferior solution for the VAR problem. Simulation results indicate that SAMVAR has the ability to handle the multi-objective VAR planning problem and meet the planner's requirements.

  8. HGO-based decentralised indirect adaptive fuzzy control for a class of large-scale nonlinear systems

    NASA Astrophysics Data System (ADS)

    Huang, Yi-Shao; Chen, Xiaoxin; Zhou, Shao-Wu; Yu, Ling-Li; Wang, Zheng-Wu

    2012-06-01

    In this article, a novel high-gain observer (HGO)-based decentralised indirect adaptive fuzzy controller is developed for a class of uncertain affine large-scale nonlinear systems. Through the combination of fuzzy logic systems and an HGO, the state variables are not required to be measurable. The proposed feedback and adaptation mechanisms guarantee that each subsystem is able to adaptively compensate for interconnections and disturbances with unknown bounds. It is ascertained using a singular perturbation method that all the signals of the closed-loop large-scale system remain uniformly ultimately bounded and the tracking errors converge to tunable neighbourhoods of the origin. Simulation results for correlated double inverted pendulums substantiate the effectiveness of the proposed controller.

  9. Optimization and Scalability of a Large-scale Earthquake Simulation Application

    NASA Astrophysics Data System (ADS)

    Cui, Y.; Olsen, K. B.; Hu, Y.; Day, S.; Dalguer, L. A.; Minster, B.; Moore, R.; Zhu, J.; Maechling, P.; Jordan, T.

    2006-12-01

    In 2004, the Southern California Earthquake Center (SCEC) initiated a major large-scale earthquake simulation, called TeraShake. TeraShake propagated seismic waves across a domain of 600 km by 300 km by 80 km at 200 meter resolution and 1.8 billion grid points, one of the largest and most detailed earthquake simulations of the southern San Andreas fault. The TeraShake 1 code is based on a 4th order FD Anelastic Wave Propagation Model (AWM), developed by K. Olsen, using a kinematic source description. The enhanced TeraShake 2 then added a new physics-based dynamic component, extending the capability to very large-scale earthquake simulations. A high 100 m resolution was used to generate a physically realistic earthquake source description for the San Andreas fault. The executions of very large-scale TeraShake 2 simulations with the high-resolution dynamic source used up to 1024 processors on the TeraGrid, adding more than 60 TB of simulation output to the 168 TB SCEC digital library managed by the SDSC Storage Resource Broker (SRB). The execution of these large simulations requires high levels of expertise and resource coordination. We examine the lessons learned in enabling the execution of the TeraShake application. In particular, we look at the challenges posed by single-processor optimization of the application performance, optimization of the I/O handling, optimization of the run initialization, and the execution of the data-intensive simulations. The TeraShake code was optimized to improve scalability to 2048 processors, with a parallel efficiency of 84%. Our latest TeraShake simulation sustains 1 Teraflop/s performance, completing a simulation in less than 9 hours on the SDSC DataStar. This is more than 10 times faster than previous TeraShake simulations. Some of the TeraShake production simulations were carried out using grid computing resources, including execution on NCSA TeraGrid resources, and run-time archiving of outputs onto SDSC

  10. A Stochastic Approach to Multiobjective Optimization of Large-Scale Water Reservoir Networks

    NASA Astrophysics Data System (ADS)

    Bottacin-Busolin, A.; Worman, A. L.

    2013-12-01

    A main challenge for the planning and management of water resources is the development of multiobjective strategies for operation of large-scale water reservoir networks. The optimal sequence of water releases from multiple reservoirs depends on the stochastic variability of correlated hydrologic inflows and on various processes that affect water demand and energy prices. Although several methods have been suggested, large-scale optimization problems arising in water resources management are still plagued by the high dimensional state space and by the stochastic nature of the hydrologic inflows. In this work, the optimization of reservoir operation is approached using approximate dynamic programming (ADP) with policy iteration and function approximators. The method is based on an off-line learning process in which operating policies are evaluated for a number of stochastic inflow scenarios, and the resulting value functions are used to design new, improved policies until convergence is attained. A case study is presented of a multi-reservoir system in the Dalälven River, Sweden, which includes 13 interconnected reservoirs and 36 power stations. Depending on the late spring and summer peak discharges, the lowlands adjacent to Dalälven can often be flooded during the summer period, and the presence of stagnating floodwater during the hottest months of the year is the cause of a large proliferation of mosquitos, which is a major problem for the people living in the surroundings. Chemical pesticides are currently being used as a preventive countermeasure, which do not provide an effective solution to the problem and have adverse environmental impacts. In this study, ADP was used to analyze the feasibility of alternative operating policies for reducing the flood risk at a reasonable economic cost for the hydropower companies. To this end, mid-term operating policies were derived by combining flood risk reduction with hydropower production objectives. The performance
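
    A minimal dynamic-programming kernel of the kind ADP approximates: backward value iteration on a discretized storage grid with a small stochastic inflow set. In the ADP scheme of the abstract the value table would be replaced by a function approximator and improved by policy iteration; every quantity below is invented for illustration.

      import numpy as np

      S = np.linspace(0.0, 100.0, 51)           # storage grid
      Q = np.linspace(0.0, 30.0, 16)            # candidate releases
      inflow = np.array([5.0, 15.0, 30.0])      # stochastic inflow scenarios
      prob = np.array([0.3, 0.5, 0.2])

      def reward(s, q):
          # Hydropower value minus a flood penalty above a storage threshold.
          return q - 5.0 * max(s - 90.0, 0.0)

      T = 52
      V = np.zeros((T + 1, S.size))
      for t in range(T - 1, -1, -1):
          for i, s in enumerate(S):
              vals = []
              for q in Q[Q <= s]:               # cannot release more than stored
                  s_next = np.clip(s - q + inflow, S[0], S[-1])
                  j = np.minimum(np.searchsorted(S, s_next), S.size - 1)
                  vals.append(reward(s, q) + prob @ V[t + 1, j])  # nearest-grid lookup
              V[t, i] = max(vals)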

  11. SfM with MRFs: discrete-continuous optimization for large-scale structure from motion.

    PubMed

    Crandall, David J; Owens, Andrew; Snavely, Noah; Huttenlocher, Daniel P

    2013-12-01

    Recent work in structure from motion (SfM) has built 3D models from large collections of images downloaded from the Internet. Many approaches to this problem use incremental algorithms that solve progressively larger bundle adjustment problems. These incremental techniques scale poorly as the image collection grows, and can suffer from drift or local minima. We present an alternative framework for SfM based on finding a coarse initial solution using hybrid discrete-continuous optimization and then improving that solution using bundle adjustment. The initial optimization step uses a discrete Markov random field (MRF) formulation, coupled with a continuous Levenberg-Marquardt refinement. The formulation naturally incorporates various sources of information about both the cameras and points, including noisy geotags and vanishing point (VP) estimates. We test our method on several large-scale photo collections, including one with measured camera positions, and show that it produces models that are similar to or better than those produced by incremental bundle adjustment, but more robustly and in a fraction of the time. PMID:24136425

  12. SfM with MRFs: Discrete-Continuous Optimization for Large-Scale Structure from Motion.

    PubMed

    Crandall, David J; Owens, Andrew; Snavely, Noah; Huttenlocher, Daniel P

    2012-10-01

    Recent work in structure from motion (SfM) has built 3D models from large collections of images downloaded from the Internet. Many approaches to this problem use incremental algorithms that solve progressively larger bundle adjustment problems. These incremental techniques scale poorly as the image collection grows, and can suffer from drift or local minima. We present an alternative framework for SfM based on finding a coarse initial solution using hybrid discrete-continuous optimization, and then improving that solution using bundle adjustment. The initial optimization step uses a discrete Markov random field (MRF) formulation, coupled with a continuous Levenberg-Marquardt refinement. The formulation naturally incorporates various sources of information about both the cameras and points, including noisy geotags and vanishing point estimates. We test our method on several large-scale photo collections, including one with measured camera positions, and show that it produces models that are similar to or better than those produced by incremental bundle adjustment, but more robustly and in a fraction of the time. PMID:23045369

  13. Optimization and large scale computation of an entropy-based moment closure

    SciTech Connect

    Hauck, Cory D.; Hill, Judith C.; Garrett, C. Kristopher

    2015-09-10

    We present computational advances and results in the implementation of an entropy-based moment closure, MN, in the context of linear kinetic equations, with an emphasis on heterogeneous and large-scale computing platforms. Entropy-based closures are known in several cases to yield more accurate results than closures based on standard spectral approximations, such as PN, but the computational cost is generally much higher and often prohibitive. Several optimizations are introduced to improve the performance of entropy-based algorithms over previous implementations. These optimizations include the use of GPU acceleration and the exploitation of the mathematical properties of spherical harmonics, which are used as test functions in the moment formulation. To test the emerging high-performance computing paradigm of communication bound simulations, we present timing results at the largest computational scales currently available. Lastly, these results show, in particular, load balancing issues in scaling the MN algorithm that do not appear for the PN algorithm. We also observe that in weak scaling tests, the ratio in time to solution of MN to PN decreases.

  14. Optimization and large scale computation of an entropy-based moment closure

    DOE PAGESBeta

    Hauck, Cory D.; Hill, Judith C.; Garrett, C. Kristopher

    2015-09-10

    We present computational advances and results in the implementation of an entropy-based moment closure, MN, in the context of linear kinetic equations, with an emphasis on heterogeneous and large-scale computing platforms. Entropy-based closures are known in several cases to yield more accurate results than closures based on standard spectral approximations, such as PN, but the computational cost is generally much higher and often prohibitive. Several optimizations are introduced to improve the performance of entropy-based algorithms over previous implementations. These optimizations include the use of GPU acceleration and the exploitation of the mathematical properties of spherical harmonics, which are used as test functions in the moment formulation. To test the emerging high-performance computing paradigm of communication bound simulations, we present timing results at the largest computational scales currently available. Lastly, these results show, in particular, load balancing issues in scaling the MN algorithm that do not appear for the PN algorithm. We also observe that in weak scaling tests, the ratio in time to solution of MN to PN decreases.

  15. Optimization and large scale computation of an entropy-based moment closure

    NASA Astrophysics Data System (ADS)

    Kristopher Garrett, C.; Hauck, Cory; Hill, Judith

    2015-12-01

    We present computational advances and results in the implementation of an entropy-based moment closure, MN, in the context of linear kinetic equations, with an emphasis on heterogeneous and large-scale computing platforms. Entropy-based closures are known in several cases to yield more accurate results than closures based on standard spectral approximations, such as PN, but the computational cost is generally much higher and often prohibitive. Several optimizations are introduced to improve the performance of entropy-based algorithms over previous implementations. These optimizations include the use of GPU acceleration and the exploitation of the mathematical properties of spherical harmonics, which are used as test functions in the moment formulation. To test the emerging high-performance computing paradigm of communication bound simulations, we present timing results at the largest computational scales currently available. These results show, in particular, load balancing issues in scaling the MN algorithm that do not appear for the PN algorithm. We also observe that in weak scaling tests, the ratio in time to solution of MN to PN decreases.

  16. Algorithmic enhancements and experience with a large scale SQP code for general nonlinear programming problems

    SciTech Connect

    Boggs, P.; Tolle, J.; Kearsley, A.

    1994-12-31

    We have developed a large scale sequential quadratic programming (SQP) code based on an interior-point method for solving general (convex or nonconvex) quadratic programs (QP). We often halt this QP solver prematurely by employing a trust-region strategy. This procedure typically reduces the overall cost of the code. In this talk we briefly review the algorithm and some of its theoretical justification and then discuss recent enhancements including automatic procedures for both increasing and decreasing the parameter in the merit function, a regularization procedure for dealing with linearly dependent active constraint gradients, and a method for modifying the linearized equality constraints. Some numerical results on a significant set of "real-world" problems will be presented.

  17. Assessment of economically optimal water management and geospatial potential for large-scale water storage

    NASA Astrophysics Data System (ADS)

    Weerasinghe, Harshi; Schneider, Uwe A.

    2010-05-01

    Water is an essential but limited and vulnerable resource for all socio-economic development and for maintaining healthy ecosystems. Water scarcity, accelerated by population expansion, improved living standards, and rapid growth in economic activities, has profound environmental and social implications. These include severe environmental degradation, declining groundwater levels, and increasing problems of water conflicts. Water scarcity is predicted to be one of the key factors limiting development in the 21st century. Climate scientists have projected spatial and temporal changes in precipitation and changes in the probability of intense floods and droughts in the future. As scarcity of accessible and usable water increases, demand for efficient water management and adaptation strategies increases as well. Addressing water scarcity requires an intersectoral and multidisciplinary approach to managing water resources; this would in turn keep social welfare and economic benefit at their optimal balance without compromising the sustainability of ecosystems. This paper presents a geographically explicit method to assess the potential for water storage with reservoirs and a dynamic model that identifies the dimensions and material requirements under an economically optimal water management plan. The methodology is applied to the Elbe and Nile river basins. Input data for geospatial analysis at the watershed level are taken from global data repositories and include data on elevation, rainfall, soil texture, soil depth, drainage, land use and land cover, which are then downscaled to 1 km spatial resolution. Runoff potential for different combinations of land use and hydraulic soil groups and for mean annual precipitation levels are derived by the SCS-CN method. Using the overlay and decision tree algorithms
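
    For reference, the SCS-CN relation used in the runoff step maps event rainfall P and a curve number CN (a function of land use and hydraulic soil group) to direct runoff Q; a small sketch in metric units:

      def scs_cn_runoff(P, CN, ia_ratio=0.2):
          """SCS curve-number direct runoff (mm) for event rainfall P (mm)."""
          S = 25400.0 / CN - 254.0     # potential maximum retention (mm)
          Ia = ia_ratio * S            # initial abstraction
          if P <= Ia:
              return 0.0
          return (P - Ia) ** 2 / (P - Ia + S)

      # Example: a 60 mm storm on a surface with CN = 75
      print(scs_cn_runoff(60.0, 75.0))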

  18. The optimization of large-scale density gradient isolation of human islets.

    PubMed

    Robertson, G S; Chadwick, D R; Contractor, H; James, R F; London, N J

    1993-01-01

    The use of the COBE 2991 cell processor (COBE Laboratories, Colorado) for large-scale islet purification using discontinuous density gradients has been widely adopted. It minimizes many of the problems, such as wall effects, normally encountered during centrifugation, and avoids the vortexing at interfaces that occurs during acceleration and deceleration by allowing the gradient to be formed and the islet-containing interface to be collected while continuing to spin. We have produced cross-sectional profiles of the 2991 bag during spinning, which allow the area of interfaces in such step gradients to be calculated. This allows the volumes of the gradient media layers loaded on the machine to be adjusted in order to maximize the area of the gradient interfaces. However, even using the maximal areas possible (144.5 cm^2), clogging of tissue at such interfaces limits the volume of digest which can be separated on one gradient to 15 ml. We have shown that a linear continuous density gradient can be produced within the 2991 bag, which allows as much as 40 ml of digest to be successfully purified. Such a system combines the intrinsic advantages of the 2991 with those of continuous density gradients and provides the optimal method for density-dependent islet purification. PMID:8219265

  19. Optimization of Cluster Heads for Energy Efficiency in Large-Scale Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Gu, Yi; Wu, Qishi

    Many complex sensor network applications require deploying a large number of inexpensive and small sensors in a vast geographical region to achieve quality through quantity. Hierarchical clustering is generally considered as an efficient and scalable way to facilitate the management and operation of such large-scale networks and minimize the total energy consumption for prolonged lifetime. Judicious selection of cluster heads for data integration and communication is critical to the success of applications based on hierarchical sensor networks organized as layered clusters. We investigate the problem of selecting nodes in a pre-deployed sensor network to be the cluster heads to minimize the total energy needed for data gathering. We rigorously derive an analytical formula to optimize the number of cluster heads in sensor networks under uniform node distribution, and propose a Distance-based Crowdedness Clustering algorithm to determine the cluster heads in sensor networks under general node distribution. The results from an extensive set of experiments on a large number of simulated sensor networks illustrate the performance superiority of the proposed solution over the clustering schemes based on k-means algorithm.
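
    The flavor of such an analytical optimum can be seen in the widely cited first-order radio model result for N uniformly distributed nodes in an M x M region (from the LEACH line of analysis; the formula derived in this work need not coincide with it):

      k_{\rm opt} = \sqrt{\frac{N}{2\pi}}\, \sqrt{\frac{\varepsilon_{fs}}{\varepsilon_{mp}}}\; \frac{M}{d_{\rm toBS}^2},

    where \varepsilon_{fs} and \varepsilon_{mp} are the free-space and multipath amplifier energies and d_{\rm toBS} is the distance to the base station.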

  20. Optimizing Cluster Heads for Energy Efficiency in Large-Scale Heterogeneous Wireless Sensor Networks

    DOE PAGESBeta

    Gu, Yi; Wu, Qishi; Rao, Nageswara S. V.

    2010-01-01

    Many complex sensor network applications require deploying a large number of inexpensive and small sensors in a vast geographical region to achieve quality through quantity. Hierarchical clustering is generally considered as an efficient and scalable way to facilitate the management and operation of such large-scale networks and minimize the total energy consumption for prolonged lifetime. Judicious selection of cluster heads for data integration and communication is critical to the success of applications based on hierarchical sensor networks organized as layered clusters. We investigate the problem of selecting sensor nodes in a predeployed sensor network to be the cluster heads to minimize the total energy needed for data gathering. We rigorously derive an analytical formula to optimize the number of cluster heads in sensor networks under uniform node distribution, and propose a Distance-based Crowdedness Clustering algorithm to determine the cluster heads in sensor networks under general node distribution. The results from an extensive set of experiments on a large number of simulated sensor networks illustrate the performance superiority of the proposed solution over the clustering schemes based on the k-means algorithm.

  1. Volterra representation enables modeling of complex synaptic nonlinear dynamics in large-scale simulations

    PubMed Central

    Hu, Eric Y.; Bouteiller, Jean-Marie C.; Song, Dong; Baudry, Michel; Berger, Theodore W.

    2015-01-01

    Chemical synapses are comprised of a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model capable of capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compared its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating complex nonlinear dynamics that were represented in the original mechanistic model and provide a method to replicate complex and diverse synaptic transmission within neuron network simulations. PMID:26441622
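
    As an illustration of the underlying machinery (not the authors' implementation), a discrete Volterra series truncated at second order can be evaluated directly from its kernels; the kernels and spike train below are toy placeholders.

```python
import numpy as np

def volterra_response(x, h1, h2):
    """Evaluate a discrete Volterra series truncated at second order:
    y[n] = sum_i h1[i]*x[n-i] + sum_{i,j} h2[i,j]*x[n-i]*x[n-j]."""
    M = len(h1)
    xp = np.concatenate([np.zeros(M - 1), x])   # zero-pad the pre-stimulus past
    y = np.zeros(len(x))
    for n in range(len(x)):
        w = xp[n:n + M][::-1]                   # x[n], x[n-1], ..., x[n-M+1]
        y[n] = h1 @ w + w @ h2 @ w
    return y

# Toy kernels: exponential first-order decay plus a weak paired-pulse term.
M = 50
t = np.arange(M)
h1 = np.exp(-t / 10.0)
h2 = 0.05 * np.outer(h1, h1)
spikes = np.zeros(200)
spikes[[20, 25, 120]] = 1.0
print(volterra_response(spikes, h1, h2)[:30].round(3))
```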

  2. Characterizing the nonlinear growth of large-scale structure in the Universe

    PubMed

    Coles; Chiang

    2000-07-27

    The local Universe displays a rich hierarchical pattern of galaxy clusters and superclusters. The early Universe, however, was almost smooth, with only slight 'ripples' as seen in the cosmic microwave background radiation. Models of the evolution of cosmic structure link these observations through the effect of gravity, because the small initially overdense fluctuations are predicted to attract additional mass as the Universe expands. During the early stages of this expansion, the ripples evolve independently, like linear waves on the surface of deep water. As the structures grow in mass, they interact with each other in nonlinear ways, more like waves breaking in shallow water. We have recently shown how cosmic structure can be characterized by phase correlations associated with these nonlinear interactions, but it was not clear how to use that information to obtain quantitative insights into the growth of structures. Here we report a method of revealing phase information, and show quantitatively how this relates to the formation of filaments, sheets and clusters of galaxies by nonlinear collapse. We develop a statistical method based on information entropy to separate linear from nonlinear effects, and thereby are able to disentangle those aspects of galaxy clustering that arise from initial conditions (the ripples) from the subsequent dynamical evolution. PMID:10935627

  3. Bayesian reconstruction of the cosmological large-scale structure: methodology, inverse algorithms and numerical optimization

    NASA Astrophysics Data System (ADS)

    Kitaura, F. S.; Enßlin, T. A.

    2008-09-01

    We address the inverse problem of cosmic large-scale structure reconstruction from a Bayesian perspective. For a linear data model, a number of known and novel reconstruction schemes, which differ in terms of the underlying signal prior, data likelihood and numerical inverse extraregularization schemes are derived and classified. The Bayesian methodology presented in this paper tries to unify and extend the following methods: Wiener filtering, Tikhonov regularization, ridge regression, maximum entropy and inverse regularization techniques. The inverse techniques considered here are the asymptotic regularization, the Jacobi, Steepest Descent, Newton-Raphson, Landweber-Fridman and both linear and non-linear Krylov methods based on Fletcher-Reeves, Polak-Ribière and Hestenes-Stiefel conjugate gradients. The structures of the up-to-date highest performing algorithms are presented, based on an operator scheme, which permits one to exploit the power of fast Fourier transforms. Using such an implementation of the generalized Wiener filter in the novel ARGO software package, the different numerical schemes are benchmarked with one-, two- and three-dimensional problems including structured white and Poissonian noise, data windowing and blurring effects. A novel numerical Krylov scheme is shown to be superior in terms of performance and fidelity. These fast inverse methods ultimately will enable the application of sampling techniques to explore complex joint posterior distributions. We outline how the space of the dark matter density field, the peculiar velocity field and the power spectrum can jointly be investigated by a Gibbs-sampling process. Such a method can be applied for the redshift distortions correction of the observed galaxies and for time-reversal reconstructions of the initial density field.
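
    The generalized Wiener filter at the heart of such schemes solves a linear system whose operator need only be applied, never formed. Below is a minimal sketch with diagonal signal and noise covariances and an identity response; the paper's ARGO implementation instead applies a translation-invariant prior through FFTs.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 256
rng = np.random.default_rng(1)
S = 1.0 / (1.0 + np.arange(n, dtype=float))  # assumed signal prior variances
N = 0.1 * np.ones(n)                         # assumed noise variances
s_true = rng.normal(0.0, np.sqrt(S))
d = s_true + rng.normal(0.0, np.sqrt(N))     # data model d = s + noise, response R = I

# Wiener filter: solve (S^-1 + N^-1) s = N^-1 d with a matrix-free operator.
A = LinearOperator((n, n), matvec=lambda x: x / S + x / N, dtype=float)
s_wf, info = cg(A, d / N)
print("CG converged:", info == 0)
```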

  4. Library for Nonlinear Optimization

    Energy Science and Technology Software Center (ESTSC)

    2001-10-09

    OPT++ is a C++ object-oriented library for nonlinear optimization. This incorporates an improved implementation of an existing capability and two new algorithmic capabilities based on existing journal articles and freely available software.

  5. Adaptive fuzzy decentralised fault-tolerant control for nonlinear large-scale systems with actuator failures and unmodelled dynamics

    NASA Astrophysics Data System (ADS)

    Xu, Yinyin; Tong, Shaocheng; Li, Yongming

    2015-09-01

    This paper discusses the adaptive fuzzy decentralised fault-tolerant control (FTC) problem for a class of nonlinear large-scale systems in strict-feedback form. The systems under study contain unknown nonlinearities, unmodelled dynamics and actuator faults, and the state variables are not directly measured. Fuzzy logic systems are used to identify the unknown functions, and a fuzzy adaptive observer is designed to estimate the unmeasured states. By using the backstepping design technique and the dynamic surface control approach, combined with the changing supply function technique, a fuzzy adaptive FTC scheme is developed. The main features of the proposed control approach are that it guarantees the closed-loop system to be input-to-state practically stable and is robust to the unmodelled dynamics. Moreover, it overcomes the so-called problem of 'explosion of complexity' existing in the previous literature. Finally, simulation studies are provided to illustrate the effectiveness of the proposed approach.

  6. Characteristic-based non-linear simulation of large-scale standing-wave thermoacoustic engine.

    PubMed

    Abd El-Rahman, Ahmed I; Abdel-Rahman, Ehab

    2014-08-01

    A few linear theories [Swift, J. Acoust. Soc. Am. 84(4), 1145-1180 (1988); Swift, J. Acoust. Soc. Am. 92(3), 1551-1563 (1992); Olson and Swift, J. Acoust. Soc. Am. 95(3), 1405-1412 (1994)] and numerical models, based on low-Mach number analysis [Worlikar and Knio, J. Comput. Phys. 127(2), 424-451 (1996); Worlikar et al., J. Comput. Phys. 144(2), 199-324 (1996); Hireche et al., Canadian Acoust. 36(3), 164-165 (2008)], describe the flow dynamics of standing-wave thermoacoustic engines, but almost no simulation results are available that enable the prediction of the behavior of practical engines experiencing significant temperature gradient between the stack ends and thus producing large-amplitude oscillations. Here, a one-dimensional non-linear numerical simulation based on the method of characteristics to solve the unsteady compressible Euler equations is reported. Formulation of the governing equations, implementation of the numerical method, and application of the appropriate boundary conditions are presented. The calculation uses explicit time integration along with deduced relationships, expressing the friction coefficient and the Stanton number for oscillating flow inside circular ducts. Helium, a mixture of Helium and Argon, and Neon are used for system operation at mean pressures of 13.8, 9.9, and 7.0 bars, respectively. The self-induced pressure oscillations are accurately captured in the time domain, and then transferred into the frequency domain, distinguishing the pressure signals into fundamental and harmonic responses. The results obtained are compared with reported experimental works [Swift, J. Acoust. Soc. Am. 92(3), 1551-1563 (1992); Olson and Swift, J. Acoust. Soc. Am. 95(3), 1405-1412 (1994)] and the linear theory, showing better agreement with the measured values, particularly in the non-linear regime of the dynamic pressure response. PMID:25096100

  7. Reduction of Large-scale Turbulence and Optimization of Flows in the Madison Dynamo Experiment

    NASA Astrophysics Data System (ADS)

    Taylor, N. Z.

    2011-10-01

    large-scale flow.

  8. Parallel Optimization of Polynomials for Large-scale Problems in Stability and Control

    NASA Astrophysics Data System (ADS)

    Kamyar, Reza

    In this thesis, we focus on some of the NP-hard problems in control theory. Thanks to the converse Lyapunov theory, these problems can often be modeled as optimization over polynomials. To avoid the problem of intractability, we establish a trade off between accuracy and complexity. In particular, we develop a sequence of tractable optimization problems --- in the form of Linear Programs (LPs) and/or Semi-Definite Programs (SDPs) --- whose solutions converge to the exact solution of the NP-hard problem. However, the computational and memory complexity of these LPs and SDPs grow exponentially with the progress of the sequence - meaning that improving the accuracy of the solutions requires solving SDPs with tens of thousands of decision variables and constraints. Setting up and solving such problems is a significant challenge. The existing optimization algorithms and software are only designed to use desktop computers or small cluster computers --- machines which do not have sufficient memory for solving such large SDPs. Moreover, the speed-up of these algorithms does not scale beyond dozens of processors. This in fact is the reason we seek parallel algorithms for setting-up and solving large SDPs on large cluster- and/or super-computers. We propose parallel algorithms for stability analysis of two classes of systems: 1) Linear systems with a large number of uncertain parameters; 2) Nonlinear systems defined by polynomial vector fields. First, we develop a distributed parallel algorithm which applies Polya's and/or Handelman's theorems to some variants of parameter-dependent Lyapunov inequalities with parameters defined over the standard simplex. The result is a sequence of SDPs which possess a block-diagonal structure. We then develop a parallel SDP solver which exploits this structure in order to map the computation, memory and communication to a distributed parallel environment. Numerical tests on a supercomputer demonstrate the ability of the algorithm to

  9. Dependency-tracking object-oriented multidisciplinary design optimization (MDO) formulation on a large-scale system

    NASA Astrophysics Data System (ADS)

    Ahlqvist, Maria Alexandra

    2001-12-01

    Advances in computer technology and analysis software are making optimization of engineering systems more attractive and affordable than ever before. Optimization is becoming a necessary tool in order for companies to stay competitive. While the concept of optimization has been known almost as long as mankind, specific procedures on how to optimize engineering systems are younger. Currently, efforts are being made to reduce the computational time and simplify the organizational complexity involved with solving multidisciplinary systems. The work presented in this dissertation deals with how an object-oriented, dependency-tracking, demand-driven language can be used in reducing the computational time in performing multidisciplinary design optimizations. The work also discusses how the object-oriented language was used in integrating optimization functionality with a missile design system. The object-oriented dependency-tracking demand-driven language is applied to a large-scale multidisciplinary missile system involving disciplines such as a geometry engine, weight analysis, propulsion, aerodynamics, trajectory analysis, and cost analysis. Also discussed is the need for using approximations in optimizing a large-scale system. Designed experiments and response surface techniques were employed in creating approximation models for the problem at hand. Using these approximations to evaluate the responses was found to be useful at points in the design space where one or more responses could not otherwise be evaluated. Different optimization schemes were studied including response surface analysis of different resolutions in conjunction with higher fidelity optimization and higher fidelity optimization without approximation models. The contributions of this work are the application of MDO capabilities to a large-scale missile design system modeled in an object-oriented dependency-tracking environment, the use of response surface approximations to fit areas in the design

  10. Modulational stability of weakly nonlinear wave-trains in media with small- and large-scale dispersions

    NASA Astrophysics Data System (ADS)

    Nikitenkova, S.; Singh, N.; Stepanyants, Y.

    2015-12-01

    In this paper, we revisit the problem of modulation stability of quasi-monochromatic wave-trains propagating in media with double dispersion occurring at both small and large wavenumbers. We start with the shallow-water equations derived by Shrira [Izv., Acad. Sci., USSR, Atmos. Ocean. Phys. (Engl. Transl.) 17, 55-59 (1981)], which describe both surface and internal long waves in a rotating fluid. The small-scale (Boussinesq-type) dispersion is assumed to be weak, whereas the large-scale (Coriolis-type) dispersion is considered without any restriction. For waves propagating in one direction only, the considered set of equations reduces to the Gardner-Ostrovsky equation, which is applicable only within a finite range of wavenumbers. We derive the nonlinear Schrödinger equation (NLSE) which describes the evolution of narrow-band wave-trains and show that within a more general bi-directional equation the wave-trains, similar to those derived from the Ostrovsky equation, are also modulationally stable at relatively small wavenumbers k < kc and unstable at k > kc, where kc is some critical wavenumber. The NLSE derived here has a wider range of applicability: it is valid for arbitrarily small wavenumbers. We present the analysis of the coefficients of the NLSE for different signs of the coefficients of the governing equation and compare them with those derived from the Ostrovsky equation. The analysis shows that for weakly dispersive waves in the range of parameters where the Gardner-Ostrovsky equation is valid, the cubic nonlinearity does not contribute to the nonlinear coefficient of the NLSE; therefore, the NLSE can be correctly derived from the Ostrovsky equation.

  11. Modulational stability of weakly nonlinear wave-trains in media with small- and large-scale dispersions.

    PubMed

    Nikitenkova, S; Singh, N; Stepanyants, Y

    2015-12-01

    In this paper, we revisit the problem of modulation stability of quasi-monochromatic wave-trains propagating in media with double dispersion occurring at both small and large wavenumbers. We start with the shallow-water equations derived by Shrira [Izv., Acad. Sci., USSR, Atmos. Ocean. Phys. (Engl. Transl.) 17, 55-59 (1981)], which describe both surface and internal long waves in a rotating fluid. The small-scale (Boussinesq-type) dispersion is assumed to be weak, whereas the large-scale (Coriolis-type) dispersion is considered without any restriction. For waves propagating in one direction only, the considered set of equations reduces to the Gardner-Ostrovsky equation, which is applicable only within a finite range of wavenumbers. We derive the nonlinear Schrödinger equation (NLSE) which describes the evolution of narrow-band wave-trains and show that within a more general bi-directional equation the wave-trains, similar to those derived from the Ostrovsky equation, are also modulationally stable at relatively small wavenumbers k < kc and unstable at k > kc, where kc is some critical wavenumber. The NLSE derived here has a wider range of applicability: it is valid for arbitrarily small wavenumbers. We present the analysis of the coefficients of the NLSE for different signs of the coefficients of the governing equation and compare them with those derived from the Ostrovsky equation. The analysis shows that for weakly dispersive waves in the range of parameters where the Gardner-Ostrovsky equation is valid, the cubic nonlinearity does not contribute to the nonlinear coefficient of the NLSE; therefore, the NLSE can be correctly derived from the Ostrovsky equation. PMID:26723152
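
    The stability statement can be illustrated with the Lighthill criterion: a carrier at wavenumber k is modulationally unstable when the NLSE dispersion coefficient p = ω''(k)/2 and the nonlinear coefficient q share the same sign. The dispersion relation and the constant q below are toy stand-ins, not the coefficients derived in the paper.

```python
import numpy as np

f, c, beta = 1e-4, 1.0, 0.05            # assumed Coriolis, wave speed, small-scale dispersion

def omega(k):
    # toy dispersion: rotation at small k, weak Boussinesq-type correction at large k
    return np.sqrt(f**2 + (c * k)**2 - beta * k**4)

k = np.linspace(0.01, 3.0, 500)
p = np.gradient(np.gradient(omega(k), k), k) / 2.0   # numerical omega''(k)/2
q = -1.0                                             # placeholder nonlinear coefficient

unstable = p * q > 0.0                               # Lighthill criterion
kc = k[np.argmax(unstable)] if unstable.any() else None
print("first unstable wavenumber (toy model):", kc)
```

    With these toy coefficients the carrier is stable at small k, where rotation dominates, and becomes unstable beyond a critical kc, mirroring the qualitative picture in the abstract.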

  12. Optimization of large-scale mouse brain connectome via joint evaluation of DTI and neuron tracing data.

    PubMed

    Chen, Hanbo; Liu, Tao; Zhao, Yu; Zhang, Tuo; Li, Yujie; Li, Meng; Zhang, Hongmiao; Kuang, Hui; Guo, Lei; Tsien, Joe Z; Liu, Tianming

    2015-07-15

    Tractography based on diffusion tensor imaging (DTI) data has been used as a tool by a large number of recent studies to investigate structural connectome. Despite its great success in offering unique 3D neuroanatomy information, DTI is an indirect observation with limited resolution and accuracy and its reliability is still unclear. Thus, it is essential to answer this fundamental question: how reliable is DTI tractography in constructing large-scale connectome? To answer this question, we employed neuron tracing data of 1772 experiments on the mouse brain released by the Allen Mouse Brain Connectivity Atlas (AMCA) as the ground-truth to assess the performance of DTI tractography in inferring white matter fiber pathways and inter-regional connections. For the first time in the neuroimaging field, the performance of whole brain DTI tractography in constructing a large-scale connectome has been evaluated by comparison with tracing data. Our results suggested that only with the optimized tractography parameters and the appropriate scale of brain parcellation scheme, can DTI produce relatively reliable fiber pathways and a large-scale connectome. Meanwhile, a considerable amount of errors were also identified in optimized DTI tractography results, which we believe could be potentially alleviated by efforts in developing better DTI tractography approaches. In this scenario, our framework could serve as a reliable and quantitative test bed to identify errors in tractography results which will facilitate the development of such novel tractography algorithms and the selection of optimal parameters. PMID:25953631
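
    A generic way to score a DTI-derived connectome against tracer ground truth (the paper's evaluation is more elaborate) is binary precision and recall over region pairs; the region count and matrices below are placeholders.

```python
import numpy as np

def connectome_agreement(dti, tracer, threshold=0.0):
    """Compare a DTI streamline-count matrix against a tracer ground truth.

    Both inputs are (R x R) region-by-region matrices; entries above
    `threshold` count as detected connections."""
    pred = dti > threshold
    true = tracer > 0
    tp = np.logical_and(pred, true).sum()
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(true.sum(), 1)
    return precision, recall

rng = np.random.default_rng(0)
tracer = (rng.random((213, 213)) < 0.1).astype(float)    # assumed parcellation size
dti = tracer * rng.random((213, 213)) + 0.05 * rng.random((213, 213))
print(connectome_agreement(dti, tracer, threshold=0.1))
```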

  13. Fault diagnosis of nonlinear and large-scale processes using novel modified kernel Fisher discriminant analysis approach

    NASA Astrophysics Data System (ADS)

    Shi, Huaitao; Liu, Jianchang; Wu, Yuhou; Zhang, Ke; Zhang, Lixiu; Xue, Peng

    2016-04-01

    Timely and accurate fault diagnosis is significant for improving the dependability of industrial processes. In this study, fault diagnosis of nonlinear and large-scale processes by variable-weighted kernel Fisher discriminant analysis (KFDA) based on improved biogeography-based optimisation (IBBO) is proposed, referred to as IBBO-KFDA, where IBBO is used to determine the parameters of variable-weighted KFDA, and variable-weighted KFDA is used to solve the multi-classification overlapping problem. The main contributions of this work are four-fold, further improving the performance of KFDA for fault diagnosis. First, a nonlinear fault diagnosis approach with variable-weighted KFDA is developed for maximising separation between the overlapping fault samples. Second, kernel parameters and feature selection of variable-weighted KFDA are simultaneously optimised using IBBO. Third, a single fitness function that combines the erroneous diagnosis rate with the feature cost is created and serves as the target function in the optimisation problem, and a novel mixed kernel function is introduced to improve the classification capability in the feature space and the diagnosis accuracy of IBBO-KFDA. Finally, an IBBO approach is developed to obtain better solution quality and faster convergence speed. On the one hand, the proposed IBBO-KFDA method is first used on Tennessee Eastman process benchmark data sets to validate the feasibility and efficiency. On the other hand, IBBO-KFDA is applied to diagnose faults of an automation gauge control system. Simulation results demonstrate that IBBO-KFDA can obtain better kernel parameters and feature vectors with a lower computing cost, higher diagnosis accuracy and a better real-time capacity.
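
    The abstract names two ingredients that are easy to sketch in isolation: a mixed kernel built as a convex combination of standard kernels, and a single fitness blending diagnosis error with feature cost. The exact forms and weights used in IBBO-KFDA are not given, so the following is an assumed illustration.

```python
import numpy as np

def mixed_kernel(X, Y, gamma=0.5, degree=2, lam=0.7):
    """Convex combination of an RBF and a polynomial kernel: one common way
    to build a 'mixed' kernel; the paper's exact form may differ."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    rbf = np.exp(-gamma * sq)
    poly = (X @ Y.T + 1.0) ** degree
    return lam * rbf + (1.0 - lam) * poly

def fitness(error_rate, feature_mask, costs, alpha=0.9):
    """Single objective mixing the misdiagnosis rate with the cost of the
    selected features, as the abstract describes; the weights are assumptions."""
    return alpha * error_rate + (1 - alpha) * (costs * feature_mask).sum() / costs.sum()
```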

  14. Strategic optimization of large-scale vertical closed-loop shallow geothermal systems

    NASA Astrophysics Data System (ADS)

    Hecht-Méndez, J.; de Paly, M.; Beck, M.; Blum, P.; Bayer, P.

    2012-04-01

    Vertical closed-loop geothermal systems or ground source heat pump (GSHP) systems with multiple vertical borehole heat exchangers (BHEs) are attractive technologies that provide heating and cooling to large facilities such as hotels, schools, big office buildings or district heating systems. Currently, the worldwide number of installed systems shows a recurrent increase. By running arrays of multiple BHEs, the energy demand of a given facility is fulfilled by exchanging heat with the ground. Due to practical and technical reasons, square arrays of the BHEs are commonly used and the total energy extraction from the subsurface is accomplished by an equal operation of each BHE. Moreover, standard designing practices disregard the presence of groundwater flow. We present a simulation-optimization approach that is able to regulate the individual operation of multiple BHEs, depending on the given hydro-geothermal conditions. The developed approach optimizes the overall performance of the geothermal system while mitigating the environmental impact. As an example, a synthetic case with a geothermal system using 25 BHEs for supplying a seasonal heating energy demand is defined. The optimization approach is evaluated for finding optimal energy extractions for 15 scenarios with different specific constant groundwater flow velocities. Ground temperature development is simulated using the optimal energy extractions and contrasted against standard application. It is demonstrated that optimized systems always level the ground temperature distribution and generate smaller subsurface temperature changes than non-optimized ones. Mean underground temperature changes within the studied BHE field are between 13% and 24% smaller when the optimized system is used. By applying the optimized energy extraction patterns, the temperature of the heat carrier fluid in the BHE, which controls the overall performance of the system, can also be raised by more than 1 °C.
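
    The core allocation problem can be posed as a small linear program: split the seasonal demand across BHEs so that the largest (linearized) ground temperature change is minimized. The per-borehole response factors below stand in for the hydro-geothermal conditions and are invented.

```python
import numpy as np
from scipy.optimize import linprog

n, demand = 25, 100.0                    # BHEs and total energy extraction (assumed units)
rng = np.random.default_rng(2)
g = rng.uniform(0.5, 1.5, n)             # assumed temperature response per unit extraction

# Variables: x[0..n-1] extractions, x[n] = t (the max temperature change).
# minimize t  s.t.  g_i * x_i - t <= 0,  sum x_i = demand,  0 <= x_i <= x_max
c = np.r_[np.zeros(n), 1.0]
A_ub = np.hstack([np.diag(g), -np.ones((n, 1))])
b_ub = np.zeros(n)
A_eq = np.r_[np.ones(n), 0.0].reshape(1, -1)
b_eq = [demand]
bounds = [(0, 8.0)] * n + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x[:n].round(2), "max dT proxy:", res.x[n])
```

    BHEs sitting in favourable conditions (small g) receive larger shares of the load, which is the qualitative behaviour the study reports for optimized systems.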

  15. Large-scale regionalization of water table depth in peatlands optimized for greenhouse gas emission upscaling

    NASA Astrophysics Data System (ADS)

    Bechtold, M.; Tiemeyer, B.; Laggner, A.; Leppelt, T.; Frahm, E.; Belting, S.

    2014-04-01

    Fluxes of the three main greenhouse gases (GHG) CO2, CH4 and N2O from peat and other organic soils are strongly controlled by water table depth. Information about the spatial distribution of water level is thus a crucial input parameter when upscaling GHG emissions to large scales. Here, we investigate the potential of statistical modeling for the regionalization of water levels in organic soils when data covers only a small fraction of the peatlands of the final map. Our study area is Germany. Phreatic water level data from 53 peatlands in Germany were compiled in a new dataset comprising 1094 dip wells and 7155 years of data. For each dip well, numerous possible predictor variables were determined using nationally available data sources, which included information about land cover, ditch network, protected areas, topography, peatland characteristics and climatic boundary conditions. We applied boosted regression trees to identify dependencies between predictor variables and dip well specific long-term annual mean water level (WL) as well as a transformed form of it (WLt). The latter was obtained by assuming a hypothetical GHG transfer function and is linearly related to GHG emissions. Our results demonstrate that model calibration on WLt is superior. It increases the explained variance of the water level in the sensitive range for GHG emissions and avoids model bias in subsequent GHG upscaling. The final model explained 45% of WLt variance and was built on nine predictor variables that are based on information about land cover, peatland characteristics, drainage network, topography and climatic boundary conditions. Their individual effects on WLt and the observed parameter interactions provide insights into natural and anthropogenic boundary conditions that control water levels in organic soils. Our study also demonstrates that a large fraction of the observed WLt variance cannot be explained by nationally available predictor variables and that predictors with
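
    A skeletal version of the workflow follows, with placeholder data and a hypothetical sigmoid transfer function standing in for the GHG transfer used to define WLt.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)
X = rng.normal(size=(1094, 9))                   # 9 predictors per dip well (placeholder)
wl = rng.uniform(-1.2, 0.2, size=1094)           # annual mean water level (m, placeholder)

def ghg_transfer(wl):
    # hypothetical transfer: emissions most sensitive for water levels near the surface
    return 1.0 / (1.0 + np.exp(-8.0 * (wl + 0.4)))

wlt = ghg_transfer(wl)                           # transformed target WLt
model = GradientBoostingRegressor(n_estimators=500, learning_rate=0.02,
                                  max_depth=3, subsample=0.8)
model.fit(X, wlt)
print("R^2 on training data:", model.score(X, wlt))
```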

  16. Optimized Large-scale CMB Likelihood and Quadratic Maximum Likelihood Power Spectrum Estimation

    NASA Astrophysics Data System (ADS)

    Gjerløw, E.; Colombo, L. P. L.; Eriksen, H. K.; Górski, K. M.; Gruppuso, A.; Jewell, J. B.; Plaszczynski, S.; Wehus, I. K.

    2015-11-01

    We revisit the problem of exact cosmic microwave background (CMB) likelihood and power spectrum estimation with the goal of minimizing computational costs through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al., and here we develop it into a fully functioning computational framework for large-scale polarization analysis, adopting WMAP as a working example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors, and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loeve and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked WMAP sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum error increase of any single multipole of 3.8% at ℓ ≤ 32 and a maximum shift in the mean values of a joint distribution of an amplitude-tilt model of 0.006σ. This compression reduces the computational cost of a single likelihood evaluation by a factor of 5, from 38 to 7.5 CPU seconds, and it also results in a more robust likelihood by implicitly regularizing nearly degenerate modes. Finally, we use the same compression framework to formulate a numerically stable and computationally efficient variation of the Quadratic Maximum Likelihood implementation, which requires less than 3 GB of memory and 2 CPU minutes per iteration for ℓ ≤ 32, rendering low-ℓ QML CMB power spectrum analysis fully tractable on a standard laptop.
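
    The compression step amounts to a generalized eigenproblem S v = λ N v, keeping only modes above a signal-to-noise cut. A minimal sketch with random stand-in covariances:

```python
import numpy as np
from scipy.linalg import eigh

n = 200
rng = np.random.default_rng(4)
A = rng.normal(size=(n, n))
S = A @ A.T / n                          # signal covariance (placeholder)
N = np.diag(rng.uniform(0.5, 2.0, n))    # noise covariance (placeholder)

lam, V = eigh(S, N)                      # generalized S/N eigenmodes
keep = lam > 0.1                         # retain modes above an assumed S/N cut
print(f"compressed {n} pixels into {keep.sum()} modes")

d = rng.normal(size=n)                   # a data vector
d_c = V[:, keep].T @ d                   # compressed data
```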

  17. Large-Scale Multi-Objective Optimization for the Management of Seawater Intrusion, Santa Barbara, CA

    NASA Astrophysics Data System (ADS)

    Stanko, Z. P.; Nishikawa, T.; Paulinski, S. R.

    2015-12-01

    The City of Santa Barbara, located in coastal southern California, is concerned that excessive groundwater pumping will lead to chloride (Cl) contamination of its groundwater system from seawater intrusion (SWI). In addition, the city wishes to estimate the effect of continued pumping on the groundwater basin under a variety of initial and climatic conditions. A SEAWAT-based groundwater-flow and solute-transport model of the Santa Barbara groundwater basin was optimized to produce optimal pumping schedules assuming 5 different scenarios. Borg, a multi-objective genetic algorithm, was coupled with the SEAWAT model to identify optimal management strategies. The optimization problems were formulated as multi-objective so that the tradeoffs between maximizing pumping, minimizing SWI, and minimizing drawdowns can be examined by the city. Decisions can then be made on a pumping schedule in light of current preferences and climatic conditions. Borg was used to produce Pareto optimal results for all 5 scenarios, which vary in their initial conditions (high water levels, low water levels, or current basin state), simulated climate (normal or drought conditions), and problem formulation (objective equations and decision-variable aggregation). Results show mostly well-defined Pareto surfaces with a few singularities. Furthermore, the results identify the precise pumping schedule per well that was suitable given the desired restriction on drawdown and Cl concentrations. A system of decision-making is then possible based on various observations of the basin's hydrologic states and climatic trends without having to run any further optimizations. In addition, an assessment of selected Pareto-optimal solutions was analyzed with sensitivity information using the simulation model alone. A wide range of possible groundwater pumping scenarios is available and depends heavily on the future climate scenarios and the Pareto-optimal solution selected while managing the pumping wells.
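
    Whatever optimizer produces the candidate pumping schedules, inspecting the tradeoff surface reduces to filtering non-dominated points. A generic Pareto filter is sketched below; the study itself coupled the Borg MOEA to SEAWAT.

```python
import numpy as np

def pareto_front(objectives):
    """Return indices of non-dominated points (all objectives minimized)."""
    F = np.asarray(objectives)
    n = len(F)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if keep[i]:
            # points that are no better anywhere and strictly worse somewhere
            dominated = np.all(F >= F[i], axis=1) & np.any(F > F[i], axis=1)
            keep &= ~dominated
    return np.flatnonzero(keep)

# e.g. columns could be (-pumping, seawater-intrusion mass, drawdown)
F = np.random.default_rng(8).random((200, 3))
print(len(pareto_front(F)), "non-dominated schedules")
```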

  18. Large-scale regionalization of water table depth in peatlands optimized for greenhouse gas emission upscaling

    NASA Astrophysics Data System (ADS)

    Bechtold, M.; Tiemeyer, B.; Laggner, A.; Leppelt, T.; Frahm, E.; Belting, S.

    2014-09-01

    Fluxes of the three main greenhouse gases (GHG) CO2, CH4 and N2O from peat and other soils with high organic carbon contents are strongly controlled by water table depth. Information about the spatial distribution of water level is thus a crucial input parameter when upscaling GHG emissions to large scales. Here, we investigate the potential of statistical modeling for the regionalization of water levels in organic soils when data covers only a small fraction of the peatlands of the final map. Our study area is Germany. Phreatic water level data from 53 peatlands in Germany were compiled in a new data set comprising 1094 dip wells and 7155 years of data. For each dip well, numerous possible predictor variables were determined using nationally available data sources, which included information about land cover, ditch network, protected areas, topography, peatland characteristics and climatic boundary conditions. We applied boosted regression trees to identify dependencies between predictor variables and dip-well-specific long-term annual mean water level (WL) as well as a transformed form (WLt). The latter was obtained by assuming a hypothetical GHG transfer function and is linearly related to GHG emissions. Our results demonstrate that model calibration on WLt is superior. It increases the explained variance of the water level in the sensitive range for GHG emissions and avoids model bias in subsequent GHG upscaling. The final model explained 45% of WLt variance and was built on nine predictor variables that are based on information about land cover, peatland characteristics, drainage network, topography and climatic boundary conditions. Their individual effects on WLt and the observed parameter interactions provide insight into natural and anthropogenic boundary conditions that control water levels in organic soils. Our study also demonstrates that a large fraction of the observed WLt variance cannot be explained by nationally available predictor variables and

  19. Large Scale Multi-area Static/Dynamic Economic Dispatch using Nature Inspired Optimization

    NASA Astrophysics Data System (ADS)

    Pandit, Manjaree; Jain, Kalpana; Dubey, Hari Mohan; Singh, Rameshwar

    2016-07-01

    Economic dispatch (ED) ensures that the generation allocation to the power units is carried out such that the total fuel cost is minimized and all the operating equality/inequality constraints are satisfied. Classical ED does not take transmission constraints into consideration, but in present restructured power systems the tie-line limits play a very important role in deciding operational policies. ED is a dynamic problem which is performed on-line in the central load dispatch centre with changing load scenarios. The dynamic multi-area ED (MAED) problem is more complex due to the additional tie-line, ramp-rate and area-wise power balance constraints. Nature-inspired (NI) heuristic optimization methods are gaining popularity over traditional methods for complex problems. This work presents modified particle swarm optimization (PSO) based techniques in which parameter automation is effectively used to improve search efficiency by avoiding stagnation at a sub-optimal result. This work validates the performance of the PSO variants against the traditional solver GAMS for single-area as well as multi-area economic dispatch (MAED) on three test cases of a large 140-unit standard test system having complex constraints.
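
    A toy single-area dispatch shows the PSO mechanics the paper builds on: quadratic fuel costs, box limits per unit, and the power balance handled by a penalty. All unit data are invented, and the paper's MAED adds tie-line and ramp-rate constraints on top.

```python
import numpy as np

rng = np.random.default_rng(5)
n_units, demand = 6, 700.0
a = rng.uniform(0.002, 0.008, n_units)           # quadratic fuel-cost coefficients
b = rng.uniform(7.0, 9.0, n_units)
c = rng.uniform(200.0, 400.0, n_units)
pmin, pmax = np.full(n_units, 50.0), np.full(n_units, 200.0)

def cost(P):
    fuel = (a * P**2 + b * P + c).sum(axis=1)
    return fuel + 1e4 * np.abs(P.sum(axis=1) - demand)   # penalty for imbalance

swarm = rng.uniform(pmin, pmax, (40, n_units))
vel = np.zeros_like(swarm)
pbest, pbest_f = swarm.copy(), cost(swarm)
for it in range(300):
    gbest = pbest[pbest_f.argmin()]
    w = 0.9 - 0.5 * it / 300                             # linearly decreasing inertia
    vel = (w * vel + 2.0 * rng.random(swarm.shape) * (pbest - swarm)
           + 2.0 * rng.random(swarm.shape) * (gbest - swarm))
    swarm = np.clip(swarm + vel, pmin, pmax)             # enforce unit limits
    f = cost(swarm)
    better = f < pbest_f
    pbest[better], pbest_f[better] = swarm[better], f[better]

print("best cost:", pbest_f.min(), "dispatch:", pbest[pbest_f.argmin()].round(1))
```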

  20. Gradient-Based Aerodynamic Shape Optimization Using ADI Method for Large-Scale Problems

    NASA Technical Reports Server (NTRS)

    Pandya, Mohagna J.; Baysal, Oktay

    1997-01-01

    A gradient-based shape optimization methodology, intended for practical three-dimensional aerodynamic applications, has been developed. It is based on quasi-analytical sensitivities. The flow analysis is rendered by a fully implicit, finite volume formulation of the Euler equations. The aerodynamic sensitivity equation is solved using the alternating-direction-implicit (ADI) algorithm for memory efficiency. A flexible wing geometry model, based on surface parameterization and planform schedules, is utilized. The present methodology and its components have been tested via several comparisons. Initially, the flow analysis for a wing is compared with those obtained using an unfactored, preconditioned conjugate gradient (PCG) approach and an extensively validated CFD code. Then, the sensitivities computed with the present method have been compared with those obtained using the finite-difference and the PCG approaches. Effects of grid refinement and convergence tolerance on the analysis and shape optimization have been explored. Finally, the new procedure has been demonstrated in the design of a cranked arrow wing at Mach 2.4. Despite the expected increase in computational time, the results indicate that shape optimization problems, which require large numbers of grid points, can be resolved with a gradient-based approach.

  1. Large-scale optimization-based non-negative computational framework for diffusion equations: Parallel implementation and performance studies

    DOE PAGES

    Chang, Justin; Karra, Satish; Nakshatrala, Kalyana B.

    2016-07-26

    It is well-known that the standard Galerkin formulation, which is often the formulation of choice under the finite element method for solving self-adjoint diffusion equations, does not meet maximum principles and the non-negative constraint for anisotropic diffusion equations. Recently, optimization-based methodologies that satisfy maximum principles and the non-negative constraint for steady-state and transient diffusion-type equations have been proposed. To date, these methodologies have been tested only on small-scale academic problems. The purpose of this paper is to systematically study the performance of the non-negative methodology in the context of high performance computing (HPC). PETSc and TAO libraries are, respectively, used for the parallel environment and optimization solvers. For large-scale problems, it is important for computational scientists to understand the computational performance of current algorithms available in these scientific libraries. The numerical experiments are conducted on the state-of-the-art HPC systems, and a single-core performance model is used to better characterize the efficiency of the solvers. Furthermore, our studies indicate that the proposed non-negative computational framework for diffusion-type equations exhibits excellent strong scaling for real-world large-scale problems.
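
    The essence of the non-negative methodology can be sketched in a few lines: replace the plain linear solve K u = f by a bound-constrained optimization enforcing u >= 0. A 1D Galerkin stiffness matrix and SciPy's bounded least squares stand in for the anisotropic problems and the PETSc/TAO solvers used in the paper.

```python
import numpy as np
from scipy.optimize import lsq_linear

n, h = 50, 1.0 / 51
# 1D Laplacian stiffness matrix (Dirichlet boundaries)
K = (np.diag(np.full(n, 2.0)) + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1)) / h
f = np.sin(np.linspace(0, 3 * np.pi, n)) * h     # a source term that changes sign

u_plain = np.linalg.solve(K, f)                  # may violate u >= 0
res = lsq_linear(K, f, bounds=(0.0, np.inf))     # minimize ||K u - f|| s.t. u >= 0
print("plain solve min:", u_plain.min(), " constrained min:", res.x.min())
```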

  2. Research on transformation and optimization of large scale 3D modeling for real time rendering

    NASA Astrophysics Data System (ADS)

    Yan, Hu; Yang, Yongchao; Zhao, Gang; He, Bin; Shen, Guosheng

    2011-12-01

    During the simulation of a real-time three-dimensional scene, popular modeling software and the real-time rendering platform are not compatible. The common solution is to create the three-dimensional scene model in modeling software and then transform it into the format supported by the rendering platform. Taking a digital campus scene simulation as an example, this paper analyzes and solves the problems of surface loss, texture distortion and loss, and model flicker that arise during the transformation from 3ds Max to MultiGen Creator, and proposes an optimization strategy for the transformed model. The results show that this strategy solves the various problems arising in transformation and speeds up the rendering of the model.

  3. Numerical solution of nonlinear algebraic equations in stiff ODE solving (1986--89)---Quasi-Newton updating for large scale nonlinear systems (1989--90)

    SciTech Connect

    Walker, H.F.

    1990-01-01

    During the 1986--1989 project period, two major areas of research developed, into which most of the work fell: "matrix-free" methods for solving linear systems, by which we mean iterative methods that require only the action of the coefficient matrix on vectors and not the coefficient matrix itself, and Newton-like methods for underdetermined nonlinear systems. In the 1990 project period of the renewal grant, a third major area of research developed: inexact Newton and Newton iterative methods and their applications to large-scale nonlinear systems, especially those arising in discretized problems. An inexact Newton method is any method in which each step reduces the norm of the local linear model of the function of interest. A Newton iterative method is any implementation of Newton's method in which the linear systems that characterize Newton steps (the "Newton equations") are solved only approximately using an iterative linear solver. Newton iterative methods are properly considered special cases of inexact Newton methods. We describe the work in these areas and in other areas in this paper.
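
    A minimal Newton-GMRES sketch makes both ideas concrete: the Jacobian is applied matrix-free through finite-difference products, and each Newton equation is solved only to a loose forcing tolerance. The toy nonlinear system is invented, and the rtol keyword assumes a recent SciPy.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def newton_gmres(F, x0, eta=1e-2, tol=1e-8, maxit=50):
    x = x0.astype(float)
    for _ in range(maxit):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        eps = 1e-7
        # matrix-free Jacobian action via a finite-difference directional derivative
        Jv = LinearOperator((x.size, x.size), dtype=float,
                            matvec=lambda v: (F(x + eps * v) - Fx) / eps)
        dx, _ = gmres(Jv, -Fx, rtol=eta)   # inexact: loose forcing term eta
        x = x + dx
    return x

# toy nonlinear system: x_i^2 + weak coupling = 1
F = lambda x: x**2 + 0.1 * np.roll(x, 1) - 1.0
print(newton_gmres(F, np.full(10, 0.5)))
```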

  4. Weighted modularity optimization for crisp and fuzzy community detection in large-scale networks

    NASA Astrophysics Data System (ADS)

    Cao, Jie; Bu, Zhan; Gao, Guangliang; Tao, Haicheng

    2016-11-01

    Community detection is a classic and very difficult task in the field of complex network analysis, principally for its applications in domains such as social or biological networks analysis. One of the most widely used technologies for community detection in networks is the maximization of the quality function known as modularity. However, existing work has proved that modularity maximization algorithms for community detection may fail to resolve communities in small size. Here we present a new community detection method, which is able to find crisp and fuzzy communities in undirected and unweighted networks by maximizing weighted modularity. The algorithm derives new edge weights using the cosine similarity in order to go around the resolution limit problem. Then a new local moving heuristic based on weighted modularity optimization is proposed to cluster the updated network. Finally, the set of potentially attractive clusters for each node is computed, to further uncover the crisply fuzzy partition of the network. We give demonstrative applications of the algorithm to a set of synthetic benchmark networks and six real-world networks and find that it outperforms the current state of the art proposals (even those aimed at finding overlapping communities) in terms of quality and scalability.

  5. Large scale optimization of beam weights under dose-volume restrictions.

    PubMed

    Langer, M; Brown, R; Urie, M; Leong, J; Stracher, M; Shapiro, J

    1990-04-01

    The problem of choosing weights for beams in a multifield plan which maximizes tumor dose under conditions that recognize the volume dependence of organ tolerance to radiation is considered, and its solution described. Structures are modelled as collections of discrete points, and the weighting problem described as a combinatorial linear program (LP). The combinatorial LP is solved as a mixed 0/1 integer program with appropriate restrictions on normal tissue dose. The method is illustrated through the assignment of weights to a set of 10 beams incident on a pelvic target. Dose-volume restrictions are placed on surrounding bowel, bladder, and rectum, and a limit placed on tumor dose inhomogeneity. Different tolerance restrictions are examined, so that the sensitivity of the target dose to changes in the normal tissue constraints may be explored. It is shown that the distributions obtained satisfy the posed constraints. The technique permits formal solution of the optimization problem, in a time short enough to meet the needs of treatment planners. PMID:2323977
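
    The LP core of the weighting problem is easy to sketch: maximize the minimum tumor-point dose subject to upper bounds on normal-tissue point doses. The full method is a mixed 0/1 integer program that lets a chosen fraction of organ points exceed their bound; the dose matrices and the 45 Gy limit below are placeholders.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(6)
n_beams = 10
D_tumor = rng.uniform(0.5, 1.0, (40, n_beams))   # dose per unit weight at tumor points
D_organ = rng.uniform(0.0, 0.6, (60, n_beams))   # dose at bowel/bladder/rectum points

# Variables: beam weights w (>= 0) and t = minimum tumor dose; maximize t.
c = np.r_[np.zeros(n_beams), -1.0]
A_ub = np.vstack([
    np.hstack([-D_tumor, np.ones((40, 1))]),     # t <= D_tumor @ w at every tumor point
    np.hstack([D_organ, np.zeros((60, 1))]),     # D_organ @ w <= organ limit
])
b_ub = np.r_[np.zeros(40), np.full(60, 45.0)]    # assumed 45 Gy organ limit
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * n_beams + [(None, None)])
print("min tumor dose achieved:", -res.fun)
```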

  6. Optimization of culture media for large-scale lutein production by heterotrophic Chlorella vulgaris.

    PubMed

    Jeon, Jin Young; Kwon, Ji-Sue; Kang, Soon Tae; Kim, Bo-Ra; Jung, Yuchul; Han, Jae Gap; Park, Joon Hyun; Hwang, Jae Kwan

    2014-01-01

    Lutein is a carotenoid with a purported role in protecting eyes from oxidative stress, particularly the high-energy photons of blue light. Statistical optimization was performed on growth media that support a higher production of lutein by heterotrophically cultivated Chlorella vulgaris. The effect of media composition on lutein production by C. vulgaris was examined using fractional factorial design (FFD) and central composite design (CCD). The results indicated that the presence of magnesium sulfate, EDTA-2Na, and trace metal solution significantly affected lutein production. The optimum concentrations for lutein production were found to be 0.34 g/L, 0.06 g/L, and 0.4 mL/L for MgSO4·7H2O, EDTA-2Na, and trace metal solution, respectively. These values were validated using a 5-L jar fermenter. Lutein concentration was increased by almost 80% (139.64 ± 12.88 mg/L to 252.75 ± 12.92 mg/L) after 4 days. Moreover, the lutein concentration was not reduced as the cultivation was scaled up to 25,000 L (260.55 ± 3.23 mg/L) and 240,000 L (263.13 ± 2.72 mg/L). These observations suggest C. vulgaris as a potential lutein source. PMID:24550199

  7. Hydro-economic Modeling: Reducing the Gap between Large Scale Simulation and Optimization Models

    NASA Astrophysics Data System (ADS)

    Forni, L.; Medellin-Azuara, J.; Purkey, D.; Joyce, B. A.; Sieber, J.; Howitt, R.

    2012-12-01

    The integration of hydrological and socio-economic components into hydro-economic models has become essential for water resources policy and planning analysis. In this study we integrate the economic value of water in irrigated agricultural production using SWAP (a StateWide Agricultural Production model for California) and WEAP (Water Evaluation and Planning System), a climate-driven hydrological model. The integration of the models is performed using a step-function approximation of water demand curves from SWAP, and by relating the demand tranches to the priority scheme in WEAP. In order to do so, a modified version of SWAP was developed, called SWEAP, that has the Planning Area delimitations of WEAP, a Maximum Entropy Model to estimate evenly sized steps (tranches) of water derived demand functions, and the translation of water tranches into crop land. In addition, a modified version of WEAP was created, called ECONWEAP, with minor structural changes for the incorporation of land decisions from SWEAP and a series of iterations run via an external VBA script. This paper shows the validity of this integration by comparing revenues from WEAP versus ECONWEAP, together with an assessment of the tranche approximation. Results show a significant increase in the resulting agricultural revenues for our case study in California's Central Valley using ECONWEAP while maintaining the same hydrology and regional water flows. These results highlight the gains from allocating water based on its economic value compared to priority-based water allocation systems. Furthermore, this work shows the potential of integrating optimization and simulation-based hydrologic models like ECONWEAP.

  8. A LARGE-SCALE SIMULATION OF INTERNATIONAL MARITIME CONTAINER SHIPPING CONSIDERING OPTIMAL BEHAVIOR OF SHIPPERS AND OCEANGOING CARRIERS

    NASA Astrophysics Data System (ADS)

    Shibasaki, Ryuichi; Watanabe, Tomihiro; Ieda, Hitoshi

    This paper develops a large-scale simulation model of the international maritime container shipping industry considering the optimal behaviors of both shippers and oceangoing carriers, in order to measure the impact of port and international logistics policies for each country, including Japan. Concretely, the authors develop a short-term model (an income maximization model of carriers) including shippers' choice of carrier when maritime cargo shipping demands between ports are given, and a mid-term model (a Nash equilibrium model of shippers and carriers) including shippers' choice of import/export port and hinterland transport route and carriers' profit maximization behavior when cargo shipping demands between regions are given. The developed model is applied to the actual large-scale international maritime container shipping network in Eastern Asia. From a trial calculation based on actual cargo shipping demand, the performance of the model is validated in terms of convergence and reproducibility. The sensitivity of the model output to the actual port policies is also confirmed.

  9. Robust scalable stabilisability conditions for large-scale heterogeneous multi-agent systems with uncertain nonlinear interactions: towards a distributed computing architecture

    NASA Astrophysics Data System (ADS)

    Manfredi, Sabato

    2016-06-01

    Large-scale dynamic systems are becoming highly pervasive, with applications ranging from systems biology, environment monitoring and sensor networks to power systems. They are characterised by high dimensionality, complexity, and uncertainty in the node dynamics/interactions, and they require increasingly computationally demanding methods for their analysis and control design as the network size and node system/interaction complexity increase. It is therefore a challenging problem to find scalable computational methods for the distributed control design of large-scale networks. In this paper, we investigate the robust distributed stabilisation problem of large-scale nonlinear multi-agent systems (briefly, MASs) composed of non-identical (heterogeneous) linear dynamical systems coupled by uncertain nonlinear time-varying interconnections. By employing Lyapunov stability theory and the linear matrix inequality (LMI) technique, new conditions are given for the distributed control design of large-scale MASs that can be easily solved by the MATLAB toolbox. The stabilisability of each node dynamic is a sufficient assumption to design a globally stabilising distributed control. The proposed approach improves some of the existing LMI-based results on MASs by both overcoming their computational limits and extending the applicative scenario to large-scale nonlinear heterogeneous MASs. Additionally, the proposed LMI conditions are further reduced in terms of computational requirement in the case of weakly heterogeneous MASs, which is a common scenario in real applications where the network nodes and links are affected by parameter uncertainties. One of the main advantages of the proposed approach is that it allows moving from a centralised towards a distributed computing architecture, so that the expensive computation workload spent solving LMIs may be shared among processors located at the networked nodes, thus increasing the scalability of the approach than the network

  10. Optimization of large-scale culture conditions for the production of cordycepin with Cordyceps militaris by liquid static culture.

    PubMed

    Kang, Chao; Wen, Ting-Chi; Kang, Ji-Chuan; Meng, Ze-Bing; Li, Guang-Rong; Hyde, Kevin D

    2014-01-01

    Cordycepin is one of the most important bioactive compounds produced by species of Cordyceps sensu lato, but it is hard to produce large amounts of this substance in industrial production. In this work, single factor design, Plackett-Burman design, and central composite design were employed to establish the key factors and identify optimal culture conditions which improved cordycepin production. Using these culture conditions, a maximum production of cordycepin was 2008.48 mg/L for 700 mL working volume in the 1000 mL glass jars and total content of cordycepin reached 1405.94 mg/bottle. This method provides an effective way for increasing the cordycepin production at a large scale. The strategies used in this study could have a wide application in other fermentation processes. PMID:25054182

  11. Optimization of Large-Scale Culture Conditions for the Production of Cordycepin with Cordyceps militaris by Liquid Static Culture

    PubMed Central

    Kang, Chao; Wen, Ting-Chi; Kang, Ji-Chuan; Meng, Ze-Bing; Li, Guang-Rong; Hyde, Kevin D.

    2014-01-01

    Cordycepin is one of the most important bioactive compounds produced by species of Cordyceps sensu lato, but it is hard to produce large amounts of this substance in industrial production. In this work, single factor design, Plackett-Burman design, and central composite design were employed to establish the key factors and identify optimal culture conditions which improved cordycepin production. Using these culture conditions, a maximum production of cordycepin was 2008.48 mg/L for 700 mL working volume in the 1000 mL glass jars and total content of cordycepin reached 1405.94 mg/bottle. This method provides an effective way for increasing the cordycepin production at a large scale. The strategies used in this study could have a wide application in other fermentation processes. PMID:25054182

  12. Optimized circulation and weather type classifications relating large-scale atmospheric conditions to local PM10 concentrations in Bavaria

    NASA Astrophysics Data System (ADS)

    Weitnauer, C.; Beck, C.; Jacobeit, J.

    2013-12-01

    In the last decades the critical increase of the emission of air pollutants like nitrogen dioxide, sulfur oxides and particulate matter, especially in urban areas, has become a problem for the environment as well as human health. Several studies confirm a risk of high-concentration episodes of particulate matter with an aerodynamic diameter < 10 μm (PM10) for the respiratory tract or cardiovascular diseases. Furthermore, it is known that local meteorological and large-scale atmospheric conditions are important influencing factors on local PM10 concentrations. With climate changing rapidly, these connections need to be better understood in order to provide estimates of climate-change-related consequences for air quality management purposes. For quantifying the link between large-scale atmospheric conditions and local PM10 concentrations, circulation- and weather-type classifications are used in a number of studies employing different statistical approaches. Thus far, only a few systematic attempts have been made to modify existing or to develop new weather- and circulation-type classifications in order to improve their ability to resolve local PM10 concentrations. In this contribution, existing weather- and circulation-type classifications, performed on daily 2.5° x 2.5° gridded parameters of the NCEP/NCAR reanalysis data set, are optimized with regard to their discriminative power for local PM10 concentrations at 49 Bavarian measurement sites for the period 1980 to 2011. Most of the PM10 stations are situated in urban areas covering urban background, traffic and industry related pollution regimes. The range of regimes is extended by a few rural background stations. To characterize the correspondence between the PM10 measurements of the different stations by spatial patterns, a regionalization by an s-mode principal component analysis is realized on the high-pass filtered data. The optimization of the circulation- and weather types is implemented using two representative

  13. Assimilation of satellite data to optimize large-scale hydrological model parameters: a case study for the SWOT mission

    NASA Astrophysics Data System (ADS)

    Pedinotti, V.; Boone, A.; Ricci, S.; Biancamaria, S.; Mognard, N.

    2014-11-01

    During the last few decades, satellite measurements have been widely used to study the continental water cycle, especially in regions where in situ measurements are not readily available. The future Surface Water and Ocean Topography (SWOT) satellite mission will deliver maps of water surface elevation (WSE) with an unprecedented resolution and provide observations of rivers wider than 100 m and water surface areas greater than approximately 250 x 250 m over continental surfaces between 78° S and 78° N. This study aims to investigate the potential of SWOT data for parameter optimization for large-scale river routing models. The method consists of applying a data assimilation approach, the extended Kalman filter (EKF) algorithm, to correct the Manning roughness coefficients of the ISBA (Interactions between Soil, Biosphere, and Atmosphere)-TRIP (Total Runoff Integrating Pathways) continental hydrologic system. Parameters such as the Manning coefficient, used within such models to describe water basin characteristics, are generally derived from geomorphological relationships, which leads to significant errors at reach and large scales. The current study focuses on the Niger Basin, a transboundary river. Since the SWOT observations are not available yet, and also to assess the proposed assimilation method, the study is carried out under the framework of an observing system simulation experiment (OSSE). It is assumed that modeling errors are only due to uncertainties in the Manning coefficient. The true Manning coefficients are then supposed to be known and are used to generate synthetic SWOT observations over the period 2002-2003. The impact of the assimilation system on the Niger Basin hydrological cycle is then quantified. The optimization of the Manning coefficient using the EKF algorithm over an 18-month period led to a significant improvement of the river water levels. The relative bias of the water level is globally improved (a 30
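
    In scalar form the EKF update is compact. The sketch below estimates a single Manning coefficient against synthetic observations, with a hypothetical observation operator h(n) standing in for an ISBA-TRIP run over a SWOT overpass; all numbers are assumptions.

```python
import numpy as np

def h(n):
    return 10.0 * n ** 0.6            # hypothetical WSE response to roughness

n_est, P = 0.050, 1e-4                # initial Manning coefficient and its variance
R = 0.01 ** 2                         # assumed SWOT WSE error variance (m^2)
n_true = 0.033

rng = np.random.default_rng(7)
for _ in range(20):                   # one update per synthetic overpass
    y = h(n_true) + rng.normal(0.0, np.sqrt(R))
    eps = 1e-6
    H = (h(n_est + eps) - h(n_est)) / eps      # linearized observation operator
    K = P * H / (H * P * H + R)                # Kalman gain
    n_est += K * (y - h(n_est))                # state update
    P = (1.0 - K * H) * P                      # covariance update

print("estimated Manning n:", round(n_est, 4))
```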

  14. Assessing Impact of Large-Scale Distributed Residential HVAC Control Optimization on Electricity Grid Operation and Renewable Energy Integration

    NASA Astrophysics Data System (ADS)

    Corbin, Charles D.

    Demand management is an important component of the emerging Smart Grid, and a potential solution to the supply-demand imbalance occurring increasingly as intermittent renewable electricity is added to the generation mix. Model predictive control (MPC) has shown great promise for controlling HVAC demand in commercial buildings, making it an ideal solution to this problem. MPC is believed to hold similar promise for residential applications, yet very few examples exist in the literature despite a growing interest in residential demand management. This work explores the potential for residential buildings to shape electric demand at the distribution feeder level in order to reduce peak demand, reduce system ramping, and increase load factor using detailed sub-hourly simulations of thousands of buildings coupled to distribution power flow software. More generally, this work develops a methodology for the directed optimization of residential HVAC operation using a distributed but directed MPC scheme that can be applied to today's programmable thermostat technologies to address the increasing variability in electric supply and demand. Case studies incorporating varying levels of renewable energy generation demonstrate the approach and highlight important considerations for large-scale residential model predictive control.
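
    A single-house, single-horizon slice of the idea can be written as a small convex program: a first-order thermal model, a comfort band, and an objective that penalizes the peak. All parameters are assumptions, and the thesis couples thousands of such problems to distribution power flow.

```python
import cvxpy as cp
import numpy as np

T = 24                                   # hourly horizon
a, b = 0.9, 0.5                          # assumed thermal inertia and HVAC gain (degC/kW)
T_out = 30 + 5 * np.sin(np.linspace(0, 2 * np.pi, T))   # outdoor temperature profile

temp = cp.Variable(T + 1)
p = cp.Variable(T, nonneg=True)          # cooling power per step (kW)

cons = [temp[0] == 24]
for t in range(T):
    # first-order R-C model: relax toward outdoor air, cool with HVAC power
    cons += [temp[t + 1] == a * temp[t] + (1 - a) * T_out[t] - b * p[t]]
cons += [temp[1:] >= 22, temp[1:] <= 26, p <= 5]   # comfort band and unit limit

prob = cp.Problem(cp.Minimize(cp.max(p) + 0.01 * cp.sum(p)), cons)
prob.solve()
print("peak kW:", p.value.max().round(2))
```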

  15. Optimization of a large-scale gene disruption protocol in Dictyostelium and analysis of conserved genes of unknown function

    PubMed Central

    Torija, Patricia; Robles, Alicia; Escalante, Ricardo

    2006-01-01

    Background Development of the post-genomic age in Dictyostelium will require rapid and reliable methods to disrupt genes, allowing the analysis of entire gene families and perhaps a complete knock-out analysis of all the protein-coding genes present in the Dictyostelium genome. Results Here we present an optimized protocol based on the previously described construction of gene disruption vectors by in vitro transposition. Our method allows rapid selection of the construct by a simple PCR approach and subsequent sequencing. Disruption constructs were amplified by PCR and the products were directly transformed into Dictyostelium cells. The selection of homologous recombination events was also performed by PCR. We constructed 41 disruption vectors to target genes of unknown function that are highly conserved between Dictyostelium and human but absent from the genomes of S. cerevisiae and S. pombe. 28 genes were successfully disrupted. Conclusion This is the first step towards understanding the function of these conserved genes and demonstrates the ease of undertaking large-scale disruption analysis in Dictyostelium. PMID:16945142

  16. Analysis of the electricity demand of Greece for optimal planning of a large-scale hybrid renewable energy system

    NASA Astrophysics Data System (ADS)

    Tyralis, Hristos; Karakatsanis, Georgios; Tzouka, Katerina; Mamassis, Nikos

    2015-04-01

    The Greek electricity system is examined for the period 2002-2014. The demand load data are analysed at various time scales (hourly, daily, seasonal and annual) and related to the mean daily temperature and the gross domestic product (GDP) of Greece for the same period. The prediction of energy demand, a product of the Greek Independent Power Transmission Operator, is also compared with the demand load. Interesting results are derived about the change in the electricity demand pattern after 2010, a change related to the decrease in GDP during the period 2010-2014. The results of the analysis will be used in the development of an energy forecasting system which will be part of a framework for optimal planning of a large-scale hybrid renewable energy system in which hydropower plays the dominant role. Acknowledgement: This research was funded by the Greek General Secretariat for Research and Technology through the research project Combined REnewable Systems for Sustainable ENergy DevelOpment (CRESSENDO; grant number 5145).

  17. Dynamic multi-swarm particle swarm optimizer using parallel PC cluster systems for global optimization of large-scale multimodal functions

    NASA Astrophysics Data System (ADS)

    Fan, Shu-Kai S.; Chang, Ju-Ming

    2010-05-01

    This article presents a novel parallel multi-swarm optimization (PMSO) algorithm with the aim of enhancing the search ability of standard single-swarm PSOs for global optimization of very large-scale multimodal functions. Different from the existing multi-swarm structures, the multiple swarms work in parallel, and the search space is partitioned evenly and dynamically assigned in a weighted manner via the roulette wheel selection (RWS) mechanism. This parallel, distributed framework of the PMSO algorithm is developed based on a master-slave paradigm, which is implemented on a cluster of PCs using message passing interface (MPI) for information interchange among swarms. The PMSO algorithm handles multiple swarms simultaneously and each swarm performs PSO operations of its own independently. In particular, one swarm is designated for global search and the others are for local search. The first part of the experimental comparison is made among the PMSO, standard PSO, and two state-of-the-art algorithms (CTSS and CLPSO) in terms of various un-rotated and rotated benchmark functions taken from the literature. In the second part, the proposed multi-swarm algorithm is tested on large-scale multimodal benchmark functions up to 300 dimensions. The results of the PMSO algorithm show great promise in solving high-dimensional problems.
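    For reference, each swarm in such a scheme runs the standard PSO velocity and position updates; the toy sketch below runs partitioned swarms sequentially on a hypothetical test function, whereas the PMSO of the article distributes them across MPI ranks in a master-slave layout and reassigns subregions via roulette wheel selection.

        import numpy as np

        def pso_swarm(f, lo, hi, n=20, dim=10, iters=200, w=0.7, c1=1.5, c2=1.5):
            """Standard global-best PSO restricted to the box [lo, hi]^dim."""
            rng = np.random.default_rng()
            x = rng.uniform(lo, hi, (n, dim))
            v = np.zeros((n, dim))
            pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
            g = pbest[np.argmin(pbest_f)]
            for _ in range(iters):
                r1, r2 = rng.random((n, dim)), rng.random((n, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                fx = np.apply_along_axis(f, 1, x)
                better = fx < pbest_f
                pbest[better], pbest_f[better] = x[better], fx[better]
                g = pbest[np.argmin(pbest_f)]
            return g, pbest_f.min()

        sphere = lambda z: float(np.sum(z ** 2))          # hypothetical benchmark
        local = [pso_swarm(sphere, -5 + 2 * i, -3 + 2 * i) for i in range(5)]
        local.append(pso_swarm(sphere, -5, 5))            # one swarm searches globally
        best_x, best_f = min(local, key=lambda t: t[1])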

  18. Experimental validation of computational models for large-scale nonlinear ultrasound simulations in heterogeneous, absorbing fluid media

    NASA Astrophysics Data System (ADS)

    Martin, Elly; Treeby, Bradley E.

    2015-10-01

    To increase the effectiveness of high intensity focused ultrasound (HIFU) treatments, prediction of ultrasound propagation in biological tissues is essential, particularly where bones are present in the field. This requires complex full-wave computational models which account for nonlinearity, absorption, and heterogeneity. These models must be properly validated, but there is a lack of analytical solutions which apply in these conditions. Experimental validation of the models is therefore essential. However, accurate measurement of HIFU fields is not trivial. Our aim is to establish rigorous methods for obtaining reference data sets with which to validate tissue-realistic simulations of ultrasound propagation. Here, we present preliminary measurements which form an initial validation of simulations performed using the k-Wave MATLAB toolbox. Acoustic pressure was measured on a plane in the field of a focused ultrasound transducer in free field conditions to be used as a Dirichlet boundary condition for simulations. Rectangular and wedge-shaped olive oil scatterers were placed in the field and further pressure measurements were made in the far field for comparison with simulations. Good qualitative agreement was observed between the measured and simulated nonlinear pressure fields.

  19. A Numerical Comparison of Barrier and Modified Barrier Methods for Large-Scale Bound-Constrained Optimization

    NASA Technical Reports Server (NTRS)

    Nash, Stephen G.; Polyak, R.; Sofer, Ariela

    1994-01-01

    When a classical barrier method is applied to the solution of a nonlinear programming problem with inequality constraints, the Hessian matrix of the barrier function becomes increasingly ill-conditioned as the solution is approached. As a result, it may be desirable to consider alternative numerical algorithms. We compare the performance of two methods motivated by barrier functions. The first is a stabilized form of the classical barrier method, where a numerically stable approximation to the Newton direction is used when the barrier parameter is small. The second is a modified barrier method where a barrier function is applied to a shifted form of the problem, and the resulting barrier terms are scaled by estimates of the optimal Lagrange multipliers. The condition number of the Hessian matrix of the resulting modified barrier function remains bounded as the solution to the constrained optimization problem is approached. Both of these techniques can be used in the context of a truncated-Newton method, and hence can be applied to large problems, as well as on parallel computers. In this paper, both techniques are applied to problems with bound constraints and we compare their practical behavior.
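    For orientation, the two families being compared have the following textbook forms for a problem min f(x) subject to c_i(x) ≥ 0 (the paper's scaled variant may differ in detail):

        B_\mu(x) = f(x) - \mu \sum_i \ln c_i(x), \qquad
        M_\mu(x, \lambda) = f(x) - \mu \sum_i \lambda_i \ln\!\left(1 + \frac{c_i(x)}{\mu}\right).

    The Hessian of the classical barrier B_\mu becomes ill-conditioned as \mu \to 0 near the boundary, whereas with \lambda held near the optimal multipliers the Hessian of the modified barrier M_\mu remains bounded, which is the property exploited above.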

  20. Multilevel algorithms for nonlinear optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Dennis, J. E., Jr.

    1994-01-01

    Multidisciplinary design optimization (MDO) gives rise to nonlinear optimization problems characterized by a large number of constraints that naturally occur in blocks. We propose a class of multilevel optimization methods motivated by the structure and number of constraints and by the expense of the derivative computations for MDO. The algorithms are an extension to the nonlinear programming problem of the successful class of local Brown-Brent algorithms for nonlinear equations. Our extensions allow the user to partition constraints into arbitrary blocks to fit the application, and they separately process each block and the objective function, restricted to certain subspaces. The methods use trust regions as a globalization strategy, and they have been shown to be globally convergent under reasonable assumptions. The multilevel algorithms can be applied to all classes of MDO formulations. Multilevel algorithms for solving nonlinear systems of equations are a special case of the multilevel optimization methods. In this case, they can be viewed as a trust-region globalization of the Brown-Brent class.

  1. Assimilation of satellite data to optimize large scale hydrological model parameters: a case study for the SWOT mission

    NASA Astrophysics Data System (ADS)

    Pedinotti, V.; Boone, A.; Ricci, S.; Biancamaria, S.; Mognard, N.

    2014-04-01

    During the last few decades, satellite measurements have been widely used to study the continental water cycle, especially in regions where in situ measurements are not readily available. The future Surface Water and Ocean Topography (SWOT) satellite mission will deliver maps of water surface elevation (WSE) with an unprecedented resolution and provide observations of rivers wider than 100 m and water surface areas greater than approximately 250 m × 250 m over continental surfaces between 78° S and 78° N. This study aims to investigate the potential of SWOT data for parameter optimization for large scale river routing models which are typically employed in Land Surface Models (LSM) for global scale applications. The method consists in applying a data assimilation approach, the Extended Kalman Filter (EKF) algorithm, to correct the Manning roughness coefficients of the ISBA-TRIP Continental Hydrologic System. Indeed, parameters such as the Manning coefficient, used within such models to describe water basin characteristics, are generally derived from geomorphological relationships, which might have locally significant errors. The current study focuses on the Niger basin, a trans-boundary river, which is the main source of fresh water for all the riparian countries. In addition, geopolitical issues in this region can restrict the exchange of hydrological data, so that SWOT should help improve this situation by making hydrological data freely available. In a previous study, the model was first evaluated against in situ and satellite-derived data sets within the framework of the international African Monsoon Multi-disciplinary Analysis (AMMA) project. Since the SWOT observations are not available yet and also to assess the proposed assimilation method, the study is carried out under the framework of an Observing System Simulation Experiment (OSSE). It is assumed that modeling errors are only due to uncertainties in the Manning coefficient. The true Manning
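    A minimal sketch of the parameter-estimation EKF step described here, treating the Manning coefficients as the state to be corrected, is given below. The observation operator h (a model run mapping coefficients to WSE at observation points), its finite-difference Jacobian, and all dimensions are hypothetical stand-ins for the actual ISBA-TRIP assimilation machinery.

        import numpy as np

        def ekf_update(n, P, y_obs, h, R, eps=1e-4):
            """One EKF analysis step on a parameter vector n with covariance P."""
            y = h(n)
            # Finite-difference Jacobian H = dh/dn, one column per parameter
            H = np.column_stack([(h(n + eps * e) - y) / eps for e in np.eye(len(n))])
            S = H @ P @ H.T + R                  # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
            return n + K @ (y_obs - y), (np.eye(len(n)) - K @ H) @ P

        # Toy use: two coefficients, three synthetic "observations" generated
        # from assumed true values, mimicking the OSSE setup described above.
        h = lambda n: np.array([n[0] + n[1], 2.0 * n[0], n[1] ** 2])
        n, P, R = np.array([0.03, 0.05]), 0.01 * np.eye(2), 1e-4 * np.eye(3)
        n, P = ekf_update(n, P, h(np.array([0.035, 0.045])), h, R)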

  2. NONLINEAR FORCE-FREE FIELD EXTRAPOLATION OF A CORONAL MAGNETIC FLUX ROPE SUPPORTING A LARGE-SCALE SOLAR FILAMENT FROM A PHOTOSPHERIC VECTOR MAGNETOGRAM

    SciTech Connect

    Jiang, Chaowei; Wu, S. T.; Hu, Qiang; Feng, Xueshang E-mail: wus@uah.edu E-mail: fengx@spaceweather.ac.cn

    2014-05-10

    Solar filaments are commonly thought to be supported in magnetic dips, in particular, in those of magnetic flux ropes (FRs). In this Letter, based on the observed photospheric vector magnetogram, we implement a nonlinear force-free field (NLFFF) extrapolation of a coronal magnetic FR that supports a large-scale intermediate filament between an active region and a weak polarity region. This result is a first, in the sense that current NLFFF extrapolations including the presence of FRs are limited to relatively small-scale filaments that are close to sunspots and along main polarity inversion lines (PILs) with strong transverse field and magnetic shear, and the existence of an FR is usually predictable. In contrast, the present filament lies along the weak-field region (photospheric field strength ≲ 100 G), where the PIL is very fragmented due to small parasitic polarities on both sides of the PIL and the transverse field has a low signal-to-noise ratio. Thus, extrapolating a large-scale FR in such a case represents a far more difficult challenge. We demonstrate that our CESE-MHD-NLFFF code is sufficient for the challenge. The numerically reproduced magnetic dips of the extrapolated FR match observations of the filament and its barbs very well, which strongly supports the FR-dip model for filaments. The filament is stably sustained because the FR is weakly twisted and strongly confined by the overlying closed arcades.

  3. Nonlinear Force-free Field Extrapolation of a Coronal Magnetic Flux Rope Supporting a Large-scale Solar Filament from a Photospheric Vector Magnetogram

    NASA Astrophysics Data System (ADS)

    Jiang, Chaowei; Wu, S. T.; Feng, Xueshang; Hu, Qiang

    2014-05-01

    Solar filaments are commonly thought to be supported in magnetic dips, in particular, in those of magnetic flux ropes (FRs). In this Letter, based on the observed photospheric vector magnetogram, we implement a nonlinear force-free field (NLFFF) extrapolation of a coronal magnetic FR that supports a large-scale intermediate filament between an active region and a weak polarity region. This result is a first, in the sense that current NLFFF extrapolations including the presence of FRs are limited to relatively small-scale filaments that are close to sunspots and along main polarity inversion lines (PILs) with strong transverse field and magnetic shear, and the existence of an FR is usually predictable. In contrast, the present filament lies along the weak-field region (photospheric field strength ≲ 100 G), where the PIL is very fragmented due to small parasitic polarities on both sides of the PIL and the transverse field has a low signal-to-noise ratio. Thus, extrapolating a large-scale FR in such a case represents a far more difficult challenge. We demonstrate that our CESE-MHD-NLFFF code is sufficient for the challenge. The numerically reproduced magnetic dips of the extrapolated FR match observations of the filament and its barbs very well, which strongly supports the FR-dip model for filaments. The filament is stably sustained because the FR is weakly twisted and strongly confined by the overlying closed arcades.

  4. Optimization of a Fluorescence-Based Assay for Large-Scale Drug Screening against Babesia and Theileria Parasites

    PubMed Central

    Terkawi, Mohamed Alaa; Youssef, Mohamed Ahmed; El Said, El Said El Shirbini; Elsayed, Gehad; El-Khodery, Sabry; El-Ashker, Maged; Elsify, Ahmed; Omar, Mosaab; Salama, Akram; Yokoyama, Naoaki; Igarashi, Ikuo

    2015-01-01

    A rapid and accurate assay for evaluating antibabesial drugs on a large scale is required for the discovery of novel chemotherapeutic agents against Babesia parasites. In the current study, we evaluated the usefulness of a fluorescence-based assay for determining the efficacies of antibabesial compounds against bovine and equine hemoparasites in in vitro cultures. Three different hematocrits (HCTs; 2.5%, 5%, and 10%) were used without daily replacement of the medium. The results of a high-throughput screening assay revealed that the best HCT was 2.5% for bovine Babesia parasites and 5% for equine Babesia and Theileria parasites. The IC50 values of diminazene aceturate obtained by fluorescence and microscopy did not differ significantly. Likewise, the IC50 values of luteolin, pyronaridine tetraphosphate, nimbolide, gedunin, and enoxacin did not differ between the two methods. In conclusion, our fluorescence-based assay uses low HCT and does not require daily replacement of culture medium, making it highly suitable for in vitro large-scale drug screening against Babesia and Theileria parasites that infect cattle and horses. PMID:25915529

  5. Optimization of a Fluorescence-Based Assay for Large-Scale Drug Screening against Babesia and Theileria Parasites.

    PubMed

    Rizk, Mohamed Abdo; El-Sayed, Shimaa Abd El-Salam; Terkawi, Mohamed Alaa; Youssef, Mohamed Ahmed; El Said, El Said El Shirbini; Elsayed, Gehad; El-Khodery, Sabry; El-Ashker, Maged; Elsify, Ahmed; Omar, Mosaab; Salama, Akram; Yokoyama, Naoaki; Igarashi, Ikuo

    2015-01-01

    A rapid and accurate assay for evaluating antibabesial drugs on a large scale is required for the discovery of novel chemotherapeutic agents against Babesia parasites. In the current study, we evaluated the usefulness of a fluorescence-based assay for determining the efficacies of antibabesial compounds against bovine and equine hemoparasites in in vitro cultures. Three different hematocrits (HCTs; 2.5%, 5%, and 10%) were used without daily replacement of the medium. The results of a high-throughput screening assay revealed that the best HCT was 2.5% for bovine Babesia parasites and 5% for equine Babesia and Theileria parasites. The IC50 values of diminazene aceturate obtained by fluorescence and microscopy did not differ significantly. Likewise, the IC50 values of luteolin, pyronaridine tetraphosphate, nimbolide, gedunin, and enoxacin did not differ between the two methods. In conclusion, our fluorescence-based assay uses low HCT and does not require daily replacement of culture medium, making it highly suitable for in vitro large-scale drug screening against Babesia and Theileria parasites that infect cattle and horses. PMID:25915529

  6. Final Report on DOE Project entitled Dynamic Optimized Advanced Scheduling of Bandwidth Demands for Large-Scale Science Applications

    SciTech Connect

    Ramamurthy, Byravamurthy

    2014-05-05

    In this project, we developed scheduling frameworks and algorithms for dynamic bandwidth demands of large-scale science applications. Apart from theoretical approaches such as Integer Linear Programming, Tabu Search, and Genetic Algorithm heuristics, we utilized practical data from the ESnet OSCARS project (from our DOE lab partners) to conduct realistic simulations of our approaches. We disseminated our work through conference paper presentations, journal papers, and a book chapter. In this project we addressed the problem of scheduling lightpaths over optical wavelength division multiplexed (WDM) networks, publishing several conference and journal papers on this topic. We also addressed the problems of joint allocation of computing, storage, and networking resources in Grid/Cloud networks and proposed energy-efficient mechanisms for operating optical WDM networks.

  7. Large-scale tracking and classification for automatic analysis of cell migration and proliferation, and experimental optimization of high-throughput screens of neuroblastoma cells.

    PubMed

    Harder, Nathalie; Batra, Richa; Diessl, Nicolle; Gogolin, Sina; Eils, Roland; Westermann, Frank; König, Rainer; Rohr, Karl

    2015-06-01

    Computational approaches for automatic analysis of image-based high-throughput and high-content screens are gaining increased importance to cope with the large amounts of data generated by automated microscopy systems. Typically, automatic image analysis is used to extract phenotypic information once all images of a screen have been acquired. However, image analysis is also important in earlier stages of large-scale experiments, in particular to support and accelerate the tedious and time-consuming optimization of the experimental conditions and technical settings. Here we present a novel approach for automatic, large-scale analysis and experimental optimization with application to a screen on neuroblastoma cell lines. Our approach consists of cell segmentation, tracking, feature extraction, classification, and model-based error correction. The approach can be used for experimental optimization by extracting quantitative information which allows experimentalists to optimally choose and to verify the experimental parameters. This involves systematically studying the global cell movement and proliferation behavior. Moreover, we performed a comprehensive phenotypic analysis of a large-scale neuroblastoma screen including the detection of rare division events such as multi-polar divisions. Major challenges of the analyzed high-throughput data are the relatively low spatio-temporal resolution in conjunction with densely growing cells, as well as the high variability of the data. To account for the data variability we optimized feature extraction and classification, and introduced a gray value normalization technique as well as a novel approach for automatic model-based correction of classification errors. In total, we analyzed 4,400 real image sequences, covering observation periods of around 120 h each. We performed an extensive quantitative evaluation, which showed that our approach yields high accuracies of 92.2% for segmentation, 98.2% for tracking, and 86.5% for

  8. Understanding Uncertainties in Non-Linear Population Trajectories: A Bayesian Semi-Parametric Hierarchical Approach to Large-Scale Surveys of Coral Cover

    PubMed Central

    Vercelloni, Julie; Caley, M. Julian; Kayal, Mohsen; Low-Choy, Samantha; Mengersen, Kerrie

    2014-01-01

    Recently, attempts to improve decision making in species management have focussed on uncertainties associated with modelling temporal fluctuations in populations. Reducing model uncertainty is challenging; while larger samples improve estimation of species trajectories and reduce statistical errors, they typically amplify variability in observed trajectories. In particular, traditional modelling approaches aimed at estimating population trajectories usually do not account well for nonlinearities and uncertainties associated with multi-scale observations characteristic of large spatio-temporal surveys. We present a Bayesian semi-parametric hierarchical model for simultaneously quantifying uncertainties associated with model structure and parameters, and scale-specific variability over time. We estimate uncertainty across a four-tiered spatial hierarchy of coral cover from the Great Barrier Reef. Coral variability is well described; however, our results show that, in the absence of additional model specifications, conclusions regarding coral trajectories become highly uncertain when considering multiple reefs, suggesting that management should focus more at the scale of individual reefs. The approach presented facilitates the description and estimation of population trajectories and associated uncertainties when variability cannot be attributed to specific causes and origins. We argue that our model can unlock value contained in large-scale datasets, provide guidance for understanding sources of uncertainty, and support better informed decision making. PMID:25364915

  9. A Framework for Parallel Nonlinear Optimization by Partitioning Localized Constraints

    SciTech Connect

    Xu, You; Chen, Yixin

    2008-06-28

    We present a novel parallel framework for solving large-scale continuous nonlinear optimization problems based on constraint partitioning. The framework distributes constraints and variables to parallel processors and uses an existing solver to handle the partitioned subproblems. In contrast to most previous decomposition methods that require either separability or convexity of constraints, our approach is based on a new constraint partitioning theory and can handle nonconvex problems with inseparable global constraints. We also propose a hypergraph partitioning method to recognize the problem structure. Experimental results show that the proposed parallel algorithm can efficiently solve some difficult test cases.
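    The flavor of such a decomposition can be suggested with a toy two-block example; the penalty-based coordination loop below is only a stand-in under assumed objectives and one shared variable, not the authors' constraint-partitioning theory or their hypergraph-based structure recognition.

        import numpy as np
        from scipy.optimize import minimize

        def solve_block(obj, cons, x0, z, rho):
            """Solve one subproblem; x[0] is shared and penalized toward z."""
            f = lambda x: obj(x) + rho * (x[0] - z) ** 2
            return minimize(f, x0, constraints=cons, method="SLSQP").x

        obj1 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2   # hypothetical block objectives
        obj2 = lambda x: (x[0] + 2.0) ** 2 + x[1] ** 2
        cons = [{"type": "ineq", "fun": lambda x: 4.0 - np.sum(x ** 2)}]  # local constraint

        z, rho = 0.0, 10.0
        for _ in range(20):                              # simple coordination loop
            x1 = solve_block(obj1, cons, np.zeros(2), z, rho)
            x2 = solve_block(obj2, cons, np.zeros(2), z, rho)
            z = 0.5 * (x1[0] + x2[0])                    # consensus on shared variable

    In the paper's framework the two solves would run on separate processors, with the coupling handled by its partitioning theory rather than a fixed quadratic penalty.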

  10. Large scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Doolin, B. F.

    1975-01-01

    Classes of large scale dynamic systems were discussed in the context of modern control theory. Specific examples discussed were in the technical fields of aeronautics, water resources and electric power.

  11. A Large Scale (N=400) Investigation of Gray Matter Differences in Schizophrenia Using Optimized Voxel-based Morphometry

    PubMed Central

    Meda, Shashwath A.; Giuliani, Nicole R.; Calhoun, Vince D.; Jagannathan, Kanchana; Schretlen, David J.; Pulver, Anne; Cascella, Nicola; Keshavan, Matcheri; Kates, Wendy; Buchanan, Robert; Sharma, Tonmoy; Pearlson, Godfrey D.

    2008-01-01

    Background Many studies have employed voxel-based morphometry (VBM) of MRI images as an automated method of investigating cortical gray matter differences in schizophrenia. However, results from these studies vary widely, likely due to different methodological or statistical approaches. Objective To use VBM to investigate gray matter differences in schizophrenia in a sample significantly larger than any published to date, and to increase statistical power sufficiently to reveal differences missed in smaller analyses. Methods Magnetic resonance whole brain images were acquired from four geographic sites, all using the same model 1.5T scanner and software version, and combined to form a sample of 200 patients with both first episode and chronic schizophrenia and 200 healthy controls, matched for age, gender and scanner location. Gray matter concentration was assessed and compared using optimized VBM. Results Compared to the healthy controls, schizophrenia patients showed significantly less gray matter concentration in multiple cortical and subcortical regions, some previously unreported. Overall, we found lower concentrations of gray matter in regions identified in prior studies, most of which reported only subsets of the affected areas. Conclusions Gray matter differences in schizophrenia are most comprehensively elucidated using a large, diverse and representative sample. PMID:18378428

  12. Large-scale sequential quadratic programming algorithms

    SciTech Connect

    Eldersveld, S.K.

    1992-09-01

    The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.
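    In standard SQP notation (not necessarily the exact formulation of this work), each iteration solves the quadratic subproblem

        \min_d \; g_k^\top d + \tfrac{1}{2}\, d^\top H_k d
        \quad \text{s.t.} \quad c_k + J_k d = 0,

    and with a null-space basis Z_k satisfying J_k Z_k = 0 the step splits as d = Y_k p_Y + Z_k p_Z, so that only the reduced Hessian Z_k^\top H_k Z_k, the quantity approximated by the quasi-Newton update above, needs to be stored.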

  13. Large Scale Computing

    NASA Astrophysics Data System (ADS)

    Capiluppi, Paolo

    2005-04-01

    Large Scale Computing is acquiring an important role in the field of data analysis and treatment for many Sciences and also for some Social activities. The present paper discusses the characteristics of Computing when it becomes "Large Scale" and the current state of the art for some particular applications needing such large, distributed resources and organization. High Energy Particle Physics (HEP) Experiments are discussed in this respect; in particular the Large Hadron Collider (LHC) Experiments are analyzed. The Computing Models of LHC Experiments represent the current prototype implementation of Large Scale Computing and describe the level of maturity of the possible deployment solutions. Some of the most recent results from performance and functionality measurements of the LHC Experiments' testing are discussed.

  14. Co-optimizing Generation and Transmission Expansion with Wind Power in Large-Scale Power Grids: Implementation in the US Eastern Interconnection

    DOE PAGESBeta

    You, Shutang; Hadley, Stanton W.; Shankar, Mallikarjun; Liu, Yilu

    2016-01-12

    This paper studies the generation and transmission expansion co-optimization problem with a high wind power penetration rate in the US Eastern Interconnection (EI) power grid. The generation and transmission expansion problem for the EI system is modeled as a mixed-integer programming (MIP) problem. The paper also analyzes a time series generation method to capture the variation and correlation of both load and wind power across regions. The obtained series can be easily introduced into the expansion planning problem and then solved through existing MIP solvers. Simulation results show that the proposed planning model and series generation method can improve the expansion result significantly by modeling more detailed information of wind and load variation among regions in the US EI system. Moreover, the improved expansion plan that combines generation and transmission will aid system planners and policy makers in maximizing social welfare in large-scale power grids.

  15. Solving nonlinear equality constrained multiobjective optimization problems using neural networks.

    PubMed

    Mestari, Mohammed; Benzirar, Mohammed; Saber, Nadia; Khouil, Meryem

    2015-10-01

    This paper develops a neural network architecture and a new processing method for solving, in real time, the nonlinear equality constrained multiobjective optimization problem (NECMOP), where several nonlinear objective functions must be optimized in a conflicting situation. In this processing method, the NECMOP is converted to an equivalent scalar optimization problem (SOP). The SOP is then decomposed into several separable subproblems processable in parallel and in a reasonable time by multiplexing switched capacitor circuits. The approach we propose makes use of a decomposition-coordination principle that allows nonlinearity to be treated at a local level and where coordination is achieved through the use of Lagrange multipliers. The modularity and the regularity of the neural network architecture herein proposed make it suitable for very large scale integration implementation. An application to the resolution of a physical problem is given to show that the approach used here possesses some advantages from an algorithmic point of view, and provides resolution processes often simpler than the usual techniques. PMID:25647664
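    As a plausible illustration of the conversion step (the paper may use a different scalarization), a weighted sum turns the m conflicting objectives into a single equivalent SOP:

        \min_x \; \sum_{i=1}^{m} w_i f_i(x)
        \quad \text{s.t.} \quad h(x) = 0, \qquad w_i \ge 0, \quad \sum_{i=1}^{m} w_i = 1.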

  16. Numerical solution of nonlinear algebraic equations in stiff ODE solving (1986-89) - Quasi-Newton updating for large scale nonlinear systems (1989-90). Final report, 1986-1990

    SciTech Connect

    Walker, H.F.

    1990-12-31

    During the 1986-1989 project period, two major areas of research developed into which most of the work fell: "matrix-free" methods for solving linear systems, by which we mean iterative methods that require only the action of the coefficient matrix on vectors and not the coefficient matrix itself, and Newton-like methods for underdetermined nonlinear systems. In the 1990 project period of the renewal grant, a third major area of research developed: inexact Newton and Newton iterative methods and their applications to large-scale nonlinear systems, especially those arising in discretized problems. An inexact Newton method is any method in which each step reduces the norm of the local linear model of the function of interest. A Newton iterative method is any implementation of Newton's method in which the linear systems that characterize Newton steps (the "Newton equations") are solved only approximately using an iterative linear solver. Newton iterative methods are properly considered special cases of inexact Newton methods. We describe the work in these areas and in other areas in this paper.
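    In the usual notation for a nonlinear system F(x) = 0, the inexact Newton condition described above requires each step s_k to satisfy

        \| F(x_k) + F'(x_k)\, s_k \| \le \eta_k \, \| F(x_k) \|, \qquad 0 \le \eta_k < 1,

    with x_{k+1} = x_k + s_k; a Newton iterative method realizes this by stopping the iterative linear solver on the Newton equations once the residual drops below the forcing term \eta_k \| F(x_k) \|.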

  17. Large-Scale Disasters

    NASA Astrophysics Data System (ADS)

    Gad-El-Hak, Mohamed

    "Extreme" events - including climatic events, such as hurricanes, tornadoes, and drought - can cause massive disruption to society, including large death tolls and property damage in the billions of dollars. Events in recent years have shown the importance of being prepared and that countries need to work together to help alleviate the resulting pain and suffering. This volume presents a review of the broad research field of large-scale disasters. It establishes a common framework for predicting, controlling and managing both manmade and natural disasters. There is a particular focus on events caused by weather and climate change. Other topics include air pollution, tsunamis, disaster modeling, the use of remote sensing and the logistics of disaster management. It will appeal to scientists, engineers, first responders and health-care professionals, in addition to graduate students and researchers who have an interest in the prediction, prevention or mitigation of large-scale disasters.

  18. Impact of ultrasound on solid-liquid extraction of phenolic compounds from maritime pine sawdust waste. Kinetics, optimization and large scale experiments.

    PubMed

    Meullemiestre, A; Petitcolas, E; Maache-Rezzoug, Z; Chemat, F; Rezzoug, S A

    2016-01-01

    Maritime pine sawdust, a by-product of the wood transformation industry, has been investigated as a potential source of polyphenols, which were extracted by ultrasound-assisted maceration (UAM). UAM was optimized to enhance polyphenol extraction efficiency and reduce extraction time. First, a preliminary study was carried out to optimize the solid/liquid ratio (6 g of dry material per mL) and the particle size (0.26 cm²) by conventional maceration (CVM). Under these conditions, the optimum conditions for polyphenol extraction by UAM, obtained by response surface methodology, were 0.67 W/cm² for the ultrasonic intensity (UI), 40°C for the processing temperature (T) and 43 min for the sonication time (t). UAM was compared with CVM; the results showed that the quantity of polyphenols was improved by 40% (342.4 and 233.5 mg of catechin equivalent per 100 g of dry basis, respectively for UAM and CVM). A multistage cross-current extraction procedure allowed evaluation of the real impact of UAM on the enhancement of solid-liquid extraction. The potential industrialization of this procedure was implemented through a transition from a lab-scale sonicated reactor (3 L) to a large-scale 30 L one. PMID:26384903

  19. Social Emotional Optimization Algorithm for Nonlinear Constrained Optimization Problems

    NASA Astrophysics Data System (ADS)

    Xu, Yuechun; Cui, Zhihua; Zeng, Jianchao

    Nonlinear programming is an important branch of operations research and has been successfully applied to various real-life problems. In this paper, a new approach called the social emotional optimization algorithm (SEOA) is used to solve this problem; it is a new swarm intelligence technique that simulates human behavior guided by emotion. Simulation results show that the social emotional optimization algorithm proposed in this paper is effective and efficient for nonlinear constrained programming problems.

  20. Nonlinear optimization for stochastic simulations.

    SciTech Connect

    Johnson, Michael M.; Yoshimura, Ann S.; Hough, Patricia Diane; Ammerlahn, Heidi R.

    2003-12-01

    This report describes research targeting development of stochastic optimization algorithms and their application to mission-critical optimization problems in which uncertainty arises. The first section of this report covers the enhancement of the Trust Region Parallel Direct Search (TRPDS) algorithm to address stochastic responses and the incorporation of the algorithm into the OPT++ optimization library. The second section describes the Weapons of Mass Destruction Decision Analysis Center (WMD-DAC) suite of systems analysis tools and motivates the use of stochastic optimization techniques in such non-deterministic simulations. The third section details a batch programming interface designed to facilitate criteria-based or algorithm-driven execution of system-of-system simulations. The fourth section outlines the use of the enhanced OPT++ library and batch execution mechanism to perform systems analysis and technology trade-off studies in the WMD detection and response problem domain.

  1. Study of hybrid methods for approximating the Edgeworth-Pareto hull in nonlinear multicriteria optimization problems

    NASA Astrophysics Data System (ADS)

    Berezkin, V. E.; Lotov, A. V.; Lotova, E. A.

    2014-06-01

    Methods for approximating the Edgeworth-Pareto hull (EPH) of the set of feasible criteria vectors in nonlinear multicriteria optimization problems are examined. The relative efficiency of two EPH approximation methods based on classical methods of searching for local extrema of convolutions of criteria is experimentally studied for a large-scale applied problem (with several hundred variables). A hybrid EPH approximation method combining classical and genetic approximation methods is considered.

  2. Particle swarm optimization for complex nonlinear optimization problems

    NASA Astrophysics Data System (ADS)

    Alexandridis, Alex; Famelis, Ioannis Th.; Tsitouras, Charalambos

    2016-06-01

    This work presents the application of a technique belonging to evolutionary computation, namely particle swarm optimization (PSO), to complex nonlinear optimization problems. To be more specific, a PSO optimizer is set up and applied to the derivation of Runge-Kutta pairs for the numerical solution of initial value problems. The effect of critical PSO operational parameters on the performance of the proposed scheme is thoroughly investigated.

  3. Optimization of nonlinear aeroelastic tailoring criteria

    NASA Technical Reports Server (NTRS)

    Abdi, F.; Ide, H.; Shankar, V. J.; Sobieszczanski-Sobieski, J.

    1988-01-01

    A static flexible fighter aircraft wing configuration is presently addressed by a multilevel optimization technique, based on both a full-potential concept and a rapid structural optimization program, which can be applied to such aircraft-design problems as maneuver load control, aileron reversal, and lift effectiveness. It is found that nonlinearities are important in the design of an aircraft whose flight envelope encompasses the transonic regime, and that the present structural suboptimization produces a significantly lighter wing by reducing ply thicknesses.

  4. Large-scale inhomogeneities and galaxy statistics

    NASA Technical Reports Server (NTRS)

    Schaeffer, R.; Silk, J.

    1984-01-01

    The density fluctuations associated with the formation of large-scale cosmic pancake-like and filamentary structures are evaluated using the Zel'dovich approximation for the evolution of nonlinear inhomogeneities in the expanding universe. It is shown that the large-scale nonlinear density fluctuations in the galaxy distribution due to pancakes modify the standard scale-invariant correlation function xi(r) at scales comparable to the coherence length of adiabatic fluctuations. The typical contribution of pancakes and filaments to the J3 integral, and more generally to the moments of galaxy counts in a volume of approximately (15-40 h^-1 Mpc)^3, provides a statistical test for the existence of large-scale inhomogeneities. An application to several recent three-dimensional data sets shows that, despite large observational uncertainties over the relevant scales, characteristic features may be present that can be attributed to pancakes in most, but not all, of the various galaxy samples.

  5. Large scale tracking algorithms.

    SciTech Connect

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  6. Large scale traffic simulations

    SciTech Connect

    Nagel, K.; Barrett, C.L.; Rickert, M.

    1997-04-01

    Large scale microscopic (i.e. vehicle-based) traffic simulations pose high demands on computational speed in at least two application areas: (i) real-time traffic forecasting, and (ii) long-term planning applications (where repeated "looping" between the microsimulation and the simulated planning of individual person's behavior is necessary). As a rough number, a real-time simulation of an area such as Los Angeles (ca. 1 million travellers) will need a computational speed of much higher than 1 million "particle" (= vehicle) updates per second. This paper reviews how this problem is approached in different projects and how these approaches are dependent both on the specific questions and on the prospective user community. The approaches reach from highly parallel and vectorizable, single-bit implementations on parallel supercomputers for Statistical Physics questions, via more realistic implementations on coupled workstations, to more complicated driving dynamics implemented again on parallel supercomputers. 45 refs., 9 figs., 1 tab.

  7. Optimization approaches to nonlinear model predictive control

    SciTech Connect

    Biegler, L.T. . Dept. of Chemical Engineering); Rawlings, J.B. . Dept. of Chemical Engineering)

    1991-01-01

    With the development of sophisticated methods for nonlinear programming and powerful computer hardware, it now becomes useful and efficient to formulate and solve nonlinear process control problems through on-line optimization methods. This paper explores and reviews control techniques based on repeated solution of nonlinear programming (NLP) problems. Here several advantages present themselves. These include minimization of readily quantifiable objectives, coordinated and accurate handling of process nonlinearities and interactions, and systematic ways of dealing with process constraints. We motivate this NLP-based approach with small nonlinear examples and present a basic algorithm for optimization-based process control. As can be seen, this approach is a straightforward extension of popular model-predictive controllers (MPCs) that are used for linear systems. The statement of the basic algorithm raises a number of questions regarding stability and robustness of the method, efficiency of the control calculations, incorporation of feedback into the controller and reliable ways of handling process constraints. Each of these will be treated through analysis and/or modification of the basic algorithm. To highlight and support this discussion, several examples are presented and key results are examined and further developed. 74 refs., 11 figs.
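    The repeated NLP at the heart of this strategy takes the generic form (a standard statement rather than the paper's exact one)

        \min_{u_0, \dots, u_{N-1}} \; \sum_{k=0}^{N-1} \ell(x_k, u_k)
        \quad \text{s.t.} \quad x_{k+1} = f(x_k, u_k), \quad g(x_k, u_k) \le 0, \quad x_0 = x(t),

    after which only u_0 is applied to the plant, the horizon is shifted, and the problem is re-solved from the newly measured state, so feedback enters through x(t).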

  8. Optimal singular control for nonlinear semistabilisation

    NASA Astrophysics Data System (ADS)

    L'Afflitto, Andrea; Haddad, Wassim M.

    2016-06-01

    The singular optimal control problem for asymptotic stabilisation has been extensively studied in the literature. In this paper, the optimal singular control problem is extended to address a weaker version of closed-loop stability, namely, semistability, which is of paramount importance for consensus control of network dynamical systems. Three approaches are presented to address the nonlinear semistable singular control problem. Namely, a singular perturbation method is presented to construct a state-feedback singular controller that guarantees closed-loop semistability for nonlinear systems. In this approach, we show that for a non-negative cost-to-go function the minimum cost of a nonlinear semistabilising singular controller is lower than the minimum cost of a singular controller that guarantees asymptotic stability of the closed-loop system. In the second approach, we solve the nonlinear semistable singular control problem by using the cost-to-go function to cancel the singularities in the corresponding Hamilton-Jacobi-Bellman equation. For this case, we show that the minimum value of the singular performance measure is zero. Finally, we provide a framework based on the concepts of state-feedback linearisation and feedback equivalence to solve the singular control problem for semistabilisation of nonlinear dynamical systems. For this approach, we also show that the minimum value of the singular performance measure is zero. Three numerical examples are presented to demonstrate the efficacy of the proposed singular semistabilisation frameworks.
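    For context, the stationary Hamilton-Jacobi-Bellman equation invoked in the second approach has the schematic form

        0 = \min_u \left[ L(x, u) + \frac{\partial V}{\partial x}(x)\, F(x, u) \right],

    and the problem is singular when u enters L and F linearly, so the pointwise minimization fails to determine the control; the paper's device, as stated above, is to use the cost-to-go V to cancel these singular terms.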

  9. Optimal Parametric Feedback Excitation of Nonlinear Oscillators

    NASA Astrophysics Data System (ADS)

    Braun, David J.

    2016-01-01

    An optimal parametric feedback excitation principle is sought, found, and investigated. The principle is shown to provide an adaptive resonance condition that enables unprecedentedly robust movement generation in a large class of oscillatory dynamical systems. Experimental demonstration of the theory is provided by a nonlinear electronic circuit that realizes self-adaptive parametric excitation without model information, signal processing, and control computation. The observed behavior dramatically differs from the one achievable using classical parametric modulation, which is fundamentally limited by uncertainties in model information and nonlinear effects inevitably present in real world applications.

  10. Optimal Parametric Feedback Excitation of Nonlinear Oscillators.

    PubMed

    Braun, David J

    2016-01-29

    An optimal parametric feedback excitation principle is sought, found, and investigated. The principle is shown to provide an adaptive resonance condition that enables unprecedentedly robust movement generation in a large class of oscillatory dynamical systems. Experimental demonstration of the theory is provided by a nonlinear electronic circuit that realizes self-adaptive parametric excitation without model information, signal processing, and control computation. The observed behavior dramatically differs from the one achievable using classical parametric modulation, which is fundamentally limited by uncertainties in model information and nonlinear effects inevitably present in real world applications. PMID:26871336

  11. The role of large-scale eddies in the nonlinear equilibration of a multi-level model of the mid-latitude troposphere

    NASA Astrophysics Data System (ADS)

    Solomon, Amy Beth

    A three-dimensional time-dependent linear stability analysis is used to demonstrate that the equilibrated climate is stable to linear perturbations. These results are contrasted with the results of a one-dimensional stability analysis to show the sensitivity of these results to the treatment of the meridional structure of the eddies. The feedbacks which maintain the static stability are shown to play a significant role in the homogenization of the potential vorticity above the atmospheric boundary layer (ABL). These feedbacks are also shown to couple the dynamics within the ABL with the upper troposphere in a study of the sensitivity of the vertical structure of the large-scale eddies to changes in the radiative equilibrium temperature gradients. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)

  12. Nonlinear Brightness Optimization in Compton Scattering

    DOE PAGESBeta

    Hartemann, Fred V.; Wu, Sheldon S. Q.

    2013-07-26

    In Compton scattering light sources, a laser pulse is scattered by a relativistic electron beam to generate tunable x and gamma rays. Because of the inhomogeneous nature of the incident radiation, the relativistic Lorentz boost of the electrons is modulated by the ponderomotive force during the interaction, leading to intrinsic spectral broadening and brightness limitations. We discuss these effects, along with an optimization strategy to properly balance the laser bandwidth, diffraction, and nonlinear ponderomotive force.

  13. Nonlinear brightness optimization in compton scattering.

    PubMed

    Hartemann, Fred V; Wu, Sheldon S Q

    2013-07-26

    In Compton scattering light sources, a laser pulse is scattered by a relativistic electron beam to generate tunable x and gamma rays. Because of the inhomogeneous nature of the incident radiation, the relativistic Lorentz boost of the electrons is modulated by the ponderomotive force during the interaction, leading to intrinsic spectral broadening and brightness limitations. These effects are discussed, along with an optimization strategy to properly balance the laser bandwidth, diffraction, and nonlinear ponderomotive force. PMID:23931374

  14. Global mantle flow at ultra-high resolution: The competing influence of faulted plate margins, the strength of bending plates, and large-scale, nonlinear flow

    NASA Astrophysics Data System (ADS)

    Alisic, L.; Gurnis, M.; Stadler, G.; Burstedde, C.; Wilcox, L. C.; Ghattas, O.

    2009-12-01

    A full understanding of the dynamics of plate motions requires numerical models with a realistic, nonlinear rheology and a mesh resolution sufficiently high to resolve large variations in viscosity over short length scales. We suspect that resolutions as fine as 1 km locally in global models of the whole mantle and lithosphere are necessary. We use the adaptive mesh mantle convection code Rhea to model convection in the mantle with plates in both regional and global domains. Rhea is a new generation parallel finite element mantle convection code designed to scale to hundreds of thousands of compute cores. It uses forest-of-octree-based adaptive meshes via the p4est library. With Rhea's adaptive capabilities we can create local resolution down to ~ 1 km around plate boundaries, while keeping the mesh at a much coarser resolution away from small features. The global models in this study have approximately 160 million elements, a reduction of ~ 2000x compared to a uniform mesh of the same high resolution. The unprecedented resolution in these global models allows us, for the first time, to resolve viscous dissipation in the bending plate as well as observe the trade-off between this process and the strength of slabs and the resistance of dipping thrust faults. Since plate velocities and 'plateness' are dynamic outcomes of numerical modeling, we must carefully incorporate both the full buoyancy field and the details of all plate boundaries at a fine scale. The global models were constructed with detailed maps of the age of the plates and a thermal model of the seismicity-defined slabs which grades into the more diffuse buoyancy resolved with tomography. In the regional models, the thermal model consists of plates following a halfspace cooling model, and slabs for which buoyancy is conserved at every depth. A composite formulation of Newtonian and non-Newtonian rheology along with yielding is implemented; plate boundaries are modeled as very narrow weak zones. Plate

  15. Traveltime tomography and nonlinear constrained optimization

    SciTech Connect

    Berryman, J.G.

    1988-10-01

    Fermat's principle of least traveltime states that the first arrivals follow ray paths with the smallest overall traveltime from the point of transmission to the point of reception. This principle determines a definite convex set of feasible slowness models - depending only on the traveltime data - for the fully nonlinear traveltime inversion problem. The existence of such a convex set allows us to transform the inversion problem into a nonlinear constrained optimization problem. Fermat's principle also shows that the standard undamped least-squares solution to the inversion problem always produces a slowness model with many ray paths having traveltime shorter than the measured traveltime (an impossibility even if the trial ray paths are not the true ray paths). In a damped least-squares inversion, the damping parameter may be varied to allow efficient location of a slowness model on the feasibility boundary. 13 refs., 1 fig., 1 tab.
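    Concretely, if t_i are the measured first-arrival times and the trial ray paths P_i are held fixed, Fermat's principle yields the feasibility constraints

        \int_{P_i} s(\mathbf{x})\, d\ell \;\ge\; t_i, \qquad i = 1, \dots, N,

    which are linear in the slowness s and therefore define a convex feasible set; varying the damping parameter to land the least-squares model on the boundary of this set is the strategy described above.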

  16. Sensitivity technologies for large scale simulation.

    SciTech Connect

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias; Wilcox, Lucas C.; Hill, Judith C.; Ghattas, Omar; Berggren, Martin Olof; Akcelik, Volkan; Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard

    2005-01-01

    order approximation of the Euler equations and used as a preconditioner. In comparison to other methods, the AD preconditioner showed better convergence behavior. Our ultimate target is to perform shape optimization and hp adaptivity using adjoint formulations in the Premo compressible fluid flow simulator. A mathematical formulation for mixed-level simulation algorithms has been developed where different physics interact at potentially different spatial resolutions in a single domain. To minimize the implementation effort, explicit solution methods can be considered; however, implicit methods are preferred if computational efficiency is of high priority. We present the use of a partial elimination nonlinear solver technique to solve these mixed-level problems and show how these formulations are closely coupled to intrusive optimization approaches and sensitivity analyses. Production codes are typically not designed for sensitivity analysis or large scale optimization. The implementation of our optimization libraries into multiple production simulation codes, in which each code has its own linear algebra interface, becomes an intractable problem. In an attempt to streamline this task, we have developed a standard interface between the numerical algorithm (such as optimization) and the underlying linear algebra. These interfaces (TSFCore and TSFCoreNonlin) have been adopted by the Trilinos framework and the goal is to promote the use of these interfaces especially with new developments. Finally, an adjoint-based a posteriori error estimator has been developed for discontinuous Galerkin discretization of Poisson's equation. The goal is to investigate other ways to leverage the adjoint calculations and we show how the convergence of the forward problem can be improved by adapting the grid using adjoint-based error estimates. Error estimation is usually conducted with continuous adjoints but if discrete adjoints are available it may be possible to reuse the discrete version

  17. Nonlinear simulations to optimize magnetic nanoparticle hyperthermia

    SciTech Connect

    Reeves, Daniel B. Weaver, John B.

    2014-03-10

    Magnetic nanoparticle hyperthermia is an attractive emerging cancer treatment, but the acting microscopic energy deposition mechanisms are not well understood and optimization suffers. We describe several approximate forms for the characteristic time of Néel rotations with varying properties and external influences. We then present stochastic simulations that show agreement between the approximate expressions and the micromagnetic model. The simulations show nonlinear imaginary responses and associated relaxational hysteresis due to the field and frequency dependencies of the magnetization. This suggests that efficient heating is possible by matching fields to particles instead of resorting to maximizing the power of the applied magnetic fields.
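    The simplest of the approximate forms alluded to is the zero-field Néel-Arrhenius expression (textbook form; the simulations above add field and frequency dependence):

        \tau_N \approx \tau_0 \exp\!\left( \frac{K V}{k_B T} \right),

    where \tau_0 \sim 10^{-10}-10^{-9} s is the attempt time, K the anisotropy constant, V the particle core volume, and k_B T the thermal energy; an applied field lowers the effective barrier K V and shortens \tau_N.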

  18. Nonlinear Global Optimization Using Curdling Algorithm

    Energy Science and Technology Software Center (ESTSC)

    1996-03-01

An algorithm for performing curdling optimization, a derivative-free, grid-refinement approach to nonlinear optimization, was developed and implemented in software. This approach overcomes a number of deficiencies in existing approaches. Most notably, it finds extremal regions rather than only single extremal points. The program is interactive and collects information on control parameters and constraints using menus. For up to four dimensions, function convergence is displayed graphically. Because the algorithm does not compute derivatives, gradients or vectors, it is numerically stable. It can find all the roots of a polynomial in one pass. It is an inherently parallel algorithm. Constraints are handled as being initially fuzzy, but become tighter with each iteration.
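
    A toy sketch in the spirit of the grid-refinement idea (not the curdling code itself): evaluate the objective on a grid, keep the few best cells as candidate extremal regions, and re-grid only inside them. The test function and parameters are illustrative.

        import numpy as np

        def grid_refine(f, lo, hi, n=11, rounds=8, keep=3):
            """Derivative-free 1-D minimization by repeated grid refinement,
            tracking several extremal regions rather than a single point."""
            centers = np.array([(lo + hi) / 2.0])
            for r in range(rounds):
                half = (hi - lo) / 2.0 ** (r + 1)          # cells shrink each round
                pts = np.unique(np.concatenate(
                    [np.linspace(c - half, c + half, n) for c in centers]))
                centers = pts[np.argsort(f(pts))[:keep]]   # keep the best cells
            return centers

        f = lambda x: np.sin(3 * x) + 0.1 * x ** 2          # multimodal test function
        print(grid_refine(f, -4.0, 4.0))                    # several low-lying minima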

  19. Inverting magnetic meridian data using nonlinear optimization

    NASA Astrophysics Data System (ADS)

    Connors, Martin; Rostoker, Gordon

    2015-09-01

    A nonlinear optimization algorithm coupled with a model of auroral current systems allows derivation of physical parameters from data and is the basis of a new inversion technique. We refer to this technique as automated forward modeling (AFM), with the variant used here being automated meridian modeling (AMM). AFM is applicable on scales from regional to global, yielding simple and easily understood output, and using only magnetic data with no assumptions about electrodynamic parameters. We have found the most useful output parameters to be the total current and the boundaries of the auroral electrojet on a meridian densely populated with magnetometers, as derived by AMM. Here, we describe application of AFM nonlinear optimization to magnetic data and then describe the use of AMM to study substorms with magnetic data from ground meridian chains as input. AMM inversion results are compared to optical data, results from other inversion methods, and field-aligned current data from AMPERE. AMM yields physical parameters meaningful in describing local electrodynamics and is suitable for ongoing monitoring of activity. The relation of AMM model parameters to equivalent currents is discussed, and the two are found to compare well if the field-aligned currents are far from the inversion meridian.

  20. Nonlinearity Analysis and Parameters Optimization for an Inductive Angle Sensor

    PubMed Central

    Ye, Lin; Yang, Ming; Xu, Liang; Zhuang, Xiaoqi; Dong, Zhaopeng; Li, Shiyang

    2014-01-01

    Using the finite element method (FEM) and particle swarm optimization (PSO), a nonlinearity analysis based on parameter optimization is proposed to design an inductive angle sensor. Due to the structure complexity of the sensor, understanding the influences of structure parameters on the nonlinearity errors is a critical step in designing an effective sensor. Key parameters are selected for the design based on the parameters' effects on the nonlinearity errors. The finite element method and particle swarm optimization are combined for the sensor design to get the minimal nonlinearity error. In the simulation, the nonlinearity error of the optimized sensor is 0.053% in the angle range from −60° to 60°. A prototype sensor is manufactured and measured experimentally, and the experimental nonlinearity error is 0.081% in the angle range from −60° to 60°. PMID:24590353
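
    A compact sketch of the particle swarm loop used in such design studies; the quadratic stand-in below replaces the FEM evaluation of the nonlinearity error, which would be called once per candidate design. Parameter values are illustrative.

        import numpy as np

        rng = np.random.default_rng(1)
        def error(x):                       # stand-in for the FEM nonlinearity error
            return np.sum((x - 0.7) ** 2, axis=1)

        n, dim, w, c1, c2 = 30, 4, 0.7, 1.5, 1.5
        x = rng.uniform(0, 1, (n, dim))     # normalized structure parameters
        v = np.zeros((n, dim))
        pbest, pval = x.copy(), error(x)
        for _ in range(100):
            g = pbest[np.argmin(pval)]      # best design found so far
            v = (w * v + c1 * rng.random((n, dim)) * (pbest - x)
                       + c2 * rng.random((n, dim)) * (g - x))
            x = np.clip(x + v, 0, 1)        # keep parameters inside design bounds
            e = error(x)
            better = e < pval
            pbest[better], pval[better] = x[better], e[better]
        print("best design:", pbest[np.argmin(pval)], "error:", pval.min())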

  1. Galaxy clustering on large scales.

    PubMed

    Efstathiou, G

    1993-06-01

I describe some recent observations of large-scale structure in the galaxy distribution. The best constraints come from two-dimensional galaxy surveys and studies of angular correlation functions. Results from galaxy redshift surveys are much less precise but are consistent with the angular correlations, provided the distortions in mapping between real-space and redshift-space are relatively weak. The galaxy two-point correlation function, rich-cluster two-point correlation function, and galaxy-cluster cross-correlation function are all well described on large scales (≳ 20 h⁻¹ Mpc, where the Hubble constant H0 = 100h km s⁻¹ Mpc⁻¹; 1 pc = 3.09 × 10¹⁶ m) by the power spectrum of an initially scale-invariant, adiabatic, cold-dark-matter Universe with Γ = Ωh ≈ 0.2. I discuss how this fits in with the Cosmic Background Explorer (COBE) satellite detection of large-scale anisotropies in the microwave background radiation and other measures of large-scale structure in the Universe. PMID:11607400

  2. Very Large Scale Integration (VLSI).

    ERIC Educational Resources Information Center

    Yeaman, Andrew R. J.

    Very Large Scale Integration (VLSI), the state-of-the-art production techniques for computer chips, promises such powerful, inexpensive computing that, in the future, people will be able to communicate with computer devices in natural language or even speech. However, before full-scale VLSI implementation can occur, certain salient factors must be…

  3. Matching trajectory optimization and nonlinear tracking control for HALE

    NASA Astrophysics Data System (ADS)

    Lee, Sangjong; Jang, Jieun; Ryu, Hyeok; Lee, Kyun Ho

    2014-11-01

This paper concerns optimal trajectory generation and nonlinear tracking control for the stratospheric airship platform VIA-200. To compensate for the mismatch between the point-mass model of optimal trajectory and the 6-DOF model of the nonlinear tracking problem, a new matching trajectory optimization approach is proposed. The proposed idea reduces the dissimilarity of both problems and reduces the uncertainties in the nonlinear equations of motion for the stratospheric airship. In addition, its refined optimal trajectories yield better results under jet stream conditions during flight. The resultant optimal trajectories of VIA-200 are full three-dimensional ascent flight trajectories reflecting the realistic constraints of flight conditions and airship performance with and without a jet stream. Finally, 6-DOF nonlinear equations of motion are derived, including a moving wind field, and the vectorial backstepping approach is applied. The demonstrated tracking performance shows that the proposed matching optimization method enables the smooth linkage of trajectory optimization to tracking control problems.

  4. Large-scale hydropower system optimization using dynamic programming and object-oriented programming: the case of the Northeast China Power Grid.

    PubMed

    Li, Ji-Qing; Zhang, Yu-Shan; Ji, Chang-Ming; Wang, Ai-Jing; Lund, Jay R

    2013-01-01

    This paper examines long-term optimal operation using dynamic programming for a large hydropower system of 10 reservoirs in Northeast China. Besides considering flow and hydraulic head, the optimization explicitly includes time-varying electricity market prices to maximize benefit. Two techniques are used to reduce the 'curse of dimensionality' of dynamic programming with many reservoirs. Discrete differential dynamic programming (DDDP) reduces the search space and computer memory needed. Object-oriented programming (OOP) and the ability to dynamically allocate and release memory with the C++ language greatly reduces the cumulative effect of computer memory for solving multi-dimensional dynamic programming models. The case study shows that the model can reduce the 'curse of dimensionality' and achieve satisfactory results. PMID:24334896
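
    For scale, a toy backward-recursion dynamic program for a single reservoir with time-varying prices (illustrative data; the DDDP corridor and the C++ memory management described in the paper are what make the ten-reservoir version tractable):

        import numpy as np

        S = np.arange(0, 11)                 # discrete storage levels (volume units)
        T = 12                               # monthly stages
        inflow = [3, 4, 6, 8, 7, 5, 4, 3, 2, 2, 3, 3]
        price = [1.0, 1.1, 0.9, 1.2, 1.3, 1.0, 0.8, 1.0, 1.4, 1.2, 1.1, 1.0]

        V = np.zeros(len(S))                 # value-to-go at the horizon
        for t in reversed(range(T)):
            Vnew = np.full(len(S), -np.inf)
            for i, s in enumerate(S):
                for j, s_next in enumerate(S):
                    release = s + inflow[t] - s_next
                    if release < 0:
                        continue             # cannot release more than is available
                    Vnew[i] = max(Vnew[i], price[t] * release + V[j])
            V = Vnew
        print("optimal value starting from a full reservoir:", V[-1])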

  5. Interpretation of large-scale deviations from the Hubble flow

    NASA Astrophysics Data System (ADS)

    Grinstein, B.; Politzer, H. David; Rey, S.-J.; Wise, Mark B.

    1987-03-01

The theoretical expectation for large-scale streaming velocities relative to the Hubble flow is expressed in terms of statistical correlation functions. Only for objects that trace the mass would these velocities have a simple cosmological interpretation. If some biasing affects the objects' formation, then nonlinear gravitational evolution is essential to predicting the expected large-scale velocities, which also depend on the nature of the biasing.

  6. Guaranteed robustness properties of multivariable nonlinear stochastic optimal regulators

    NASA Technical Reports Server (NTRS)

    Tsitsiklis, J. N.; Athans, M.

    1984-01-01

    The robustness of optimal regulators for nonlinear, deterministic and stochastic, multi-input dynamical systems is studied under the assumption that all state variables can be measured. It is shown that, under mild assumptions, such nonlinear regulators have a guaranteed infinite gain margin; moreover, they have a guaranteed 50 percent gain reduction margin and a 60 degree phase margin, in each feedback channel, provided that the system is linear in the control and the penalty to the control is quadratic, thus extending the well-known properties of LQ regulators to nonlinear optimal designs. These results are also valid for infinite horizon, average cost, stochastic optimal control problems.

  7. Guaranteed robustness properties of multivariable, nonlinear, stochastic optimal regulators

    NASA Technical Reports Server (NTRS)

    Tsitsiklis, J. N.; Athans, M.

    1983-01-01

    The robustness of optimal regulators for nonlinear, deterministic and stochastic, multi-input dynamical systems is studied under the assumption that all state variables can be measured. It is shown that, under mild assumptions, such nonlinear regulators have a guaranteed infinite gain margin; moreover, they have a guaranteed 50 percent gain reduction margin and a 60 degree phase margin, in each feedback channel, provided that the system is linear in the control and the penalty to the control is quadratic, thus extending the well-known properties of LQ regulators to nonlinear optimal designs. These results are also valid for infinite horizon, average cost, stochastic optimal control problems.

  8. On a Highly Nonlinear Self-Obstacle Optimal Control Problem

    SciTech Connect

    Di Donato, Daniela; Mugnai, Dimitri

    2015-10-15

We consider a non-quadratic optimal control problem associated with a nonlinear elliptic variational inequality, where the obstacle is the control itself. We show that, for a fixed desired profile, there exists an optimal solution which is not far from it. Detailed characterizations of the optimal solution are given, also in terms of approximating problems.

  9. Microfluidic large-scale integration.

    PubMed

    Thorsen, Todd; Maerkl, Sebastian J; Quake, Stephen R

    2002-10-18

    We developed high-density microfluidic chips that contain plumbing networks with thousands of micromechanical valves and hundreds of individually addressable chambers. These fluidic devices are analogous to electronic integrated circuits fabricated using large-scale integration. A key component of these networks is the fluidic multiplexor, which is a combinatorial array of binary valve patterns that exponentially increases the processing power of a network by allowing complex fluid manipulations with a minimal number of inputs. We used these integrated microfluidic networks to construct the microfluidic analog of a comparator array and a microfluidic memory storage device whose behavior resembles random-access memory. PMID:12351675

  10. Lyapunov optimal feedback control of a nonlinear inverted pendulum

    NASA Technical Reports Server (NTRS)

    Grantham, W. J.; Anderson, M. J.

    1989-01-01

Lyapunov optimal feedback control is applied to a nonlinear inverted pendulum in which the control torque is constrained to be less than the nonlinear gravity torque in the model. This necessitates a control algorithm which 'rocks' the pendulum out of its potential wells in order to stabilize it at a unique vertical position. Simulation results indicate that a preliminary Lyapunov feedback controller can successfully overcome the nonlinearity and bring almost all trajectories to the target.
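
    A sketch of the torque-limited 'rocking' idea (assumed dynamics and gains, not the paper's controller): an energy-pumping bang-bang law swings the pendulum out of its potential well even though the available torque is below the peak gravity torque, and a linear law catches it near upright.

        import numpy as np

        g, L, m, dt = 9.81, 1.0, 1.0, 0.002
        u_max = 0.5 * m * g * L                  # torque bound below peak gravity torque
        theta, omega = np.pi - 0.1, 0.0          # near hanging down (theta = 0 is upright)

        for _ in range(100000):
            # Energy relative to resting upright (E = 0 at the target state)
            E = 0.5 * m * L**2 * omega**2 + m * g * L * (np.cos(theta) - 1.0)
            if abs(theta) < 0.35:
                u = -20.0 * theta - 5.0 * omega  # linear "catch" law near upright
            else:
                u = -u_max * np.sign(E * omega) if omega else u_max  # pump energy
            u = float(np.clip(u, -u_max, u_max)) # enforce the torque constraint
            omega += ((g / L) * np.sin(theta) + u / (m * L**2)) * dt
            theta = (theta + omega * dt + np.pi) % (2 * np.pi) - np.pi
        print(f"final state: theta = {theta:.3f} rad, omega = {omega:.3f} rad/s")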

  11. Revisiting interferences for measuring and optimizing optical nonlinearities

    NASA Astrophysics Data System (ADS)

    Billard, F.; Béjot, P.; Hertz, E.; Lavorel, B.; Faucher, O.

    2013-07-01

    A method based on optical interferences for measuring optical nonlinearities is presented. In a proof-of-principle experiment, the technique is applied to the experimental determination of the intensity dependence of the photoionization process. It is shown that it can also be used to control and optimize the nonlinear process itself at constant input energy. The presented strategy leads to enhancements that can reach several orders of magnitude for highly nonlinear processes.

  12. Colloquium: Large scale simulations on GPU clusters

    NASA Astrophysics Data System (ADS)

    Bernaschi, Massimo; Bisson, Mauro; Fatica, Massimiliano

    2015-06-01

Graphics processing units (GPU) are currently used as a cost-effective platform for computer simulations and big-data processing. Large scale applications require that multiple GPUs work together, but the efficiency obtained with clusters of GPUs is, at times, sub-optimal because the GPU features are not exploited at their best. We describe how it is possible to achieve an excellent efficiency for applications in statistical mechanics, particle dynamics and network analysis by using suitable memory access patterns and mechanisms like CUDA streams, profiling tools, etc. Similar concepts and techniques may be applied also to other problems like the solution of Partial Differential Equations.

  13. EINSTEIN'S SIGNATURE IN COSMOLOGICAL LARGE-SCALE STRUCTURE

    SciTech Connect

    Bruni, Marco; Hidalgo, Juan Carlos; Wands, David

    2014-10-10

We show how the nonlinearity of general relativity generates a characteristic non-Gaussian signal in cosmological large-scale structure that we calculate at all perturbative orders in a large-scale limit. Newtonian gravity and general relativity provide complementary theoretical frameworks for modeling large-scale structure in ΛCDM cosmology; a relativistic approach is essential to determine initial conditions, which can then be used in Newtonian simulations studying the nonlinear evolution of the matter density. Most inflationary models in the very early universe predict an almost Gaussian distribution for the primordial metric perturbation, ζ. However, we argue that it is the Ricci curvature of comoving-orthogonal spatial hypersurfaces, R, that drives structure formation at large scales. We show how the nonlinear relation between the spatial curvature, R, and the metric perturbation, ζ, translates into a specific non-Gaussian contribution to the initial comoving matter density that we calculate for the simple case of an initially Gaussian ζ. Our analysis shows the nonlinear signature of Einstein's gravity in large-scale structure.

  14. Large scale topography of Io

    NASA Technical Reports Server (NTRS)

    Gaskell, R. W.; Synnott, S. P.

    1987-01-01

    To investigate the large scale topography of the Jovian satellite Io, both limb observations and stereographic techniques applied to landmarks are used. The raw data for this study consists of Voyager 1 images of Io, 800x800 arrays of picture elements each of which can take on 256 possible brightness values. In analyzing this data it was necessary to identify and locate landmarks and limb points on the raw images, remove the image distortions caused by the camera electronics and translate the corrected locations into positions relative to a reference geoid. Minimizing the uncertainty in the corrected locations is crucial to the success of this project. In the highest resolution frames, an error of a tenth of a pixel in image space location can lead to a 300 m error in true location. In the lowest resolution frames, the same error can lead to an uncertainty of several km.

  15. Challenges for Large Scale Simulations

    NASA Astrophysics Data System (ADS)

    Troyer, Matthias

    2010-03-01

    With computational approaches becoming ubiquitous the growing impact of large scale computing on research influences both theoretical and experimental work. I will review a few examples in condensed matter physics and quantum optics, including the impact of computer simulations in the search for supersolidity, thermometry in ultracold quantum gases, and the challenging search for novel phases in strongly correlated electron systems. While only a decade ago such simulations needed the fastest supercomputers, many simulations can now be performed on small workstation clusters or even a laptop: what was previously restricted to a few experts can now potentially be used by many. Only part of the gain in computational capabilities is due to Moore's law and improvement in hardware. Equally impressive is the performance gain due to new algorithms - as I will illustrate using some recently developed algorithms. At the same time modern peta-scale supercomputers offer unprecedented computational power and allow us to tackle new problems and address questions that were impossible to solve numerically only a few years ago. While there is a roadmap for future hardware developments to exascale and beyond, the main challenges are on the algorithmic and software infrastructure side. Among the problems that face the computational physicist are: the development of new algorithms that scale to thousands of cores and beyond, a software infrastructure that lifts code development to a higher level and speeds up the development of new simulation programs for large scale computing machines, tools to analyze the large volume of data obtained from such simulations, and as an emerging field provenance-aware software that aims for reproducibility of the complete computational workflow from model parameters to the final figures. Interdisciplinary collaborations and collective efforts will be required, in contrast to the cottage-industry culture currently present in many areas of computational

  16. Limitations and tradeoffs in synchronization of large-scale networks with uncertain links

    PubMed Central

    Diwadkar, Amit; Vaidya, Umesh

    2016-01-01

    The synchronization of nonlinear systems connected over large-scale networks has gained popularity in a variety of applications, such as power grids, sensor networks, and biology. Stochastic uncertainty in the interconnections is a ubiquitous phenomenon observed in these physical and biological networks. We provide a size-independent network sufficient condition for the synchronization of scalar nonlinear systems with stochastic linear interactions over large-scale networks. This sufficient condition, expressed in terms of nonlinear dynamics, the Laplacian eigenvalues of the nominal interconnections, and the variance and location of the stochastic uncertainty, allows us to define a synchronization margin. We provide an analytical characterization of important trade-offs between the internal nonlinear dynamics, network topology, and uncertainty in synchronization. For nearest neighbour networks, the existence of an optimal number of neighbours with a maximum synchronization margin is demonstrated. An analytical formula for the optimal gain that produces the maximum synchronization margin allows us to compare the synchronization properties of various complex network topologies. PMID:27067994

  17. Limitations and tradeoffs in synchronization of large-scale networks with uncertain links

    NASA Astrophysics Data System (ADS)

    Diwadkar, Amit; Vaidya, Umesh

    2016-04-01

    The synchronization of nonlinear systems connected over large-scale networks has gained popularity in a variety of applications, such as power grids, sensor networks, and biology. Stochastic uncertainty in the interconnections is a ubiquitous phenomenon observed in these physical and biological networks. We provide a size-independent network sufficient condition for the synchronization of scalar nonlinear systems with stochastic linear interactions over large-scale networks. This sufficient condition, expressed in terms of nonlinear dynamics, the Laplacian eigenvalues of the nominal interconnections, and the variance and location of the stochastic uncertainty, allows us to define a synchronization margin. We provide an analytical characterization of important trade-offs between the internal nonlinear dynamics, network topology, and uncertainty in synchronization. For nearest neighbour networks, the existence of an optimal number of neighbours with a maximum synchronization margin is demonstrated. An analytical formula for the optimal gain that produces the maximum synchronization margin allows us to compare the synchronization properties of various complex network topologies.

  18. Limitations and tradeoffs in synchronization of large-scale networks with uncertain links.

    PubMed

    Diwadkar, Amit; Vaidya, Umesh

    2016-01-01

    The synchronization of nonlinear systems connected over large-scale networks has gained popularity in a variety of applications, such as power grids, sensor networks, and biology. Stochastic uncertainty in the interconnections is a ubiquitous phenomenon observed in these physical and biological networks. We provide a size-independent network sufficient condition for the synchronization of scalar nonlinear systems with stochastic linear interactions over large-scale networks. This sufficient condition, expressed in terms of nonlinear dynamics, the Laplacian eigenvalues of the nominal interconnections, and the variance and location of the stochastic uncertainty, allows us to define a synchronization margin. We provide an analytical characterization of important trade-offs between the internal nonlinear dynamics, network topology, and uncertainty in synchronization. For nearest neighbour networks, the existence of an optimal number of neighbours with a maximum synchronization margin is demonstrated. An analytical formula for the optimal gain that produces the maximum synchronization margin allows us to compare the synchronization properties of various complex network topologies. PMID:27067994

  19. Grid sensitivity capability for large scale structures

    NASA Technical Reports Server (NTRS)

    Nagendra, Gopal K.; Wallerstein, David V.

    1989-01-01

    The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.

  20. Genetic Algorithm Based Neural Networks for Nonlinear Optimization

    Energy Science and Technology Software Center (ESTSC)

    1994-09-28

This software develops a novel approach to nonlinear optimization using genetic algorithm based neural networks. To our best knowledge, this approach represents the first attempt at applying both neural network and genetic algorithm techniques to solve a nonlinear optimization problem. The approach constructs a neural network structure and an appropriately shaped energy surface whose minima correspond to optimal solutions of the problem. A genetic algorithm is employed to perform a parallel and powerful search of the energy surface.
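
    A minimal real-coded genetic algorithm sketch for nonlinear minimization (illustrative only; the software above couples the GA to a neural-network energy surface rather than to a plain objective function):

        import numpy as np

        rng = np.random.default_rng(2)
        def f(x):                                      # multimodal nonlinear objective
            return np.sum(x**2 - 10.0 * np.cos(2 * np.pi * x) + 10.0, axis=1)

        pop = rng.uniform(-5, 5, (60, 3))
        for gen in range(300):
            parents = pop[np.argsort(f(pop))[:30]]     # truncation selection
            i, j = rng.integers(0, 30, 60), rng.integers(0, 30, 60)
            alpha = rng.random((60, 1))
            children = alpha * parents[i] + (1 - alpha) * parents[j]  # blend crossover
            mutate = rng.random((60, 1)) < 0.3
            children = children + mutate * rng.normal(0.0, 0.1, (60, 3))
            children[0] = parents[0]                   # elitism: keep the best individual
            pop = children
        print("best solution:", pop[np.argmin(f(pop))], "objective:", f(pop).min())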

  1. Large Scale Homing in Honeybees

    PubMed Central

    Pahl, Mario; Zhu, Hong; Tautz, Jürgen; Zhang, Shaowu

    2011-01-01

    Honeybee foragers frequently fly several kilometres to and from vital resources, and communicate those locations to their nest mates by a symbolic dance language. Research has shown that they achieve this feat by memorizing landmarks and the skyline panorama, using the sun and polarized skylight as compasses and by integrating their outbound flight paths. In order to investigate the capacity of the honeybees' homing abilities, we artificially displaced foragers to novel release spots at various distances up to 13 km in the four cardinal directions. Returning bees were individually registered by a radio frequency identification (RFID) system at the hive entrance. We found that homing rate, homing speed and the maximum homing distance depend on the release direction. Bees released in the east were more likely to find their way back home, and returned faster than bees released in any other direction, due to the familiarity of global landmarks seen from the hive. Our findings suggest that such large scale homing is facilitated by global landmarks acting as beacons, and possibly the entire skyline panorama. PMID:21602920

  2. Implicit solution of large-scale radiation diffusion problems

    SciTech Connect

    Brown, P N; Graziani, F; Otero, I; Woodward, C S

    2001-01-04

    In this paper, we present an efficient solution approach for fully implicit, large-scale, nonlinear radiation diffusion problems. The fully implicit approach is compared to a semi-implicit solution method. Accuracy and efficiency are shown to be better for the fully implicit method on both one- and three-dimensional problems with tabular opacities taken from the LEOS opacity library.
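
    A small sketch of what one fully implicit (backward Euler) step involves for a 1-D nonlinear diffusion equation u_t = (D(u) u_x)_x, with a Newton iteration and a finite-difference Jacobian; the diffusivity law and discretization choices are assumptions for illustration, not the paper's model.

        import numpy as np

        def residual(u_new, u_old, dt, dx):
            """Backward-Euler residual for u_t = (D(u) u_x)_x with D(u) = u**2;
            Dirichlet boundary values held fixed (interior equations only)."""
            D = 0.5 * (u_new[:-1]**2 + u_new[1:]**2)     # face-centered diffusivity
            flux = D * (u_new[1:] - u_new[:-1]) / dx
            return (u_new[1:-1] - u_old[1:-1]) / dt - (flux[1:] - flux[:-1]) / dx

        n, dx, dt, eps = 50, 1.0 / 49, 1e-3, 1e-7
        u_old = 1.0 + np.exp(-200.0 * (np.linspace(0, 1, n) - 0.5)**2)
        u = u_old.copy()
        for it in range(30):                             # Newton iteration
            r = residual(u, u_old, dt, dx)
            if np.linalg.norm(r) < 1e-10:
                break
            J = np.empty((n - 2, n - 2))
            for j in range(n - 2):                       # finite-difference Jacobian
                up = u.copy(); up[j + 1] += eps
                J[:, j] = (residual(up, u_old, dt, dx) - r) / eps
            u[1:-1] -= np.linalg.solve(J, r)
        print("Newton iterations:", it, " residual norm:", np.linalg.norm(r))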

  3. Nonlinear model predictive control based on collective neurodynamic optimization.

    PubMed

    Yan, Zheng; Wang, Jun

    2015-04-01

    In general, nonlinear model predictive control (NMPC) entails solving a sequential global optimization problem with a nonconvex cost function or constraints. This paper presents a novel collective neurodynamic optimization approach to NMPC without linearization. Utilizing a group of recurrent neural networks (RNNs), the proposed collective neurodynamic optimization approach searches for optimal solutions to global optimization problems by emulating brainstorming. Each RNN is guaranteed to converge to a candidate solution by performing constrained local search. By exchanging information and iteratively improving the starting and restarting points of each RNN using the information of local and global best known solutions in a framework of particle swarm optimization, the group of RNNs is able to reach global optimal solutions to global optimization problems. The essence of the proposed collective neurodynamic optimization approach lies in the integration of capabilities of global search and precise local search. The simulation results of many cases are discussed to substantiate the effectiveness and the characteristics of the proposed approach. PMID:25608315
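
    A skeletal sketch of the collective idea (hypothetical objective and parameters; Nelder-Mead local searches stand in for the recurrent neural networks): several local searches run independently, and their restart points are nudged toward personal and global bests in a particle-swarm fashion.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(3)
        f = lambda x: (x[0]**2 + x[1] - 11)**2 + (x[0] + x[1]**2 - 7)**2  # multimodal

        n = 8
        starts = rng.uniform(-5, 5, (n, 2))
        pbest, pval = starts.copy(), np.array([f(x) for x in starts])
        for _ in range(5):
            for i in range(n):
                res = minimize(f, starts[i], method="Nelder-Mead")  # one local "agent"
                if res.fun < pval[i]:
                    pbest[i], pval[i] = res.x, res.fun
            g = pbest[np.argmin(pval)]                    # global best known solution
            r1, r2 = rng.random((n, 1)), rng.random((n, 1))
            # Swarm-style restart: move starting points toward personal/global bests
            starts = starts + 1.5 * r1 * (pbest - starts) + 1.5 * r2 * (g - starts)
        print("best solution:", pbest[np.argmin(pval)], "f =", pval.min())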

  4. Large Scale Nanolaminate Deformable Mirror

    SciTech Connect

    Papavasiliou, A; Olivier, S; Barbee, T; Miles, R; Chang, K

    2005-11-30

    This work concerns the development of a technology that uses Nanolaminate foils to form light-weight, deformable mirrors that are scalable over a wide range of mirror sizes. While MEMS-based deformable mirrors and spatial light modulators have considerably reduced the cost and increased the capabilities of adaptive optic systems, there has not been a way to utilize the advantages of lithography and batch-fabrication to produce large-scale deformable mirrors. This technology is made scalable by using fabrication techniques and lithography that are not limited to the sizes of conventional MEMS devices. Like many MEMS devices, these mirrors use parallel plate electrostatic actuators. This technology replicates that functionality by suspending a horizontal piece of nanolaminate foil over an electrode by electroplated nickel posts. This actuator is attached, with another post, to another nanolaminate foil that acts as the mirror surface. Most MEMS devices are produced with integrated circuit lithography techniques that are capable of very small line widths, but are not scalable to large sizes. This technology is very tolerant of lithography errors and can use coarser, printed circuit board lithography techniques that can be scaled to very large sizes. These mirrors use small, lithographically defined actuators and thin nanolaminate foils allowing them to produce deformations over a large area while minimizing weight. This paper will describe a staged program to develop this technology. First-principles models were developed to determine design parameters. Three stages of fabrication will be described starting with a 3 x 3 device using conventional metal foils and epoxy to a 10-across all-metal device with nanolaminate mirror surfaces.

  5. Large-Scale Information Systems

    SciTech Connect

    D. M. Nicol; H. R. Ammerlahn; M. E. Goldsby; M. M. Johnson; D. E. Rhodes; A. S. Yoshimura

    2000-12-01

Large enterprises are ever more dependent on their Large-Scale Information Systems (LSIS), computer systems that are distinguished architecturally by distributed components--data sources, networks, computing engines, simulations, human-in-the-loop control and remote access stations. These systems provide such capabilities as workflow, data fusion and distributed database access. The Nuclear Weapons Complex (NWC) contains many examples of LSIS components, a fact that motivates this research. However, most LSIS in use grew up from collections of separate subsystems that were not designed to be components of an integrated system. For this reason, they are often difficult to analyze and control. The problem is made more difficult by the size of a typical system, its diversity of information sources, and the institutional complexities associated with its geographic distribution across the enterprise. Moreover, there is no integrated approach for analyzing or managing such systems. Indeed, integrated development of LSIS is an active area of academic research. This work developed such an approach by simulating the various components of the LSIS and allowing the simulated components to interact with real LSIS subsystems. This research demonstrated two benefits. First, applying it to a particular LSIS provided a thorough understanding of the interfaces between the system's components. Second, it demonstrated how more rapid and detailed answers could be obtained to questions significant to the enterprise by interacting with the relevant LSIS subsystems through simulated components designed with those questions in mind. In a final, added phase of the project, investigations were made on extending this research to wireless communication networks in support of telemetry applications.

  6. Decentralized stabilization for a class of continuous-time nonlinear interconnected systems using online learning optimal control approach.

    PubMed

    Liu, Derong; Wang, Ding; Li, Hongliang

    2014-02-01

In this paper, using a neural-network-based online learning optimal control approach, a novel decentralized control strategy is developed to stabilize a class of continuous-time nonlinear interconnected large-scale systems. First, optimal controllers of the isolated subsystems are designed with cost functions reflecting the bounds of interconnections. Then, it is proven that the decentralized control strategy of the overall system can be established by adding appropriate feedback gains to the optimal control policies of the isolated subsystems. Next, an online policy iteration algorithm is presented to solve the Hamilton-Jacobi-Bellman equations related to the optimal control problem. Through constructing a set of critic neural networks, the cost functions can be obtained approximately, followed by the control policies. Furthermore, the dynamics of the estimation errors of the critic networks are verified to be uniformly ultimately bounded. Finally, a simulation example is provided to illustrate the effectiveness of the present decentralized control scheme. PMID:24807039

  7. On optimal nonlinear estimation. I - Continuous observation.

    NASA Technical Reports Server (NTRS)

    Lo, J. T.

    1973-01-01

    A generalization of Bucy's (1965) representation theorem is obtained under very weak hypotheses. The generalized theorem is shown to play the same role in the case of general optimal estimation for an arbitrary random process as does the Bucy theorem in the case of optimal filtering for a diffusion process. At least for the models considered, the possibility is pointed out to reduce all sequential estimation problems to the problem of filtering. Hence, filtering theory is seen to represent the core of estimation theory, and is believed to define the direction in which future research should be focused.

  8. Online optimization of storage ring nonlinear beam dynamics

    NASA Astrophysics Data System (ADS)

    Huang, Xiaobiao; Safranek, James

    2015-08-01

    We propose to optimize the nonlinear beam dynamics of existing and future storage rings with direct online optimization techniques. This approach may have crucial importance for the implementation of diffraction limited storage rings. In this paper considerations and algorithms for the online optimization approach are discussed. We have applied this approach to experimentally improve the dynamic aperture of the SPEAR3 storage ring with the robust conjugate direction search method and the particle swarm optimization method. The dynamic aperture was improved by more than 5 mm within a short period of time. Experimental setup and results are presented.

  9. Picosecond laser-driven terahertz radiation from large scale preplasmas of solid targets

    NASA Astrophysics Data System (ADS)

    Liao, G. Q.; Li, Y. T.; Li, C.; Su, L. N.; Zheng, Y.; Liu, M.; Dunn, J.; Nilsen, J.; Hunter, J.; Wang, W. M.; Sheng, Z. M.; Zhang, J.

    2016-05-01

The terahertz (THz) radiation from the front of solid targets with a large-scale preplasma irradiated by relativistic picosecond laser pulses has been studied. The THz radiation measured in the specular direction increases nonlinearly with laser energy, and an optimal plasma density scale length is observed. Particle-in-cell simulations indicate that the radiation can be attributed to a mode-conversion mechanism. The THz radiation near the target normal direction, in contrast, saturates with laser energy and plasma scale length. Unlike the radiation in the specular direction, the transient current formed at the plasma-vacuum interface could be responsible for the radiation near the target normal.

  10. Asynchronous parallel pattern search for nonlinear optimization

    SciTech Connect

    P. D. Hough; T. G. Kolda; V. J. Torczon

    2000-01-01

Parallel pattern search (PPS) can be quite useful for engineering optimization problems characterized by a small number of variables (say 10--50) and by expensive objective function evaluations such as complex simulations that take from minutes to hours to run. However, PPS, which was originally designed for execution on homogeneous and tightly-coupled parallel machines, is not well suited to the more heterogeneous, loosely-coupled, and even fault-prone parallel systems available today. Specifically, PPS is hindered by synchronization penalties and cannot recover in the event of a failure. The authors introduce a new asynchronous and fault-tolerant parallel pattern search (APPS) method and demonstrate its effectiveness on both simple test problems as well as some engineering optimization problems.
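
    For reference, a synchronous compass (pattern) search skeleton; the paper's contribution, distributing these poll evaluations asynchronously and fault-tolerantly across workers, is not shown. Objective and parameters are illustrative.

        import numpy as np

        def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=10000):
            """Poll +/- step along each axis; move to an improving point,
            otherwise contract the step until it falls below tol."""
            x = np.asarray(x0, dtype=float)
            fx = f(x)
            for _ in range(max_iter):
                improved = False
                for d in np.vstack([np.eye(len(x)), -np.eye(len(x))]):
                    trial = x + step * d
                    ft = f(trial)
                    if ft < fx:                 # accept the first improving poll point
                        x, fx, improved = trial, ft, True
                        break
                if not improved:
                    step *= 0.5                 # no improvement: contract the pattern
                    if step < tol:
                        break
            return x, fx

        f = lambda x: (x[0] - 1) ** 2 + 10 * (x[1] + 2) ** 2   # smooth test objective
        print(pattern_search(f, [5.0, 5.0]))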

  11. Optimal state discrimination and unstructured search in nonlinear quantum mechanics

    NASA Astrophysics Data System (ADS)

    Childs, Andrew M.; Young, Joshua

    2016-02-01

    Nonlinear variants of quantum mechanics can solve tasks that are impossible in standard quantum theory, such as perfectly distinguishing nonorthogonal states. Here we derive the optimal protocol for distinguishing two states of a qubit using the Gross-Pitaevskii equation, a model of nonlinear quantum mechanics that arises as an effective description of Bose-Einstein condensates. Using this protocol, we present an algorithm for unstructured search in the Gross-Pitaevskii model, obtaining an exponential improvement over a previous algorithm of Meyer and Wong. This result establishes a limitation on the effectiveness of the Gross-Pitaevskii approximation. More generally, we demonstrate similar behavior under a family of related nonlinearities, giving evidence that the ability to quickly discriminate nonorthogonal states and thereby solve unstructured search is a generic feature of nonlinear quantum mechanics.

  12. Optimal Variational Asymptotic Method for Nonlinear Fractional Partial Differential Equations

    PubMed Central

    Baranwal, Vipul K.; Pandey, Ram K.

    2014-01-01

We propose an optimal variational asymptotic method to solve time fractional nonlinear partial differential equations. In the proposed method, an arbitrary number of auxiliary parameters γ0, γ1, γ2,… and auxiliary functions H0(x), H1(x), H2(x),… are introduced in the correction functional of the standard variational iteration method. The optimal values of these parameters are obtained by minimizing the square residual error. To test the method, we apply it to solve two important classes of nonlinear partial differential equations: (1) the fractional advection-diffusion equation with a nonlinear source term and (2) the fractional Swift-Hohenberg equation. Only a few iterations are required to achieve fairly accurate solutions of both the first and second problems.

  13. Nonlinear optimization with linear constraints using a projection method

    NASA Technical Reports Server (NTRS)

    Fox, T.

    1982-01-01

Nonlinear optimization problems that are encountered in science and industry are examined. A method of projecting the gradient vector onto a set of linear constraints is developed, and a program that uses this method is presented. The algorithm that generates this projection matrix is based on the Gram-Schmidt method and overcomes some of the objections to the Rosen projection method.
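
    The projection step itself is compact: for linear equality constraints A x = b, the orthogonal projector onto the null space of A is P = I - A^T (A A^T)^{-1} A, and -P grad f is a feasible descent direction. A sketch with an illustrative objective (the Gram-Schmidt construction in the paper builds the projector incrementally instead):

        import numpy as np

        A = np.array([[1.0, 1.0, 1.0]])          # constraint: x1 + x2 + x3 = 1
        grad = lambda x: 2 * (x - np.array([3.0, -1.0, 2.0]))   # f = ||x - c||^2

        # Null-space projector P = I - A^T (A A^T)^{-1} A
        P = np.eye(3) - A.T @ np.linalg.solve(A @ A.T, A)

        x = np.array([1.0, 0.0, 0.0])            # feasible start (A x = 1)
        for _ in range(200):
            x = x - 0.1 * (P @ grad(x))          # projected-gradient step stays feasible
        print("x =", x, " A x =", A @ x)         # converges to the constrained optimum (2, -2, 1)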

  14. Route Monopoly and Optimal Nonlinear Pricing

    NASA Technical Reports Server (NTRS)

    Tournut, Jacques

    2003-01-01

To cope with air traffic growth and congested airports, two solutions are apparent on the supply side: 1) use larger aircraft in the hub and spoke system; or 2) develop new routes through secondary airports. An enlarged route system through secondary airports may increase the proportion of route monopolies in the air transport market. The monopoly optimal nonlinear pricing policy is well known in the case of one dimension (one instrument, one characteristic) but not in the case of several dimensions. This paper explores the robustness of the one-dimensional screening model with respect to increasing the number of instruments and the number of characteristics. The objective of this paper is then to link and fill the gap in both literatures. One of the merits of the screening model has been to show that a great variety of economic questions (nonlinear pricing, product line choice, auction design, income taxation, regulation...) could be handled within the same framework. We study a case of nonlinear pricing (2 instruments (2 routes on which the airline provides customers with services), 2 characteristics (demand of services on these routes) and two values per characteristic (low and high demand of services on these routes)) and we show that none of the conclusions of the one-dimensional analysis remain valid. In particular, the upward incentive compatibility constraint may be binding at the optimum. As a consequence, there may be distortion at the top of the distribution. In addition to this, we show that the optimal solution often requires some form of bundling; we explain the distortions explicitly and show that it is sometimes optimal for the monopolist to produce only one good (instead of two) or to exclude some buyers from the market. Actually, this means that the monopolist cannot fully apply his monopoly power and is better off selling both goods independently. We then define all the possible solutions in the case of a quadratic cost function for a uniform

  15. Fully localised nonlinear energy growth optimals in pipe flow

    NASA Astrophysics Data System (ADS)

    Pringle, Chris C. T.; Willis, Ashley P.; Kerswell, Rich R.

    2015-06-01

    A new, fully localised, energy growth optimal is found over large times and in long pipe domains at a given mass flow rate. This optimal emerges at a threshold disturbance energy below which a nonlinear version of the known (streamwise-independent) linear optimal [P. J. Schmid and D. S. Henningson, "Optimal energy density growth in Hagen-Poiseuille flow," J. Fluid Mech. 277, 192-225 (1994)] is selected and appears to remain the optimal up until the critical energy at which transition is triggered. The form of this optimal is similar to that found in short pipes [Pringle et al., "Minimal seeds for shear flow turbulence: Using nonlinear transient growth to touch the edge of chaos," J. Fluid Mech. 702, 415-443 (2012)], but now with full localisation in the streamwise direction. This fully localised optimal perturbation represents the best approximation yet of the minimal seed (the smallest perturbation which is arbitrarily close to states capable of triggering a turbulent episode) for "real" (laboratory) pipe flows. Dependence of the optimal with respect to several parameters has been computed and establishes that the structure is robust.

  16. Fully localised nonlinear energy growth optimals in pipe flow

    SciTech Connect

    Pringle, Chris C. T.; Willis, Ashley P.; Kerswell, Rich R.

    2015-06-15

    A new, fully localised, energy growth optimal is found over large times and in long pipe domains at a given mass flow rate. This optimal emerges at a threshold disturbance energy below which a nonlinear version of the known (streamwise-independent) linear optimal [P. J. Schmid and D. S. Henningson, “Optimal energy density growth in Hagen-Poiseuille flow,” J. Fluid Mech. 277, 192–225 (1994)] is selected and appears to remain the optimal up until the critical energy at which transition is triggered. The form of this optimal is similar to that found in short pipes [Pringle et al., “Minimal seeds for shear flow turbulence: Using nonlinear transient growth to touch the edge of chaos,” J. Fluid Mech. 702, 415–443 (2012)], but now with full localisation in the streamwise direction. This fully localised optimal perturbation represents the best approximation yet of the minimal seed (the smallest perturbation which is arbitrarily close to states capable of triggering a turbulent episode) for “real” (laboratory) pipe flows. Dependence of the optimal with respect to several parameters has been computed and establishes that the structure is robust.

  17. Supporting large-scale computational science

    SciTech Connect

    Musick, R., LLNL

    1998-02-19

Business needs have driven the development of commercial database systems since their inception. As a result, there has been a strong focus on supporting many users, minimizing the potential corruption or loss of data, and maximizing performance metrics like transactions per second, or TPC-C and TPC-D results. It turns out that these optimizations have little to do with the needs of the scientific community, and in particular have little impact on improving the management and use of large-scale high-dimensional data. At the same time, there is an unanswered need in the scientific community for many of the benefits offered by a robust DBMS. For example, tying an ad-hoc query language such as SQL together with a visualization toolkit would be a powerful enhancement to current capabilities. Unfortunately, there has been little emphasis or discussion in the VLDB community on this mismatch over the last decade. The goal of the paper is to identify the specific issues that need to be resolved before large-scale scientific applications can make use of DBMS products. This topic is addressed in the context of an evaluation of commercial DBMS technology applied to the exploration of data generated by the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). The paper describes the data being generated for ASCI as well as current capabilities for interacting with and exploring this data. The attraction of applying standard DBMS technology to this domain is discussed, as well as the technical and business issues that currently make this an infeasible solution.

  18. A relativistic signature in large-scale structure

    NASA Astrophysics Data System (ADS)

    Bartolo, Nicola; Bertacca, Daniele; Bruni, Marco; Koyama, Kazuya; Maartens, Roy; Matarrese, Sabino; Sasaki, Misao; Verde, Licia; Wands, David

    2016-09-01

In General Relativity, the constraint equation relating metric and density perturbations is inherently nonlinear, leading to an effective non-Gaussianity in the dark matter density field on large scales, even if the primordial metric perturbation is Gaussian. Intrinsic non-Gaussianity in the large-scale dark matter overdensity in GR is real and physical. However, the variance smoothed on a local physical scale is not correlated with the large-scale curvature perturbation, so that there is no relativistic signature in the galaxy bias when using the simplest model of bias. It is an open question whether the observable mass proxies such as luminosity or weak lensing correspond directly to the physical mass in the simple halo bias model. If not, there may be observables that encode this relativistic signature.

  19. Economic dispatch control for large scale thermal power systems

    SciTech Connect

    Not Available

    1986-01-01

A realistic model for economic dispatch control (EDC) valid for large scale thermal power systems is described. This model properly accounts for the nonlinearities of the generation cost curves introduced by the operating constraints of thermal units. The methodology computes the optimal readjustments of generation schedules such that the total generation output meets the system demand, including the Area Control Error (ACE). The objective function to be minimized is the instantaneous operating cost of a power system subjected to several equality and inequality constraints, which represent the performance characteristics and operating limitations of the various units in the system as well as the active power loss in the transmission network. The generation cost curves and the active losses are represented using one of two models. The first model includes the exact piecewise linear curve formulation and the well known loss formula, while the second one considers a second order polynomial approximation of the generation cost curves and assumes that the active network losses are independent of the generation configuration and amount to a constant percentage of the total system demand. Each of these models has its merits for EDC strategies. 10 references, 7 figures, 3 tables.
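
    For the second (quadratic-cost, loss-free) model, the classic equal-incremental-cost condition can be solved by a lambda iteration; the three-unit data below are illustrative, not from the paper.

        import numpy as np

        # Quadratic costs C_i(P) = a_i P^2 + b_i P; incremental cost 2 a_i P + b_i.
        a = np.array([0.004, 0.006, 0.009])
        b = np.array([5.3, 5.5, 5.8])
        pmin = np.array([100.0, 80.0, 50.0])
        pmax = np.array([450.0, 350.0, 225.0])
        demand = 800.0

        lo, hi = 5.0, 12.0                       # bracket on lambda ($/MWh)
        for _ in range(60):                      # bisection on the marginal cost
            lam = 0.5 * (lo + hi)
            P = np.clip((lam - b) / (2 * a), pmin, pmax)  # equal marginal cost rule
            lo, hi = (lam, hi) if P.sum() < demand else (lo, lam)
        print("lambda =", round(lam, 3), " dispatch =", P.round(1), " total =", round(P.sum(), 1))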

  20. An active set algorithm for nonlinear optimization with polyhedral constraints

    NASA Astrophysics Data System (ADS)

    Hager, William W.; Zhang, Hongchao

    2016-08-01

    A polyhedral active set algorithm PASA is developed for solving a nonlinear optimization problem whose feasible set is a polyhedron. Phase one of the algorithm is the gradient projection method, while phase two is any algorithm for solving a linearly constrained optimization problem. Rules are provided for branching between the two phases. Global convergence to a stationary point is established, while asymptotically PASA performs only phase two when either a nondegeneracy assumption holds, or the active constraints are linearly independent and a strong second-order sufficient optimality condition holds.

  1. Nonlinear Inertia Weighted Teaching-Learning-Based Optimization for Solving Global Optimization Problem

    PubMed Central

    Wu, Zong-Sheng; Fu, Wei-Ping; Xue, Ru

    2015-01-01

Teaching-learning-based optimization (TLBO) is an algorithm proposed in recent years that simulates the teaching-learning phenomenon of a classroom to effectively solve global optimization of multidimensional, linear, and nonlinear problems over continuous spaces. In this paper, an improved teaching-learning-based optimization algorithm is presented, which is called the nonlinear inertia weighted teaching-learning-based optimization (NIWTLBO) algorithm. This algorithm introduces a nonlinear inertia weighted factor into the basic TLBO to control the memory rate of learners and uses a dynamic inertia weighted factor to replace the original random number in the teacher phase and learner phase. The proposed algorithm is tested on a number of benchmark functions, and its performance comparisons are provided against the basic TLBO and some other well-known optimization algorithms. The experiment results show that the proposed algorithm has a faster convergence rate and better performance than the basic TLBO and some other algorithms as well. PMID:26421005
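
    A sketch of the teacher phase with an inertia-weighted memory term; the weight schedule below is an illustrative stand-in for the paper's nonlinear factor, and the objective is a standard benchmark.

        import numpy as np

        rng = np.random.default_rng(4)
        f = lambda X: np.sum(X ** 2, axis=1)        # sphere benchmark function

        n, dim, iters = 20, 5, 200
        X = rng.uniform(-10, 10, (n, dim))
        for t in range(iters):
            w = 1.0 - (t / iters) ** 2              # nonlinear inertia weight (assumed form)
            fit = f(X)
            teacher = X[np.argmin(fit)]             # best learner acts as the teacher
            TF = rng.integers(1, 3, (n, 1))         # teaching factor in {1, 2}
            mean = X.mean(axis=0)
            # Teacher phase: weighted memory of the old position plus a move to the teacher
            Xnew = w * X + rng.random((n, dim)) * (teacher - TF * mean)
            better = f(Xnew) < fit
            X[better] = Xnew[better]                # greedy acceptance
        print("best objective:", f(X).min())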

  2. Parallel-vector computation for structural analysis and nonlinear unconstrained optimization problems

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.

    1990-01-01

Practical engineering applications can often be formulated in the form of a constrained optimization problem. There are several solution algorithms for solving a constrained optimization problem. One approach is to convert a constrained problem into a series of unconstrained problems. Furthermore, unconstrained solution algorithms can be used as part of the constrained solution algorithms. Structural optimization is an iterative process where one starts with an initial design, and a finite element structural analysis is then performed to calculate the response of the system (such as displacements, stresses, eigenvalues, etc.). Based upon the sensitivity information on the objective and constraint functions, an optimizer such as ADS or IDESIGN can be used to find the new, improved design. For the structural analysis phase, the equation solver for the system of simultaneous, linear equations plays a key role since it is needed for static, eigenvalue, or dynamic analysis. For practical, large-scale structural analysis-synthesis applications, computational time can be excessively large. Thus, it is necessary to have a new structural analysis-synthesis code which employs new solution algorithms to exploit both the parallel and vector capabilities offered by modern, high performance computers such as the Convex, Cray-2 and Cray-YMP computers. The objective of this research project is, therefore, to incorporate the latest developments in the parallel-vector equation solver, PVSOLVE, into the widely popular finite-element production code SAP-4. Furthermore, several nonlinear unconstrained optimization subroutines have also been developed and tested under a parallel computer environment. The unconstrained optimization subroutines are not only useful in their own right, but they can also be incorporated into a more popular constrained optimization code, such as ADS.

  3. Optimal bipedal interactions with dynamic terrain: synthesis and analysis via nonlinear programming

    NASA Astrophysics Data System (ADS)

    Hubicki, Christian; Goldman, Daniel; Ames, Aaron

    In terrestrial locomotion, gait dynamics and motor control behaviors are tuned to interact efficiently and stably with the dynamics of the terrain (i.e. terradynamics). This controlled interaction must be particularly thoughtful in bipeds, as their reduced contact points render them highly susceptible to falls. While bipedalism under rigid terrain assumptions is well-studied, insights for two-legged locomotion on soft terrain, such as sand and dirt, are comparatively sparse. We seek an understanding of how biological bipeds stably and economically negotiate granular media, with an eye toward imbuing those abilities in bipedal robots. We present a trajectory optimization method for controlled systems subject to granular intrusion. By formulating a large-scale nonlinear program (NLP) with reduced-order resistive force theory (RFT) models and jamming cone dynamics, the optimized motions are informed and shaped by the dynamics of the terrain. Using a variant of direct collocation methods, we can express all optimization objectives and constraints in closed-form, resulting in rapid solving by standard NLP solvers, such as IPOPT. We employ this tool to analyze emergent features of bipedal locomotion in granular media, with an eye toward robotic implementation.
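
    A minimal trapezoidal direct-collocation NLP for a double integrator shows the structure (closed-form objective, defect, and boundary constraints); it uses SciPy's SLSQP in place of IPOPT, and none of the granular-terrain (RFT) physics is included.

        import numpy as np
        from scipy.optimize import minimize

        N, T = 20, 2.0
        h = T / N
        # Decision vector z = [x_0..x_N, v_0..v_N, u_0..u_N] for x'' = u.
        unpack = lambda z: (z[:N + 1], z[N + 1:2 * (N + 1)], z[2 * (N + 1):])

        def objective(z):                      # trapezoidal control-effort integral
            x, v, u = unpack(z)
            return h * np.sum(0.5 * (u[:-1] ** 2 + u[1:] ** 2))

        def defects(z):                        # trapezoidal collocation constraints
            x, v, u = unpack(z)
            dx = x[1:] - x[:-1] - 0.5 * h * (v[:-1] + v[1:])
            dv = v[1:] - v[:-1] - 0.5 * h * (u[:-1] + u[1:])
            return np.concatenate([dx, dv])

        def boundary(z):                       # rest-to-rest, move one unit
            x, v, u = unpack(z)
            return np.array([x[0], v[0], x[-1] - 1.0, v[-1]])

        res = minimize(objective, np.zeros(3 * (N + 1)), method="SLSQP",
                       constraints=[{"type": "eq", "fun": defects},
                                    {"type": "eq", "fun": boundary}],
                       options={"maxiter": 500})
        print("success:", res.success, " control effort:", round(res.fun, 4))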

  4. Gravity and large-scale nonlocal bias

    NASA Astrophysics Data System (ADS)

    Chan, Kwan Chuen; Scoccimarro, Román; Sheth, Ravi K.

    2012-04-01

For Gaussian primordial fluctuations the relationship between galaxy and matter overdensities, bias, is most often assumed to be local at the time of observation in the large-scale limit. This hypothesis is, however, unstable under time evolution; we provide proofs under several (increasingly realistic) sets of assumptions. In the simplest toy model galaxies are created locally and linearly biased at a single formation time, and subsequently move with the dark matter (no velocity bias) conserving their comoving number density (no merging). We show that, after this formation time, the bias becomes unavoidably nonlocal and nonlinear at large scales. We identify the nonlocal gravitationally induced fields in which the galaxy overdensity can be expanded, showing that they can be constructed out of the invariants of the deformation tensor (Galileons), the main signature of which is a quadrupole field in second-order perturbation theory. In addition, we show that this result persists if we include an arbitrary evolution of the comoving number density of tracers. We then include velocity bias, and show that new contributions appear; these are related to the breaking of Galilean invariance of the bias relation, a dipole field being the signature at second order. We test these predictions by studying the dependence of halo overdensities in cells of fixed dark matter density: measurements in simulations show that departures from the mean bias relation are strongly correlated with the nonlocal gravitationally induced fields identified by our formalism, suggesting that the halo distribution at the present time is indeed more closely related to the mass distribution at an earlier rather than present time. However, the nonlocality seen in the simulations is not fully captured by assuming local bias in Lagrangian space. The effects on nonlocal bias seen in the simulations are most important for the most biased halos, as expected from our predictions. Accounting for these

  5. Nonlinear Global Optimization Using Curdling Algorithm in Mathematica Environment

    Energy Science and Technology Software Center (ESTSC)

    1997-08-05

An algorithm for performing optimization, a derivative-free, grid-refinement approach to nonlinear optimization, was developed and implemented in software as OPTIMIZE. This approach overcomes a number of deficiencies in existing approaches. Most notably, it finds extremal regions rather than only single extremal points. The program is interactive and collects information on control parameters and constraints using menus. For up to two (and potentially three) dimensions, function convergence is displayed graphically. Because the algorithm does not compute derivatives, gradients, or vectors, it is numerically stable. It can find all the roots of a polynomial in one pass. It is an inherently parallel algorithm. OPTIMIZE-M is a modification of OPTIMIZE designed for use within the Mathematica environment created by Wolfram Research.

  6. Optimal spacecraft attitude control using collocation and nonlinear programming

    NASA Astrophysics Data System (ADS)

    Herman, A. L.; Conway, B. A.

    1992-10-01

Direct collocation with nonlinear programming (DCNLP) is employed to find the optimal open-loop control histories for detumbling a disabled satellite. The controls are torques and forces applied to the docking arm and joint and torques applied about the body axes of the OMV. Solutions are obtained for cases in which various constraints are placed on the controls and in which the number of controls is reduced or increased from that considered in Conway and Widhalm (1986). DCNLP works well when applied to the optimal control problem of satellite attitude control. The formulation is straightforward and produces good results in a relatively small amount of time on a Cray X/MP with no a priori information about the optimal solution. The addition of joint acceleration to the controls significantly reduces the control magnitudes and optimal cost. In all cases, the torques and accelerations are modest and the optimal cost is very modest.

  7. Lagrangian space consistency relation for large scale structure

    NASA Astrophysics Data System (ADS)

    Horn, Bart; Hui, Lam; Xiao, Xiao

    2015-09-01

    Consistency relations, which relate the squeezed limit of an (N+1)-point correlation function to an N-point function, are non-perturbative symmetry statements that hold even if the associated high momentum modes are deep in the nonlinear regime and astrophysically complex. Recently, Kehagias & Riotto and Peloso & Pietroni discovered a consistency relation applicable to large scale structure. We show that this can be recast into a simple physical statement in Lagrangian space: that the squeezed correlation function (suitably normalized) vanishes. This holds regardless of whether the correlation observables are at the same time or not, and regardless of whether multiple-streaming is present. The simplicity of this statement suggests that an analytic understanding of large scale structure in the nonlinear regime may be particularly promising in Lagrangian space.
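
    A schematic form of the statement may be helpful. In illustrative notation (ours, not necessarily the authors' exact expressions), with D(t) the linear growth factor and primes denoting correlators stripped of the momentum-conserving delta function, the Eulerian relation and its Lagrangian-space counterpart read:

      \lim_{q\to 0}\,
      \frac{\langle \delta(\mathbf{q},t)\,\delta(\mathbf{k}_1,t_1)\cdots\delta(\mathbf{k}_N,t_N)\rangle'}{P_\delta(q,t)}
      = -\sum_{i=1}^{N}\frac{D(t_i)}{D(t)}\,
        \frac{\mathbf{q}\cdot\mathbf{k}_i}{q^{2}}\,
        \langle \delta(\mathbf{k}_1,t_1)\cdots\delta(\mathbf{k}_N,t_N)\rangle' ,
      \qquad
      \lim_{q\to 0}\,
      \frac{\langle \delta^{L}(\mathbf{q})\,\delta^{L}(\mathbf{k}_1,t_1)\cdots\delta^{L}(\mathbf{k}_N,t_N)\rangle'}{P_{\delta^{L}}(q)} = 0 .

    The first expression cancels at equal times (momentum conservation forces the k_i to sum to zero as q goes to zero), while the Lagrangian-space version vanishes whether or not the observables share a common time, which is the simplification emphasized in the abstract.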

  8. Large-Scale Reform Comes of Age

    ERIC Educational Resources Information Center

    Fullan, Michael

    2009-01-01

    This article reviews the history of large-scale education reform and makes the case that large-scale or whole system reform policies and strategies are becoming increasingly evident. The review briefly addresses the pre-1997 period, concluding that while the pressure for reform was mounting, there were very few examples of deliberate or…

  9. Large-scale infrared scene projectors

    NASA Astrophysics Data System (ADS)

    Murray, Darin A.

    1999-07-01

    Large-scale infrared scene projectors typically have unique opto-mechanical characteristics associated with their application. This paper outlines two large-scale zoom lens assemblies with different environmental and package constraints. Various challenges and their respective solutions are discussed and presented.

  10. Design of Life Extending Controls Using Nonlinear Parameter Optimization

    NASA Technical Reports Server (NTRS)

    Lorenzo, Carl F.; Holmes, Michael S.; Ray, Asok

    1998-01-01

    This report presents the conceptual development of a life extending control system where the objective is to achieve high performance and structural durability of the plant. A life extending controller is designed for a reusable rocket engine via damage mitigation in both the fuel and oxidizer turbines while achieving high performance for transient responses of the combustion chamber pressure and the O2/H2 mixture ratio. This design approach makes use of a combination of linear and nonlinear controller synthesis techniques and also allows adaptation of the life extending controller module to augment a conventional performance controller of a rocket engine. The nonlinear aspect of the design is achieved using nonlinear parameter optimization of a prescribed control structure.
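
    The closing step, nonlinear parameter optimization of a prescribed control structure, can be illustrated in miniature: below, a fixed PD law on a double integrator stands in for the rocket-engine controller, and a derivative-free optimizer tunes its gains against a cost mixing tracking performance with a crude damage proxy. All names and numbers are invented for the sketch.

      import numpy as np
      from scipy.optimize import minimize

      dt, steps = 0.01, 2000

      def closed_loop_cost(gains):
          kp, kd = gains
          x, v, J = 1.0, 0.0, 0.0               # unit initial offset
          for _ in range(steps):                # simulate the closed loop
              u = -kp * x - kd * v              # prescribed control structure
              J += (x * x + 0.01 * u * u) * dt  # tracking + "damage" proxy
              v += u * dt
              x += v * dt
          return J

      best = minimize(closed_loop_cost, x0=[1.0, 1.0], method="Nelder-Mead")
      print(best.x, best.fun)                   # tuned (kp, kd) and cost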

  11. OPT++: An object-oriented class library for nonlinear optimization

    SciTech Connect

    Meza, J.C.

    1994-03-01

    Object-oriented programming is becoming a popular way of developing new software. The promise of this new programming paradigm is that software developed through these concepts will be more reliable and easier to re-use, thereby decreasing the time and cost of the software development cycle. This report describes the development of a C++ class library for nonlinear optimization. Using object-oriented techniques, this new library was designed so that the interface is easy to use while being general enough so that new optimization algorithms can be added easily to the existing framework.

  12. Combining flux and energy balance analysis to model large-scale biochemical networks.

    PubMed

    Heuett, William J; Qian, Hong

    2006-12-01

    Stoichiometric Network Theory is a constraints-based, optimization approach for quantitative analysis of the phenotypes of large-scale biochemical networks that avoids the use of detailed kinetics. This approach uses the reaction stoichiometric matrix in conjunction with constraints provided by flux balance and energy balance to guarantee mass conserved and thermodynamically allowable predictions. However, the flux and energy balance constraints have not been effectively applied simultaneously on the genome scale because optimization under the combined constraints is non-linear. In this paper, a sequential quadratic programming algorithm that solves the non-linear optimization problem is introduced. A simple example and the system of fermentation in Saccharomyces cerevisiae are used to illustrate the new method. The algorithm allows the use of non-linear objective functions. As a result, we suggest a novel optimization with respect to the heat dissipation rate of a system. We also emphasize the importance of incorporating interactions between a model network and its surroundings. PMID:17245812
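
    A toy version of the combined constraints, with invented stoichiometry and an invented quadratic cap standing in for the energy-balance condition, shows how an SQP solver (SciPy's SLSQP here, not the authors' implementation) treats the resulting non-linear program:

      import numpy as np
      from scipy.optimize import minimize

      S = np.array([[1.0, -1.0, -1.0]])    # one metabolite: v1 in, v2 + v3 out
      c = np.array([0.0, 1.0, 0.5])        # invented objective weights

      cons = [{"type": "eq",   "fun": lambda v: S @ v},          # flux balance
              {"type": "ineq", "fun": lambda v: 4.0 - v @ v}]    # nonlinear cap
      res = minimize(lambda v: -(c @ v), np.ones(3), method="SLSQP",
                     bounds=[(0.0, 3.0)] * 3, constraints=cons)
      print(res.x, -res.fun)               # optimal fluxes and objective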

  13. Continuation and bifurcation analysis of large-scale dynamical systems with LOCA.

    SciTech Connect

    Salinger, Andrew Gerhard; Phipps, Eric Todd; Pawlowski, Roger Patrick

    2010-06-01

    Dynamical systems theory provides a powerful framework for understanding the behavior of complex evolving systems. However applying these ideas to large-scale dynamical systems such as discretizations of multi-dimensional PDEs is challenging. Such systems can easily give rise to problems with billions of dynamical variables, requiring specialized numerical algorithms implemented on high performance computing architectures with thousands of processors. This talk will describe LOCA, the Library of Continuation Algorithms, a suite of scalable continuation and bifurcation tools optimized for these types of systems that is part of the Trilinos software collection. In particular, we will describe continuation and bifurcation analysis techniques designed for large-scale dynamical systems that are based on specialized parallel linear algebra methods for solving augmented linear systems. We will also discuss several other Trilinos tools providing nonlinear solvers (NOX), eigensolvers (Anasazi), iterative linear solvers (AztecOO and Belos), preconditioners (Ifpack, ML, Amesos) and parallel linear algebra data structures (Epetra and Tpetra) that LOCA can leverage for efficient and scalable analysis of large-scale dynamical systems.
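
    The core continuation idea is easy to show on a scalar problem. This sketch (our example, unrelated to the Trilinos code) marches a parameter and reuses each converged Newton solution as the predictor for the next step, tracking the nonzero branch of a pitchfork:

      import numpy as np

      f    = lambda x, lam: lam * x - x ** 3       # pitchfork at lam = 0
      dfdx = lambda x, lam: lam - 3 * x ** 2

      def newton(x, lam, tol=1e-12):
          for _ in range(50):
              step = f(x, lam) / dfdx(x, lam)
              x -= step
              if abs(step) < tol:
                  break
          return x

      x = 0.3                                      # offset onto the branch
      for lam in np.linspace(0.05, 1.0, 20):       # natural-parameter march
          x = newton(x, lam)                       # previous solution predicts
          print(f"lam={lam:5.2f}  x={x:+.6f}")     # follows x = sqrt(lam)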

  14. Passive and Active Vibrations Allow Self-Organization in Large-Scale Electromechanical Systems

    NASA Astrophysics Data System (ADS)

    Buscarino, Arturo; Famoso, Carlo; Fortuna, Luigi; Frasca, Mattia

    2016-06-01

    In this paper, the role of passive and active vibrations in the control of nonlinear large-scale electromechanical systems is investigated. The mathematical model of the system is discussed, and detailed experimental results are shown in order to prove that coupling the effects of feedback and of vibrations elicited by proper control signals makes it possible to regularize imperfect, uncertain large-scale systems.

  15. Synthesis of small and large scale dynamos

    NASA Astrophysics Data System (ADS)

    Subramanian, Kandaswamy

    Using a closure model for the evolution of magnetic correlations, we uncover an interesting plausible saturated state of the small-scale fluctuation dynamo (SSD) and a novel analogy between quantum mechanical tunnelling and the generation of large-scale fields. Large-scale fields develop via the α-effect, but as magnetic helicity can only change on a resistive timescale, the time it takes to organize the field into large scales increases with magnetic Reynolds number. This is very similar to the results obtained from simulations using the full MHD equations.

  16. Structural Optimization for Reliability Using Nonlinear Goal Programming

    NASA Technical Reports Server (NTRS)

    El-Sayed, Mohamed E.

    1999-01-01

    This report details the development of a reliability-based multi-objective design tool for solving structural optimization problems. Based on two different optimization techniques, namely sequential unconstrained minimization and nonlinear goal programming, the developed design method has the capability to take into account the effects of variability on the proposed design through a user-specified reliability design criterion. In its sequential unconstrained minimization mode, the developed design tool uses a composite objective function, in conjunction with weight-ordered design objectives, in order to take into account conflicting and multiple design criteria. Multiple design criteria of interest include structural weight, load-induced stress and deflection, and mechanical reliability. The nonlinear goal programming mode, on the other hand, provides a design method that eliminates the difficulty of having to define an objective function and constraints, while at the same time having the capability of handling rank-ordered design objectives or goals. For simulation purposes, the design of a pressure vessel cover plate was undertaken as a test bed for the newly developed design tool. The formulation of this structural optimization problem in sequential unconstrained minimization and goal programming form is presented. The resulting optimization problem was solved using: (i) the linear extended interior penalty function method algorithm; and (ii) Powell's conjugate directions method. Both single and multi-objective numerical test cases are included, demonstrating the design tool's capabilities as applied to this design problem.
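
    The sequential-unconstrained-minimization mode can be sketched with a two-variable stand-in for the cover-plate problem; here a quadratic exterior penalty (a simpler relative of the report's linear extended interior penalty function) is tightened over a few outer iterations:

      import numpy as np
      from scipy.optimize import minimize

      obj = lambda x: x[0] + 2.0 * x[1]        # "weight" objective (invented)
      g   = lambda x: 1.0 - x[0] * x[1]        # require g(x) <= 0 ("stress")

      x = np.array([2.0, 2.0])
      for r in [1.0, 10.0, 100.0, 1000.0]:     # increasing penalty weight
          phi = lambda x, r=r: obj(x) + r * max(0.0, g(x)) ** 2
          x = minimize(phi, x, method="Nelder-Mead").x
      print(x, obj(x), g(x))                   # g -> 0 from the infeasible side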

  17. Large-scale regions of antimatter

    SciTech Connect

    Grobov, A. V. Rubin, S. G.

    2015-07-15

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  18. Topology optimization for nonlinear dynamic problems: Considerations for automotive crashworthiness

    NASA Astrophysics Data System (ADS)

    Kaushik, Anshul; Ramani, Anand

    2014-04-01

    Crashworthiness of automotive structures is most often engineered after an optimal topology has been arrived at using other design considerations. This study is an attempt to incorporate crashworthiness requirements upfront in the topology synthesis process using a mathematically consistent framework. It proposes the use of equivalent linear systems from the nonlinear dynamic simulation in conjunction with a discrete-material topology optimizer. Velocity and acceleration constraints are consistently incorporated in the optimization set-up. Issues specific to crash problems, such as the explicit solution methodology employed and the nature of the boundary conditions imposed on the structure, are discussed and possible resolutions are proposed. A demonstration of the methodology on two-dimensional problems that address some of the structural requirements and the types of loading typical of frontal and side impact is provided in order to show that this methodology has the potential for topology synthesis incorporating crashworthiness requirements.

  19. A hybrid nonlinear programming method for design optimization

    NASA Technical Reports Server (NTRS)

    Rajan, S. D.

    1986-01-01

    Solutions to engineering design problems formulated as nonlinear programming (NLP) problems usually require the use of more than one optimization technique. Moreover, the interaction between the user (analysis/synthesis) program and the NLP system can lead to interface, scaling, or convergence problems. An NLP solution system is presented that seeks to solve these problems by providing a programming system to ease the user-system interface. A simple set of rules is used to select an optimization technique or to switch from one technique to another in an attempt to detect, diagnose, and solve some potential problems. Numerical examples involving finite element based optimal design of space trusses and rotor bearing systems are used to illustrate the applicability of the proposed methodology.

  20. Spin glasses and nonlinear constraints in portfolio optimization

    NASA Astrophysics Data System (ADS)

    Andrecut, M.

    2014-01-01

    We discuss the portfolio optimization problem with the obligatory deposits constraint. Recently it has been shown that as a consequence of this nonlinear constraint, the solution consists of an exponentially large number of optimal portfolios, completely different from each other, and extremely sensitive to any changes in the input parameters of the problem, making the concept of rational decision making questionable. Here we reformulate the problem using a quadratic obligatory deposits constraint, and we show that from the physics point of view, finding an optimal portfolio amounts to calculating the mean-field magnetizations of a random Ising model with the constraint of a constant magnetization norm. We show that the model reduces to an eigenproblem, with 2N solutions, where N is the number of assets defining the portfolio. Also, in order to illustrate our results, we present a detailed numerical example of a portfolio of several risky common stocks traded on the Nasdaq Market.
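
    Schematically (with random numbers in place of market data), the reduction to an eigenproblem with 2N candidate solutions looks as follows: each of the N eigenvectors of the symmetrized coupling matrix, taken with either sign and scaled to the constraint norm, is a stationary portfolio.

      import numpy as np

      rng = np.random.default_rng(0)
      N = 6
      A = rng.normal(size=(N, N))
      J = 0.5 * (A + A.T)                    # random symmetric "couplings"
      w, V = np.linalg.eigh(J)               # N eigenpairs

      norm = np.sqrt(N)                      # constant-norm constraint
      candidates = [s * norm * V[:, i] for i in range(N) for s in (1, -1)]
      print(len(candidates), max(x @ J @ x for x in candidates))   # 2N portfolios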

  1. Optimal analytic method for the nonlinear Hasegawa-Mima equation

    NASA Astrophysics Data System (ADS)

    Baxter, Mathew; Van Gorder, Robert A.; Vajravelu, Kuppalapalle

    2014-05-01

    The Hasegawa-Mima equation is a nonlinear partial differential equation that describes the electric potential due to a drift wave in a plasma. In the present paper, we apply the method of homotopy analysis to a slightly more general Hasegawa-Mima equation, which accounts for hyper-viscous damping or viscous dissipation. First, we outline the method for the general initial/boundary value problem over a compact rectangular spatial domain. We use a two-stage method, where both the convergence control parameter and the auxiliary linear operator are optimally selected to minimize the residual error due to the approximation. To do the latter, we consider a family of operators parameterized by a constant which gives the decay rate of the solutions. After outlining the general method, we consider a number of concrete examples in order to demonstrate the utility of this approach. The results enable us to study properties of the initial/boundary value problem for the generalized Hasegawa-Mima equation. In several cases considered, we are able to obtain solutions with extremely small residual errors after relatively few iterations are computed (residual errors on the order of 10^-15 are found in multiple cases after only three iterations). The results demonstrate that selecting a parameterized auxiliary linear operator can be extremely useful for minimizing residual errors when used concurrently with the optimal homotopy analysis method, suggesting that this approach can prove useful for a number of nonlinear partial differential equations arising in physics and nonlinear mechanics.

  2. Photorealistic large-scale urban city model reconstruction.

    PubMed

    Poullis, Charalambos; You, Suya

    2009-01-01

    The rapid and efficient creation of virtual environments has become a crucial part of virtual reality applications. In particular, civil and defense applications often require and employ detailed models of operations areas for training, simulations of different scenarios, planning for natural or man-made events, monitoring, surveillance, games, and films. A realistic representation of the large-scale environments is therefore imperative for the success of such applications, since it increases the immersive experience of its users and helps reduce the difference between physical and virtual reality. However, the task of creating such large-scale virtual environments remains time-consuming, manual work. In this work, we propose a novel method for the rapid reconstruction of photorealistic large-scale virtual environments. First, a novel, extendible, parameterized geometric primitive is presented for the automatic building identification and reconstruction of building structures. In addition, buildings with complex roofs containing complex linear and nonlinear surfaces are reconstructed interactively using a linear polygonal and a nonlinear primitive, respectively. Second, we present a rendering pipeline for the composition of photorealistic textures which, unlike existing techniques, can recover missing or occluded texture information by integrating multiple information captured from different optical sensors (ground, aerial, and satellite). PMID:19423889

  3. Fitting Nonlinear Curves by use of Optimization Techniques

    NASA Technical Reports Server (NTRS)

    Hill, Scott A.

    2005-01-01

    MULTIVAR is a FORTRAN 77 computer program that fits one of the members of a set of six multivariable mathematical models (five of which are nonlinear) to a multivariable set of data. The inputs to MULTIVAR include the data for the independent and dependent variables plus the user's choice of one of the models, one of the three optimization engines, and convergence criteria. By use of the chosen optimization engine, MULTIVAR finds values for the parameters of the chosen model so as to minimize the sum of squares of the residuals. One of the optimization engines implements a routine, developed in 1982, that utilizes the Broyden-Fletcher-Goldfarb-Shanno (BFGS) variable-metric method for unconstrained minimization in conjunction with a one-dimensional search technique that finds the minimum of an unconstrained function by polynomial interpolation and extrapolation without first finding bounds on the solution. The second optimization engine is a faster and more robust commercially available code, denoted Design Optimization Tool, that also uses the BFGS method. The third optimization engine is a robust and relatively fast routine that implements the Levenberg-Marquardt algorithm.
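
    A minimal analogue of the MULTIVAR workflow, with an invented exponential model and synthetic data, and SciPy's BFGS routine standing in for the Fortran engines:

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(1)
      x = np.linspace(0.0, 4.0, 40)
      y = 2.0 * np.exp(-0.7 * x) + 0.3 + 0.02 * rng.normal(size=x.size)

      def sse(p):                              # sum of squared residuals
          a, b, c = p
          return np.sum((y - (a * np.exp(b * x) + c)) ** 2)

      fit = minimize(sse, x0=[1.0, -0.5, 0.0], method="BFGS")
      print(fit.x)                             # ~ (2.0, -0.7, 0.3)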

  4. Simulation-based optimal Bayesian experimental design for nonlinear systems

    SciTech Connect

    Huan, Xun; Marzouk, Youssef M.

    2013-01-01

    The optimal selection of experimental conditions is essential to maximizing the value of data for inference and prediction, particularly in situations where experiments are time-consuming and expensive to conduct. We propose a general mathematical framework and an algorithmic approach for optimal experimental design with nonlinear simulation-based models; in particular, we focus on finding sets of experiments that provide the most information about targeted sets of parameters. Our framework employs a Bayesian statistical setting, which provides a foundation for inference from noisy, indirect, and incomplete data, and a natural mechanism for incorporating heterogeneous sources of information. An objective function is constructed from information theoretic measures, reflecting expected information gain from proposed combinations of experiments. Polynomial chaos approximations and a two-stage Monte Carlo sampling method are used to evaluate the expected information gain. Stochastic approximation algorithms are then used to make optimization feasible in computationally intensive and high-dimensional settings. These algorithms are demonstrated on model problems and on nonlinear parameter inference problems arising in detailed combustion kinetics.
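
    The expected-information-gain objective can be estimated with the classic double-loop (nested) Monte Carlo scheme that the paper's polynomial chaos and stochastic approximation machinery is designed to accelerate. The model, prior, and noise level below are illustrative choices only:

      import numpy as np

      rng = np.random.default_rng(2)
      sigma = 0.1                                     # observation noise

      def log_lik(y, theta, d):                       # y = theta**2 * d + noise
          r = (y - theta ** 2 * d) / sigma
          return -0.5 * r ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))

      def eig(d, n_out=500, n_in=500):
          thetas = rng.uniform(0.0, 1.0, n_out)       # outer prior draws
          ys = thetas ** 2 * d + sigma * rng.normal(size=n_out)
          inner = rng.uniform(0.0, 1.0, n_in)         # inner draws for evidence
          log_ev = np.array([np.log(np.mean(np.exp(log_lik(y, inner, d))))
                             for y in ys])
          return np.mean(log_lik(ys, thetas, d) - log_ev)

      for d in [0.1, 1.0, 3.0]:
          print(d, eig(d))              # stronger designs separate the thetas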

  5. Optimal Complexity of Nonlinear Rainfall-Runoff Models

    NASA Astrophysics Data System (ADS)

    Schoups, G.; Vrugt, J.; van de Giesen, N.; Fenicia, F.

    2008-12-01

    Identification of an appropriate level of model complexity to accurately translate rainfall into runoff remains an unresolved issue. The model has to be complex enough to generate accurate predictions, but not too complex such that its parameters cannot be reliably estimated from the data. Earlier work with linear models (Jakeman and Hornberger, 1993) concluded that a model with 4 to 5 parameters is sufficient. However, more recent results with a nonlinear model (Vrugt et al., 2006) suggest that 10 or more parameters may be identified from daily rainfall-runoff time-series. The goal here is to systematically investigate optimal complexity of nonlinear rainfall-runoff models, yielding accurate models with identifiable parameters. Our methodology consists of four steps: (i) a priori specification of a family of model structures from which to pick an optimal one, (ii) parameter optimization of each model structure to estimate empirical or calibration error, (iii) estimation of parameter uncertainty of each calibrated model structure, and (iv) estimation of prediction error of each calibrated model structure. For the first step we formulate a flexible model structure that allows us to systematically vary the complexity with which physical processes are simulated. The second and third steps are achieved using a recently developed Markov chain Monte Carlo algorithm (DREAM), which minimizes calibration error yielding optimal parameter values and their underlying posterior probability density function. Finally, we compare several methods for estimating prediction error of each model structure, including statistical methods based on information criteria and split-sample calibration-validation. Estimates of parameter uncertainty and prediction error are then used to identify optimal complexity for rainfall-runoff modeling, using data from dry and wet MOPEX catchments as case studies.

  6. A forward method for optimal stochastic nonlinear and adaptive control

    NASA Technical Reports Server (NTRS)

    Bayard, David S.

    1988-01-01

    A computational approach is taken to solve the optimal nonlinear stochastic control problem. The approach is to systematically solve the stochastic dynamic programming equations forward in time, using a nested stochastic approximation technique. Although computationally intensive, this provides a straightforward numerical solution for this class of problems and provides an alternative to the usual dimensionality problem associated with solving the dynamic programming equations backward in time. It is shown that the cost degrades monotonically as the complexity of the algorithm is reduced. This provides a strategy for suboptimal control with clear performance/computation tradeoffs. A numerical study focusing on a generic optimal stochastic adaptive control example is included to demonstrate the feasibility of the method.

  7. Survey on large scale system control methods

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1987-01-01

    Problems inherent to large-scale systems such as power networks, communication networks, and economic or ecological systems were studied. The increase in size and flexibility of future spacecraft has put those dynamical systems into the category of large-scale systems, and tools specific to this class of systems are being sought to design control systems that can guarantee more stability and better performance. Among several survey papers, reference was found to a thorough investigation of decentralized control methods. Especially helpful was the classification made of the different existing approaches to dealing with large-scale systems. A very similar classification is used here, even though the papers surveyed are somewhat different from the ones reviewed in other papers. Special attention is given to the applicability of the existing methods to controlling large mechanical systems such as large space structures. Some recent developments are added to this survey.

  8. Nonlinear optimization of acoustic energy harvesting using piezoelectric devices.

    PubMed

    Lallart, Mickaël; Guyomar, Daniel; Richard, Claude; Petit, Lionel

    2010-11-01

    In the first part of the paper, a single degree-of-freedom model of a vibrating membrane with piezoelectric inserts is introduced and is initially applied to the case when a plane wave is incident with frequency close to one of the resonance frequencies. The model is a prototype of a device which converts ambient acoustical energy to electrical energy with the use of piezoelectric devices. The paper then proposes an enhancement of the energy harvesting process using a nonlinear processing of the output voltage of piezoelectric actuators, and suggests that this improves the energy conversion and reduces the sensitivity to frequency drifts. A theoretical discussion is given for the electrical power that can be expected making use of various models. This and supporting experimental results suggest that a nonlinear optimization approach allows a gain of up to 10 in harvested energy and a doubling of the bandwidth. A model is introduced in the latter part of the paper for predicting the behavior of the energy-harvesting device with changes in acoustic frequency, this model taking into account the damping effect and the frequency changes introduced by the nonlinear processes in the device. PMID:21110569

  9. The large-scale distribution of galaxies

    NASA Technical Reports Server (NTRS)

    Geller, Margaret J.

    1989-01-01

    The spatial distribution of galaxies in the universe is characterized on the basis of the six completed strips of the Harvard-Smithsonian Center for Astrophysics redshift-survey extension. The design of the survey is briefly reviewed, and the results are presented graphically. Vast low-density voids similar to the void in Bootes are found, almost completely surrounded by thin sheets of galaxies. Also discussed are the implications of the results for the survey sampling problem, the two-point correlation function of the galaxy distribution, the possibility of detecting large-scale coherent flows, theoretical models of large-scale structure, and the identification of groups and clusters of galaxies.

  10. Moon-based Earth Observation for Large Scale Geoscience Phenomena

    NASA Astrophysics Data System (ADS)

    Guo, Huadong; Liu, Guang; Ding, Yixing

    2016-07-01

    The capability of Earth observation for large-scale natural phenomena needs to be improved, and new observing platforms are expected. We have studied the concept of the Moon as an Earth observation platform in recent years. Compared with man-made satellite platforms, Moon-based Earth observation can obtain multi-spherical, full-band, active and passive information, and it has the following advantages: a large observation range, variable view angles, long-term continuous observation, and an extra-long life cycle, with the characteristics of longevity, consistency, integrity, stability and uniqueness. Moon-based Earth observation is suitable for monitoring large-scale geoscience phenomena, including large-scale atmosphere change, large-scale ocean change, large-scale land surface dynamic change, solid earth dynamic change, etc. For the purpose of establishing a Moon-based Earth observation platform, we plan to study the following five aspects: mechanisms and models of Moon-based observation of macroscopic Earth science phenomena; optimization of sensor parameters and methods for Moon-based Earth observation; site selection and environment of Moon-based Earth observation; the Moon-based Earth observation platform; and the fundamental scientific framework of Moon-based Earth observation.

  11. Solving Large-scale Eigenvalue Problems in SciDACApplications

    SciTech Connect

    Yang, Chao

    2005-06-29

    Large-scale eigenvalue problems arise in a number of DOE applications. This paper provides an overview of the recent development of eigenvalue computation in the context of two SciDAC applications. We emphasize the importance of Krylov subspace methods and point out their limitations. We discuss the value of alternative approaches that are more amenable to the use of preconditioners, and report on progress in using multi-level algebraic sub-structuring techniques to speed up eigenvalue calculation. In addition to methods for linear eigenvalue problems, we also examine new approaches to solving two types of non-linear eigenvalue problems arising from SciDAC applications.

  12. Large Scale Commodity Clusters for Lattice QCD

    SciTech Connect

    A. Pochinsky; W. Akers; R. Brower; J. Chen; P. Dreher; R. Edwards; S. Gottlieb; D. Holmgren; P. Mackenzie; J. Negele; D. Richards; J. Simone; W. Watson

    2002-06-01

    We describe the construction of large scale clusters for lattice QCD computing being developed under the umbrella of the U.S. DoE SciDAC initiative. We discuss the study of floating point and network performance that drove the design of the cluster, and present our plans for future multi-Terascale facilities.

  13. Management of large-scale technology

    NASA Technical Reports Server (NTRS)

    Levine, A.

    1985-01-01

    Two major themes are addressed in this assessment of the management of large-scale NASA programs: (1) how a high-technology agency was managed through a decade marked by a rapid expansion of funds and manpower in the first half and an almost equally rapid contraction in the second; and (2) how NASA combined central planning and control with decentralized project execution.

  14. A Large Scale Computer Terminal Output Controller.

    ERIC Educational Resources Information Center

    Tucker, Paul Thomas

    This paper describes the design and implementation of a large scale computer terminal output controller which supervises the transfer of information from a Control Data 6400 Computer to a PLATO IV data network. It discusses the cost considerations leading to the selection of educational television channels rather than telephone lines for…

  15. Large-scale CFB combustion demonstration project

    SciTech Connect

    Nielsen, P.T.; Hebb, J.L.; Aquino, R.

    1998-07-01

    The Jacksonville Electric Authority's large-scale CFB demonstration project is described. Given the early stage of project development, the paper focuses on the project organizational structure, its role within the Department of Energy's Clean Coal Technology Demonstration Program, and the projected environmental performance. A description of the CFB combustion process is included.

  17. Evaluating Large-Scale Interactive Radio Programmes

    ERIC Educational Resources Information Center

    Potter, Charles; Naidoo, Gordon

    2009-01-01

    This article focuses on the challenges involved in conducting evaluations of interactive radio programmes in South Africa with large numbers of schools, teachers, and learners. It focuses on the role such large-scale evaluation has played during the South African radio learning programme's development stage, as well as during its subsequent…

  18. ARPACK: Solving large scale eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Lehoucq, Rich; Maschhoff, Kristi; Sorensen, Danny; Yang, Chao

    2013-11-01

    ARPACK is a collection of Fortran77 subroutines designed to solve large scale eigenvalue problems. The package is designed to compute a few eigenvalues and corresponding eigenvectors of a general n by n matrix A. It is most appropriate for large sparse or structured matrices A where structured means that a matrix-vector product w
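
    ARPACK is most often reached through wrappers today; SciPy's sparse eigensolvers, for example, call it underneath, needing only matrix-vector products (plus one sparse factorization when shift-invert is requested). A small example with our own choice of matrix:

      import numpy as np
      from scipy.sparse import diags
      from scipy.sparse.linalg import eigsh           # ARPACK under the hood

      n = 10_000
      A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
      vals = eigsh(A, k=4, sigma=0.0, return_eigenvectors=False)   # near zero
      print(np.sort(vals))              # ~ (j * pi / (n + 1))**2, j = 1..4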

  19. Large scale structure in universes dominated by cold dark matter

    NASA Technical Reports Server (NTRS)

    Bond, J. Richard

    1986-01-01

    The theory of Gaussian random density field peaks is applied to a numerical study of the large-scale structure developing from adiabatic fluctuations in models of biased galaxy formation in universes with Omega = 1, h = 0.5 dominated by cold dark matter (CDM). The angular anisotropy of the cross-correlation function demonstrates that the far-field regions of cluster-scale peaks are asymmetric, as recent observations indicate. These regions will generate pancakes or filaments upon collapse. One-dimensional singularities in the large-scale bulk flow should arise in these CDM models, appearing as pancakes in position space. They are too rare to explain the CfA bubble walls, but pancakes that are just turning around now are sufficiently abundant and would appear to be thin walls normal to the line of sight in redshift space. Large scale streaming velocities are significantly smaller than recent observations indicate. To explain the reported 700 km/s coherent motions, mass must be significantly more clustered than galaxies with a biasing factor of less than 0.4 and a nonlinear redshift at cluster scales greater than one for both massive neutrino and cold models.

  20. Nonlinear Burn Control and Operating Point Optimization in ITER

    NASA Astrophysics Data System (ADS)

    Boyer, Mark; Schuster, Eugenio

    2013-10-01

    Control of the fusion power through regulation of the plasma density and temperature will be essential for achieving and maintaining desired operating points in fusion reactors and burning plasma experiments like ITER. In this work, a volume averaged model for the evolution of the density of energy, deuterium and tritium fuel ions, alpha-particles, and impurity ions is used to synthesize a multi-input multi-output nonlinear feedback controller for stabilizing and modulating the burn condition. Adaptive control techniques are used to account for uncertainty in model parameters, including particle confinement times and recycling rates. The control approach makes use of the different possible methods for altering the fusion power, including adjusting the temperature through auxiliary heating, modulating the density and isotopic mix through fueling, and altering the impurity density through impurity injection. Furthermore, a model-based optimization scheme is proposed to drive the system as close as possible to desired fusion power and temperature references. Constraints are considered in the optimization scheme to ensure that, for example, density and beta limits are avoided, and that optimal operation is achieved even when actuators reach saturation. Supported by the NSF CAREER award program (ECCS-0645086).

  1. Optimal spatiotemporal reduced order modeling for nonlinear dynamical systems

    NASA Astrophysics Data System (ADS)

    LaBryer, Allen

    Proposed in this dissertation is a novel reduced order modeling (ROM) framework called optimal spatiotemporal reduced order modeling (OPSTROM) for nonlinear dynamical systems. The OPSTROM approach is a data-driven methodology for the synthesis of multiscale reduced order models (ROMs) which can be used to enhance the efficiency and reliability of under-resolved simulations for nonlinear dynamical systems. In the context of nonlinear continuum dynamics, the OPSTROM approach relies on the concept of embedding subgrid-scale models into the governing equations in order to account for the effects due to unresolved spatial and temporal scales. Traditional ROMs neglect these effects, whereas most other multiscale ROMs account for these effects in ways that are inconsistent with the underlying spatiotemporal statistical structure of the nonlinear dynamical system. The OPSTROM framework presented in this dissertation begins with a general system of partial differential equations, which are modified for an under-resolved simulation in space and time with an arbitrary discretization scheme. Basic filtering concepts are used to demonstrate the manner in which residual terms, representing subgrid-scale dynamics, arise with a coarse computational grid. Models for these residual terms are then developed by accounting for the underlying spatiotemporal statistical structure in a consistent manner. These subgrid-scale models are designed to provide closure by accounting for the dynamic interactions between spatiotemporal macroscales and microscales which are otherwise neglected in a ROM. For a given resolution, the predictions obtained with the modified system of equations are optimal (in a mean-square sense) as the subgrid-scale models are based upon principles of mean-square error minimization, conditional expectations and stochastic estimation. Methods are suggested for efficient model construction, appraisal, error measure, and implementation with a couple of well-known time

  2. Optimization of microscopic and macroscopic second order optical nonlinearities

    NASA Technical Reports Server (NTRS)

    Marder, Seth R.; Perry, Joseph W.

    1993-01-01

    Nonlinear optical materials (NLO) can be used to extend the useful frequency range of lasers. Frequency generation is important for laser-based remote sensing and optical data storage. Another NLO effect, the electro-optic effect, can be used to modulate the amplitude, phase, or polarization state of an optical beam. Applications of this effect in telecommunications and in integrated optics include the impression of information on an optical carrier signal or routing of optical signals between fiber optic channels. In order to utilize these effects most effectively, it is necessary to synthesize materials which respond to applied fields very efficiently. In this talk, it will be shown how the development of a fundamental understanding of the science of nonlinear optics can lead to a rational approach to organic molecules and materials with optimized properties. In some cases, figures of merit for newly developed materials are more than an order of magnitude higher than those of currently employed materials. Some of these materials are being examined for phased-array radar and other electro-optic switching applications.

  3. Fractals and cosmological large-scale structure

    NASA Technical Reports Server (NTRS)

    Luo, Xiaochun; Schramm, David N.

    1992-01-01

    Observations of galaxy-galaxy and cluster-cluster correlations as well as other large-scale structure can be fit with a 'limited' fractal with dimension D of about 1.2. This is not a 'pure' fractal out to the horizon: the distribution shifts from power law to random behavior at some large scale. If the observed patterns and structures are formed through an aggregation growth process, the fractal dimension D can serve as an interesting constraint on the properties of the stochastic motion responsible for limiting the fractal structure. In particular, it is found that the observed fractal should have grown from two-dimensional sheetlike objects such as pancakes, domain walls, or string wakes. This result is generic and does not depend on the details of the growth process.

  4. Condition Monitoring of Large-Scale Facilities

    NASA Technical Reports Server (NTRS)

    Hall, David L.

    1999-01-01

    This document provides a summary of the research conducted for the NASA Ames Research Center under grant NAG2-1182 (Condition-Based Monitoring of Large-Scale Facilities). The information includes copies of view graphs presented at NASA Ames in the final Workshop (held during December of 1998), as well as a copy of a technical report provided to the COTR (Dr. Anne Patterson-Hine) subsequent to the workshop. The material describes the experimental design, collection of data, and analysis results associated with monitoring the health of large-scale facilities. In addition to this material, a copy of the Pennsylvania State University Applied Research Laboratory data fusion visual programming tool kit was also provided to NASA Ames researchers.

  5. Nonlinearly-constrained optimization using asynchronous parallel generating set search.

    SciTech Connect

    Griffin, Joshua D.; Kolda, Tamara Gibson

    2007-05-01

    Many optimization problems in computational science and engineering (CS&E) are characterized by expensive objective and/or constraint function evaluations paired with a lack of derivative information. Direct search methods such as generating set search (GSS) are well understood and efficient for derivative-free optimization of unconstrained and linearly-constrained problems. This paper addresses the more difficult problem of general nonlinear programming where derivatives for objective or constraint functions are unavailable, which is the case for many CS&E applications. We focus on penalty methods that use GSS to solve the linearly-constrained problems, comparing different penalty functions. A classical choice for penalizing constraint violations is ℓ₂², the squared ℓ₂ norm, which has advantages for derivative-based optimization methods. In our numerical tests, however, we show that exact penalty functions based on the ℓ₁, ℓ₂, and ℓ∞ norms converge to good approximate solutions more quickly and thus are attractive alternatives. Unfortunately, exact penalty functions are nondifferentiable and consequently introduce theoretical problems that degrade the final solution accuracy, so we also consider smoothed variants. Smoothed-exact penalty functions are theoretically attractive because they retain the differentiability of the original problem. Numerically, they are a compromise between exact and ℓ₂², i.e., they converge to a good solution somewhat quickly without sacrificing much solution accuracy. Moreover, the smoothing is parameterized and can potentially be adjusted to balance the two considerations. Since many CS&E optimization problems are characterized by expensive function evaluations, reducing the number of function evaluations is paramount, and the results of this paper show that exact and smoothed-exact penalty functions are well-suited to this task.
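
    The flavor of the comparison is easy to reproduce: below, a compass (generating-set) search is applied to an exact ℓ₁ penalty and to a smooth squared ℓ₂ penalty on the same toy equality-constrained problem. The example is ours, not from the paper's test set; note how the exact penalty lands on the constraint while the smooth one retains a small violation at finite penalty weight.

      import numpy as np

      f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 1.0) ** 2
      c = lambda x: x[0] + x[1] - 1.0                 # equality constraint

      def compass_search(phi, x, step=0.5, tol=1e-6):
          dirs = [np.array(d) for d in ((1, 0), (-1, 0), (0, 1), (0, -1))]
          while step > tol:
              trial = min((x + step * d for d in dirs), key=phi)
              if phi(trial) < phi(x):
                  x = trial                           # accept improving poll
              else:
                  step /= 2.0                         # contract on failure
          return x

      rho = 50.0
      for name, phi in [("l1 exact  ", lambda x: f(x) + rho * abs(c(x))),
                        ("l2 squared", lambda x: f(x) + rho * c(x) ** 2)]:
          x = compass_search(phi, np.zeros(2))
          print(name, x, "violation:", c(x))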

  6. Slow, large scales from fast, small ones in dispersive wave turbulence

    NASA Astrophysics Data System (ADS)

    Smith, Leslie; Waleffe, Fabian

    2000-11-01

    Dispersive wave turbulence in systems of geophysical interest (beta-plane, rotating, stratified and rotating-stratified flows) has been simulated with random, isotropic small scale forcing and hyper-viscosity. This can be thought of as a Langevin model of the small space-time scales only with potential implications for climate modeling. In all cases, slow, coherent large scales are generated after long times of 2nd order in the nonlinear time scale. These slow, large scales ultimately dominate the flows. Beta-plane and rotating flow results were reported earlier [PoF 11, 1608]. In stratified flows, the energy accumulates in a 1D vertically sheared flow at selected large scales. As the rotation rate is increased, a progressive transition toward generation of all large scale vortical zero modes (quasi-geostrophic 3D flow) is observed. For yet higher rotation rate, energy accumulates primarily in a 2D quasi-geostrophic flow (cyclonic vortices) at all large scales.

  7. Large scale processes in the solar nebula.

    NASA Astrophysics Data System (ADS)

    Boss, A. P.

    Most proposed chondrule formation mechanisms involve processes occurring inside the solar nebula, so the large scale (roughly 1 to 10 AU) structure of the nebula is of general interest for any chondrule-forming mechanism. Chondrules and Ca, Al-rich inclusions (CAIs) might also have been formed as a direct result of the large scale structure of the nebula, such as passage of material through high temperature regions. While recent nebula models do predict the existence of relatively hot regions, the maximum temperatures in the inner planet region may not be high enough to account for chondrule or CAI thermal processing, unless the disk mass is considerably greater than the minimum mass necessary to restore the planets to solar composition. Furthermore, it does not seem to be possible to achieve both rapid heating and rapid cooling of grain assemblages in such a large scale furnace. However, if the accretion flow onto the nebula surface is clumpy, as suggested by observations of variability in young stars, then clump-disk impacts might be energetic enough to launch shock waves which could propagate through the nebula to the midplane, thermally processing any grain aggregates they encounter, and leaving behind a trail of chondrules.

  8. Large-scale extraction of proteins.

    PubMed

    Cunha, Teresa; Aires-Barros, Raquel

    2002-01-01

    The production of foreign proteins using selected hosts with the necessary posttranslational modifications is one of the key successes in modern biotechnology. This methodology allows the industrial production of proteins that otherwise are produced only in small quantities. However, the separation and purification of these proteins from the fermentation media constitutes a major bottleneck for the widespread commercialization of recombinant proteins. The major production costs (50-90%) for a typical biological product reside in the purification strategy. There is a need for efficient, effective, and economic large-scale bioseparation techniques to achieve high purity and high recovery while maintaining the biological activity of the molecule. Aqueous two-phase systems (ATPS) allow process integration, as separation and concentration of the target protein are achieved simultaneously, with subsequent removal and recycling of the polymer. The ease of scale-up combined with the high partition coefficients obtained allows their potential application in large-scale downstream processing of proteins produced by fermentation. The equipment and the methodology for aqueous two-phase extraction of proteins on a large scale using mixer-settler and column contactors are described. The operation of the columns, either stagewise or differential, is summarized. A brief description of the methods used to account for mass transfer coefficients, hydrodynamic parameters of hold-up, drop size, and velocity, back mixing in the phases, and flooding performance, required for column design, is also provided. PMID:11876297

  9. Automatic Construction of Predictive Neuron Models through Large Scale Assimilation of Electrophysiological Data.

    PubMed

    Nogaret, Alain; Meliza, C Daniel; Margoliash, Daniel; Abarbanel, Henry D I

    2016-01-01

    We report on the construction of neuron models by assimilating electrophysiological data with large-scale constrained nonlinear optimization. The method implements interior point line parameter search to determine parameters from the responses to intracellular current injections of zebra finch HVC neurons. We incorporated these parameters into a nine ionic channel conductance model to obtain completed models which we then use to predict the state of the neuron under arbitrary current stimulation. Each model was validated by successfully predicting the dynamics of the membrane potential induced by 20-50 different current protocols. The dispersion of parameters extracted from different assimilation windows was studied. Differences in constraints from current protocols, stochastic variability in neuron output, and noise behave as a residual temperature which broadens the global minimum of the objective function to an ellipsoid domain whose principal axes follow an exponentially decaying distribution. The maximum likelihood expectation of extracted parameters was found to provide an excellent approximation of the global minimum and yields highly consistent kinetics for both neurons studied. Large scale assimilation absorbs the intrinsic variability of electrophysiological data over wide assimilation windows. It builds models in an automatic manner treating all data as equal quantities and requiring minimal additional insight. PMID:27605157
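
    A drastically reduced analogue of the assimilation problem, fitting a passive membrane rather than a nine-channel conductance model, and using SciPy's bounded L-BFGS-B in place of an interior-point code, is sketched below on synthetic data:

      import numpy as np
      from scipy.optimize import minimize

      dt, T = 0.1, 500                               # ms per step, steps
      t = np.arange(T) * dt
      I = np.where(t % 100 < 50, 0.5, 0.0)           # injected current protocol

      def simulate(p):                               # C dV/dt = -g (V - E) + I
          g, E, C = p
          V = np.empty(T); V[0] = E
          for k in range(T - 1):
              V[k + 1] = V[k] + dt * (-g * (V[k] - E) + I[k]) / C
          return V

      rng = np.random.default_rng(3)
      data = simulate([0.1, -65.0, 1.0]) + 0.2 * rng.normal(size=T)

      loss = lambda p: np.mean((simulate(p) - data) ** 2)
      fit = minimize(loss, x0=[0.05, -60.0, 0.5], method="L-BFGS-B",
                     bounds=[(0.01, 1.0), (-80.0, -50.0), (0.1, 10.0)])
      print(fit.x)                                   # ~ (0.1, -65.0, 1.0)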

  11. A Nonlinear Fuel Optimal Reaction Jet Control Law

    SciTech Connect

    Breitfeller, E.; Ng, L.C.

    2002-06-30

    We derive a nonlinear fuel optimal attitude control system (ACS) that drives the final state to the desired state according to a cost function that weights the final state angular error relative to the angular rate error. Control is achieved by allowing the pulse-width-modulated (PWM) commands to begin and end anywhere within a control cycle, achieving a pulse width pulse time (PWPT) control. We show through a MATLAB® Simulink model that this steady-state condition may be accomplished, in the absence of sensor noise or model uncertainties, with the theoretical minimum number of actuator cycles. The ability to analytically achieve near-zero drift rates is particularly important in applications such as station-keeping and sensor imaging. Consideration is also given to the fact that, for relatively small sensor and model errors, the controller requires significantly fewer actuator cycles to reach the final state error than a traditional proportional-integral-derivative (PID) controller. The optimal PWPT attitude controller may be applicable for a high performance kinetic energy kill vehicle.

  12. Large Scale Bacterial Colony Screening of Diversified FRET Biosensors

    PubMed Central

    Litzlbauer, Julia; Schifferer, Martina; Ng, David; Fabritius, Arne; Thestrup, Thomas; Griesbeck, Oliver

    2015-01-01

    Biosensors based on Förster Resonance Energy Transfer (FRET) between fluorescent protein mutants have started to revolutionize physiology and biochemistry. However, many types of FRET biosensors show relatively small FRET changes, making measurements with these probes challenging when used under sub-optimal experimental conditions. Thus, a major effort in the field currently lies in designing new optimization strategies for these types of sensors. Here we describe procedures for optimizing FRET changes by large scale screening of mutant biosensor libraries in bacterial colonies. We describe optimization of biosensor expression, permeabilization of bacteria, software tools for analysis, and screening conditions. The procedures reported here may help in improving FRET changes in multiple suitable classes of biosensors. PMID:26061878

  13. Dispersion optimization of nonlinear glass photonic crystal fibers and impact of fabrication tolerances on their telecom nonlinear applications performance

    NASA Astrophysics Data System (ADS)

    Kanka, Jiri

    2009-05-01

    For most telecom nonlinear applications a high effective nonlinearity, low group velocity dispersion with a low dispersion slope and a short fibre length are the key parameters. Combining photonic crystal fibre (PCF) technology with highly nonlinear glasses could meet these requirements very well. We have performed dispersion optimization of PCFs made from selected nonlinear glasses with a solid core and a small number of hexagonally arrayed air holes. The optimization procedure employs the Nelder-Mead downhill simplex algorithm. For the modal analysis of the photonic crystal fibre structure a fully-vectorial mode solver based on the finite element method is used. We have obtained two types of dispersion optimized nonlinear PCF designs: PCFs of the first type are single-mode and highly nonlinear with a small and flattened dispersion in the 1500-1600 nm range. These PCF structures have air holes hexagonally arrayed in 3 to 5 rings; however, their dispersion characteristics are very sensitive to variations in structural parameters. PCFs of the second type are two-ring PCFs with larger multi-mode cores. They have the fundamental mode's zero dispersion wavelength around 1550 nm with non-zero moderate dispersion slopes which are less sensitive to structural variation. This alternative PCF design is expected to be easier to fabricate. The effects of fabrication imprecision on the dispersion characteristics for both PCF designs are demonstrated numerically and discussed in the context of nonlinear telecom applications.
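
    The outer design loop can be sketched by swapping the finite element mode solver for a made-up dispersion surrogate and letting the Nelder-Mead simplex flatten it over the 1500-1600 nm band; every coefficient in the surrogate is invented, so only the structure of the loop carries over:

      import numpy as np
      from scipy.optimize import minimize

      lam = np.linspace(1.50, 1.60, 11)               # wavelength grid, um

      def dispersion(p):                              # surrogate, not physics
          pitch, dhole = p
          return (40.0 * (lam - pitch)
                  + 300.0 * (dhole - 0.4) * (lam - 1.55) ** 2
                  - 5.0 * (dhole - 0.4))

      flatness = lambda p: np.sum(dispersion(p) ** 2) # small and flat D
      res = minimize(flatness, x0=[1.4, 0.5], method="Nelder-Mead")
      print(res.x, np.max(np.abs(dispersion(res.x))))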

  14. Reliability assessment for components of large scale photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Ahadi, Amir; Ghadimi, Noradin; Mirabbasi, Davar

    2014-10-01

    Photovoltaic (PV) systems have significantly shifted from independent power generation systems to large-scale grid-connected generation systems in recent years. The power output of PV systems is affected by the reliability of various components in the system. This study proposes an analytical approach to evaluate the reliability of large-scale, grid-connected PV systems. The fault tree method with an exponential probability distribution function is used to analyze the components of large-scale PV systems. The system is considered in the various sequential and parallel fault combinations in order to find all realistic ways in which the top or undesired events can occur. Additionally, it can identify areas that planned maintenance should focus on. By monitoring the critical components of a PV system, it is possible not only to improve the reliability of the system, but also to optimize the maintenance costs. The latter is achieved by informing the operators about the status of the system's components. This approach, with its flexibility in monitoring applications, can be used to help ensure secure operation of the system.
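
    The exponential building block of such an analysis is compact: component reliability is R(t) = exp(-λt), series elements multiply, and a redundant pair fails only if both members fail. A roll-up with purely illustrative failure rates:

      import numpy as np

      t = 8760.0                                   # one year of hours
      rates = {"panel_string": 5e-6, "inverter": 4e-5, "transformer": 1e-5}
      R = {k: np.exp(-lam * t) for k, lam in rates.items()}

      R_inv_pair = 1.0 - (1.0 - R["inverter"]) ** 2     # parallel redundancy
      R_system = R["panel_string"] * R_inv_pair * R["transformer"]
      print({k: round(v, 4) for k, v in R.items()}, round(R_system, 4))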

  15. Lateral stirring of large-scale tracer fields by altimetry

    NASA Astrophysics Data System (ADS)

    Dencausse, Guillaume; Morrow, Rosemary; Rogé, Marine; Fleury, Sara

    2014-01-01

    Ocean surface fronts and filaments have a strong impact on the global ocean circulation and biogeochemistry. Surface Lagrangian advection with time-evolving altimetric geostrophic velocities can be used to simulate the submesoscale front and filament structures in large-scale tracer fields. We study this technique in the Southern Ocean region south of Tasmania, a domain marked by strong meso- to submesoscale features such as the fronts of the Antarctic Circumpolar Current (ACC). Starting with large-scale surface tracer fields that we stir with altimetric velocities, we determine `advected' fields which compare well with high-resolution in situ or satellite tracer data. We find that fine scales are best represented in a statistical sense after an optimal advection time of ˜2 weeks, with enhanced signatures of the ACC fronts and better spectral energy. The technique works best in moderate to high EKE regions where lateral advection dominates. This technique may be used to infer the distribution of unresolved small scales in any physical or biogeochemical surface tracer that is dominated by lateral advection. Submesoscale dynamics also impact the subsurface of the ocean, and the Lagrangian advection at depth shows promising results. Finally, we show that climatological tracer fields computed from the advected large-scale fields display improved fine-scale mean features, such as the ACC fronts, which can be useful in the context of ocean modelling.

  16. Large-Scale PV Integration Study

    SciTech Connect

    Lu, Shuai; Etingov, Pavel V.; Diao, Ruisheng; Ma, Jian; Samaan, Nader A.; Makarov, Yuri V.; Guo, Xinxin; Hafen, Ryan P.; Jin, Chunlian; Kirkham, Harold; Shlatz, Eugene; Frantzis, Lisa; McClive, Timothy; Karlson, Gregory; Acharya, Dhruv; Ellis, Abraham; Stein, Joshua; Hansen, Clifford; Chadliev, Vladimir; Smart, Michael; Salgo, Richard; Sorensen, Rahn; Allen, Barbara; Idelchik, Boris

    2011-07-29

    This research effort evaluates the impact of large-scale photovoltaic (PV) and distributed generation (DG) output on NV Energy’s electric grid system in southern Nevada. It analyzes the ability of NV Energy’s generation to accommodate increasing amounts of utility-scale PV and DG, and the resulting cost of integrating variable renewable resources. The study was jointly funded by the United States Department of Energy and NV Energy, and conducted by a project team comprising industry experts and research scientists from Navigant Consulting Inc., Sandia National Laboratories, Pacific Northwest National Laboratory and NV Energy.

  17. Large-scale planar lightwave circuits

    NASA Astrophysics Data System (ADS)

    Bidnyk, Serge; Zhang, Hua; Pearson, Matt; Balakrishnan, Ashok

    2011-01-01

    By leveraging advanced wafer processing and flip-chip bonding techniques, we have succeeded in hybrid integrating a myriad of active optical components, including photodetectors and laser diodes, with our planar lightwave circuit (PLC) platform. We have combined hybrid integration of active components with monolithic integration of other critical functions, such as diffraction gratings, on-chip mirrors, mode-converters, and thermo-optic elements. Further process development has led to the integration of polarization controlling functionality. Most recently, all these technological advancements have been combined to create large-scale planar lightwave circuits that comprise hundreds of optical elements integrated on chips less than a square inch in size.

  18. Neutrinos and large-scale structure

    SciTech Connect

    Eisenstein, Daniel J.

    2015-07-15

    I review the use of cosmological large-scale structure to measure properties of neutrinos and other relic populations of light relativistic particles. With experiments to measure the anisotropies of the cosmic microwave background and the clustering of matter at low redshift, we have now securely measured a relativistic background with density appropriate to the cosmic neutrino background. Our limits on the mass of the neutrino continue to shrink. Experiments coming in the next decade will greatly improve the available precision on searches for the energy density of novel relativistic backgrounds and the mass of neutrinos.

  19. Experimental Simulations of Large-Scale Collisions

    NASA Technical Reports Server (NTRS)

    Housen, Kevin R.

    2002-01-01

    This report summarizes research on the effects of target porosity on the mechanics of impact cratering. Impact experiments conducted on a centrifuge provide direct simulations of large-scale cratering on porous asteroids. The experiments show that large craters in porous materials form mostly by compaction, with essentially no deposition of material into the ejecta blanket that is a signature of cratering in less-porous materials. The ratio of ejecta mass to crater mass is shown to decrease with increasing crater size or target porosity. These results are consistent with the observation that large closely-packed craters on asteroid Mathilde appear to have formed without degradation to earlier craters.

  20. Nonthermal Components in the Large Scale Structure

    NASA Astrophysics Data System (ADS)

    Miniati, Francesco

    2004-12-01

    I address the issue of nonthermal processes in the large scale structure of the universe. After reviewing the properties of cosmic shocks and their role as particle accelerators, I discuss the main observational results, from radio to γ-ray, and describe the processes that are thought to be responsible for the observed nonthermal emissions. Finally, I emphasize the important role of γ-ray astronomy for progress in the field. Non-detections at these photon energies have already allowed us to draw important conclusions. Future observations will tell us more about the physics of the intracluster medium, shock dissipation and CR acceleration.

  1. Large scale phononic metamaterials for seismic isolation

    SciTech Connect

    Aravantinos-Zafiris, N.; Sigalas, M. M.

    2015-08-14

    In this work, we numerically examine structures that could be characterized as large scale phononic metamaterials. These novel structures could have band gaps in the frequency spectrum of seismic waves when their dimensions are chosen appropriately, suggesting that they could be serious candidates for seismic isolation structures. Different, easy-to-fabricate structures made from construction materials such as concrete and steel were examined. The well-known finite difference time domain method is used to calculate the band structures of the proposed metamaterials.

  2. Discrete-time neural inverse optimal control for nonlinear systems via passivation.

    PubMed

    Ornelas-Tellez, Fernando; Sanchez, Edgar N; Loukianov, Alexander G

    2012-08-01

    This paper presents a discrete-time inverse optimal neural controller, which combines two techniques: 1) inverse optimal control, to avoid solving the Hamilton-Jacobi-Bellman equation associated with nonlinear system optimal control, and 2) on-line neural identification, using a recurrent neural network trained with an extended Kalman filter, in order to build a model of the assumed unknown nonlinear system. The inverse optimal controller is based on passivity theory. The applicability of the proposed approach is illustrated via simulations for an unstable nonlinear system and a planar robot. PMID:24807528

  3. Haar wavelet operational matrix method for solving constrained nonlinear quadratic optimal control problem

    NASA Astrophysics Data System (ADS)

    Swaidan, Waleeda; Hussin, Amran

    2015-10-01

    Most direct methods solve finite time horizon optimal control problems with a nonlinear programming solver. In this paper, we propose a numerical method for solving nonlinear optimal control problems with state and control inequality constraints. The method uses a quasilinearization technique and the Haar wavelet operational matrix to convert the nonlinear optimal control problem into a quadratic programming problem. The linear inequality constraints on the trajectory variables are converted to quadratic programming constraints using the Haar wavelet collocation method. The proposed method has been applied to the optimal control of a multi-item inventory model. The accuracy of the states, controls and cost can be improved by increasing the Haar wavelet resolution.
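
    The machinery can be made concrete with a short sketch: build the Haar basis at collocation points, derive the operational matrix of integration P (numerically here; analytic formulas exist in the literature), and check it against a known integral. The resolution and the test function are illustrative.

      # Haar basis, operational matrix of integration, and a sanity check.
      import numpy as np

      m = 16                                   # number of Haar functions
      t = (np.arange(m) + 0.5) / m             # collocation points on [0, 1)

      def haar(i, t):
          if i == 0:
              return np.ones_like(t)
          j = int(np.floor(np.log2(i)))        # scale
          k = i - 2**j                         # shift
          lo, mid, hi = k / 2**j, (k + 0.5) / 2**j, (k + 1) / 2**j
          return np.where((t >= lo) & (t < mid), 1.0,
                 np.where((t >= mid) & (t < hi), -1.0, 0.0))

      H = np.array([haar(i, t) for i in range(m)])     # m x m Haar matrix

      # Cumulative integrals of each Haar function at the collocation
      # points, computed on a fine grid; then P solves IntH = P @ H.
      fine = np.linspace(0.0, 1.0, 4097)
      IntH = np.array([np.interp(t, fine,
                       np.cumsum(haar(i, fine)) * (fine[1] - fine[0]))
                       for i in range(m)])
      P = IntH @ np.linalg.inv(H)

      # Check on x(t) = cos(pi t), whose integral is sin(pi t)/pi.
      c = np.linalg.solve(H.T, np.cos(np.pi * t))      # Haar coefficients
      err = c @ P @ H - np.sin(np.pi * t) / np.pi
      print(np.max(np.abs(err)))                       # small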

  4. Aristos Optimization Package

    Energy Science and Technology Software Center (ESTSC)

    2007-03-01

    Aristos is a Trilinos package for nonlinear continuous optimization, based on full-space sequential quadratic programming (SQP) methods. Aristos is specifically designed for the solution of large-scale constrained optimization problems in which the linearized constraint equations require iterative (i.e. inexact) linear solver techniques. Aristos' unique feature is an efficient handling of inexactness in linear system solves. Aristos currently supports the solution of equality-constrained convex and nonconvex optimization problems. It has been used successfully in the area of PDE-constrained optimization, for the solution of nonlinear optimal control, optimal design, and inverse problems.

  5. Large-scale Globally Propagating Coronal Waves

    NASA Astrophysics Data System (ADS)

    Warmuth, Alexander

    2015-09-01

    Large-scale, globally propagating wave-like disturbances have been observed in the solar chromosphere and by inference in the corona since the 1960s. However, detailed analysis of these phenomena has only been conducted since the late 1990s. This was prompted by the availability of high-cadence coronal imaging data from numerous space-based instruments, which routinely show spectacular globally propagating bright fronts. Coronal waves, as these perturbations are usually referred to, have now been observed in a wide range of spectral channels, yielding a wealth of information. Many findings have supported the "classical" interpretation of the disturbances: fast-mode MHD waves or shocks that are propagating in the solar corona. However, observations that seemed inconsistent with this picture have stimulated the development of alternative models in which "pseudo waves" are generated by magnetic reconfiguration in the framework of an expanding coronal mass ejection. This has resulted in a vigorous debate on the physical nature of these disturbances. This review focuses on demonstrating how the numerous observational findings of the last one and a half decades can be used to constrain our models of large-scale coronal waves, and how a coherent physical understanding of these disturbances is finally emerging.

  6. Local gravity and large-scale structure

    NASA Technical Reports Server (NTRS)

    Juszkiewicz, Roman; Vittorio, Nicola; Wyse, Rosemary F. G.

    1990-01-01

    The magnitude and direction of the observed dipole anisotropy of the galaxy distribution can in principle constrain the amount of large-scale power present in the spectrum of primordial density fluctuations. This paper confronts the data, provided by a recent redshift survey of galaxies detected by the IRAS satellite, with the predictions of two cosmological models with very different levels of large-scale power: the biased Cold Dark Matter dominated model (CDM) and a baryon-dominated model (BDM) with isocurvature initial conditions. Model predictions are investigated for the Local Group peculiar velocity, v(R), induced by mass inhomogeneities distributed out to a given radius, R, for R less than about 10,000 km/s. Several convergence measures for v(R) are developed, which can become powerful cosmological tests when deep enough samples become available. For the present data sets, the CDM and BDM predictions are indistinguishable at the 2 sigma level and both are consistent with observations. A promising discriminant between cosmological models is the misalignment angle between v(R) and the apex of the dipole anisotropy of the microwave background.

  7. Optimal packaging of dispersion-compensating fibers for matched nonlinear compensation and reduced optical noise.

    PubMed

    Wei, Haiqing; Plant, David V

    2005-09-15

    A method of packaging dispersion-compensating fibers (DCFs) is discussed that achieves optimal nonlinearity compensation and a good signal-to-noise ratio simultaneously. An optimally packaged dispersion-compensating module (DCM) may consist of portions of DCFs with higher and lower loss coefficients. Such optimized DCMs may be paired with transmission fibers to form scaled translation-symmetric lines that could effectively compensate for signal distortions due to dispersion and nonlinearity, with or without optical phase conjugation. PMID:16196322

  8. System design optimization for a Mars-roving vehicle and perturbed-optimal solutions in nonlinear programming

    NASA Technical Reports Server (NTRS)

    Pavarini, C.

    1974-01-01

    Work in two somewhat distinct areas is presented. First, the optimal system design problem for a Mars-roving vehicle is attacked by creating static system models and a system evaluation function and optimizing via nonlinear programming techniques. The second area concerns the problem of perturbed-optimal solutions. Given an initial perturbation in an element of the solution to a nonlinear programming problem, a linear method is determined to approximate the optimal readjustments of the other elements of the solution. Then, the sensitivity of the Mars rover designs is described by application of this method.

  9. Symposium on Parallel Computational Methods for Large-scale Structural Analysis and Design, 2nd, Norfolk, VA, US

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O. (Editor); Housner, Jerrold M. (Editor)

    1993-01-01

    Computing speed is leaping forward by several orders of magnitude each decade. Engineers and scientists gathered at a NASA Langley symposium to discuss these exciting trends as they apply to parallel computational methods for large-scale structural analysis and design. Among the topics discussed were: large-scale static analysis; dynamic, transient, and thermal analysis; domain decomposition (substructuring); and nonlinear and numerical methods.

  10. Large scale reconstruction of the solar coronal magnetic field

    NASA Astrophysics Data System (ADS)

    Amari, T.; Aly, J.-J.; Chopin, P.; Canou, A.; Mikic, Z.

    2014-10-01

    It is now becoming necessary to access the global magnetic structure of the solar low corona at large scale in order to understand its physics, and more particularly the conditions of energization of the magnetic fields and the multiple connections between distant active regions (ARs) which may trigger eruptive events in an almost coordinated way. Various vector magnetographs, either on board spacecraft or ground-based, currently make it possible to obtain vector synoptic maps, composite magnetograms made of multiple interacting ARs, and full disk magnetograms. We present a method recently developed for reconstructing the global solar coronal magnetic field as a nonlinear force-free magnetic field in spherical geometry, generalizing our previous results in Cartesian geometry. This method is implemented in the new code XTRAPOLS, which thus extends our active-region-scale code XTRAPOL. We apply our method by performing a reconstruction at a specific time for which we have a set of composite data consisting of a vector magnetogram provided by SDO/HMI, embedded in a larger full disk vector magnetogram provided by the same instrument, itself embedded in a synoptic map provided by SOLIS. It turns out to be possible to access the large scale structure of the corona and its energetic contents, and also the AR scale, at which we recover the presence of a twisted flux rope in equilibrium.

  11. Simulating the large-scale structure of HI intensity maps

    NASA Astrophysics Data System (ADS)

    Seehars, Sebastian; Paranjape, Aseem; Witzemann, Amadeus; Refregier, Alexandre; Amara, Adam; Akeret, Joel

    2016-03-01

    Intensity mapping of neutral hydrogen (HI) is a promising observational probe of cosmology and large-scale structure. We present wide field simulations of HI intensity maps based on N-body simulations of a 2.6 Gpc/h box with 2048^3 particles (particle mass 1.6 × 10^11 Msolar/h). Using a conditional mass function to populate the simulated dark matter density field with halos below the mass resolution of the simulation (10^8 Msolar/h < Mhalo < 10^13 Msolar/h), we assign HI to those halos according to a phenomenological halo to HI mass relation. The simulations span a redshift range of 0.35 ≲ z ≲ 0.9 in redshift bins of width Δz ≈ 0.05 and cover a quarter of the sky at an angular resolution of about 7'. We use the simulated intensity maps to study the impact of non-linear effects and redshift space distortions on the angular clustering of HI. Focusing on the autocorrelations of the maps, we apply and compare several estimators for the angular power spectrum and its covariance. We verify that these estimators agree with analytic predictions on large scales and study the validity of approximations based on Gaussian random fields, particularly in the context of the covariance. We discuss how our results and the simulated maps can be useful for planning and interpreting future HI intensity mapping surveys.
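
    The halo-to-HI assignment step admits a compact sketch. The toy halo catalogue and the power-law relation with a low-mass cutoff below are hypothetical stand-ins for the N-body halos, the conditional mass function, and the paper's phenomenological relation.

      # Assign HI mass to halos through a phenomenological relation.
      import numpy as np

      rng = np.random.default_rng(0)
      # Toy halo catalogue: a steep power-law mass function sampled by
      # inverse transform between 1e8 and 1e13 Msolar/h (illustrative).
      u = rng.random(100_000)
      alpha, m_lo, m_hi = -0.9, 1e8, 1e13
      M = (m_lo**(1 + alpha)
           + u * (m_hi**(1 + alpha) - m_lo**(1 + alpha)))**(1.0 / (1 + alpha))

      def m_hi_of_halo(M, f=0.02, slope=0.6, m_min=1e9):
          # Hypothetical halo-to-HI relation with a low-mass cutoff.
          return np.where(M > m_min, f * m_min * (M / m_min)**slope, 0.0)

      M_HI = m_hi_of_halo(M)
      print(f"total HI mass: {M_HI.sum():.3e} Msolar/h in {M.size} halos")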

  12. Large scale water lens for solar concentration.

    PubMed

    Mondol, A S; Vogel, B; Bastian, G

    2015-06-01

    Properties of large scale water lenses for solar concentration were investigated. These lenses were built from readily available materials, normal tap water and hyper-elastic linear low density polyethylene foil. Exposed to sunlight, the focal lengths and light intensities in the focal spot were measured and calculated. Their optical properties were modeled with a raytracing software based on the lens shape. We have achieved a good match of experimental and theoretical data by considering wavelength dependent concentration factor, absorption and focal length. The change in light concentration as a function of water volume was examined via the resulting load on the foil and the corresponding change of shape. The latter was extracted from images and modeled by a finite element simulation. PMID:26072893

  13. Large Scale Quantum Simulations of Nuclear Pasta

    NASA Astrophysics Data System (ADS)

    Fattoyev, Farrukh J.; Horowitz, Charles J.; Schuetrumpf, Bastian

    2016-03-01

    Complex and exotic nuclear geometries, collectively referred to as "nuclear pasta", are expected to naturally exist in the crust of neutron stars and in supernovae matter. Using a set of self-consistent microscopic nuclear energy density functionals, we present the first results of large scale quantum simulations of pasta phases at baryon densities 0.03 < ρ < 0.10 fm^-3, proton fractions 0.05

  14. Large-scale simulations of reionization

    SciTech Connect

    Kohler, Katharina; Gnedin, Nickolay Y.; Hamilton, Andrew J.S.

    2005-11-01

    We use cosmological simulations to explore the large-scale effects of reionization. Since reionization is a process that involves a large dynamic range--from galaxies to rare bright quasars--we need to be able to cover a significant volume of the universe in our simulation without losing the important small scale effects from galaxies. Here we have taken an approach that uses clumping factors derived from small scale simulations to approximate the radiative transfer on the sub-cell scales. Using this technique, we can cover a simulation size up to 1280 h^-1 Mpc with 10 h^-1 Mpc cells. This allows us to construct synthetic spectra of quasars similar to observed spectra of SDSS quasars at high redshifts and compare them to the observational data. These spectra can then be analyzed for HII region sizes, the presence of the Gunn-Peterson trough, and the Lyman-α forest.

  15. Large-scale databases of proper names.

    PubMed

    Conley, P; Burgess, C; Hage, D

    1999-05-01

    Few tools for research in proper names have been available--specifically, there is no large-scale corpus of proper names. Two corpora of proper names were constructed, one based on U.S. phone book listings, the other derived from a database of Usenet text. Name frequencies from both corpora were compared with human subjects' reaction times (RTs) to the proper names in a naming task. Regression analysis showed that the Usenet frequencies contributed to predictions of human RT, whereas phone book frequencies did not. In addition, semantic neighborhood density measures derived from the HAL corpus were compared with the subjects' RTs and found to be a better predictor of RT than was frequency in either corpus. These new corpora are freely available online for download. Potential uses for these corpora range from serving as stimuli in experiments to incorporating the corpus data in software applications. PMID:10495803

  16. Estimation of large-scale dimension densities.

    PubMed

    Raab, C; Kurths, J

    2001-07-01

    We propose a technique to calculate large-scale dimension densities in both higher-dimensional spatio-temporal systems and low-dimensional systems from only a few data points, where known methods usually have an unsatisfactory scaling behavior. This is mainly due to boundary and finite-size effects. With our rather simple method, we normalize boundary effects and get a significant correction of the dimension estimate. This straightforward approach is based on rather general assumptions. So even weak coherent structures obtained from small spatial couplings can be detected with this method, which is impossible by using the Lyapunov-dimension density. We demonstrate the efficiency of our technique for coupled logistic maps, coupled tent maps, the Lorenz attractor, and the Roessler attractor. PMID:11461376

  17. The challenge of large-scale structure

    NASA Astrophysics Data System (ADS)

    Gregory, S. A.

    1996-03-01

    The tasks that I have assumed for myself in this presentation include three separate parts. The first, appropriate to the particular setting of this meeting, is to review the basic work of the founding of this field; the appropriateness comes from the fact that W. G. Tifft made immense contributions that are not often realized by the astronomical community. The second task is to outline the general tone of the observational evidence for large scale structures. (Here, in particular, I cannot claim to be complete. I beg forgiveness from any workers who are left out by my oversight for lack of space and time.) The third task is to point out some of the major aspects of the field that may represent the clues by which some brilliant sleuth will ultimately figure out how galaxies formed.

  18. Engineering management of large scale systems

    NASA Technical Reports Server (NTRS)

    Sanders, Serita; Gill, Tepper L.; Paul, Arthur S.

    1989-01-01

    The organization of high-technology and engineering problem solving has given rise to an emerging concept: reasoning principles for integrating traditional engineering problem solving with system theory, management sciences, behavioral decision theory, and planning and design approaches can be incorporated into a methodological approach to solving problems with a long-range perspective. Long-range planning has great potential to improve productivity by using a systematic and organized approach. Thus, efficiency and cost effectiveness are the driving forces in promoting the organization of engineering problems. Aspects of systems engineering that provide an understanding of the management of large scale systems are broadly covered here. Due to the focus and application of the research, other significant factors (e.g., human behavior, decision making, etc.) are not emphasized but are considered.

  19. Large scale cryogenic fluid systems testing

    NASA Technical Reports Server (NTRS)

    1992-01-01

    NASA Lewis Research Center's Cryogenic Fluid Systems Branch (CFSB) within the Space Propulsion Technology Division (SPTD) has the ultimate goal of enabling the long term storage and in-space fueling/resupply operations for spacecraft and reusable vehicles in support of space exploration. Using analytical modeling, ground based testing, and on-orbit experimentation, the CFSB is studying three primary categories of fluid technology: storage, supply, and transfer. The CFSB is also investigating fluid handling, advanced instrumentation, and tank structures and materials. Ground based testing of large-scale systems is done using liquid hydrogen as a test fluid at the Cryogenic Propellant Tank Facility (K-site) at Lewis' Plum Brook Station in Sandusky, Ohio. A general overview of tests involving liquid transfer, thermal control, pressure control, and pressurization is given.

  20. Batteries for Large Scale Energy Storage

    SciTech Connect

    Soloveichik, Grigorii L.

    2011-07-15

    In recent years, with the deployment of renewable energy sources, advances in electrified transportation, and development in smart grids, the markets for large-scale stationary energy storage have grown rapidly. Electrochemical energy storage methods are strong candidate solutions due to their high energy density, flexibility, and scalability. This review provides an overview of mature and emerging technologies for secondary and redox flow batteries. New developments in the chemistry of secondary and flow batteries as well as regenerative fuel cells are also considered. Advantages and disadvantages of current and prospective electrochemical energy storage options are discussed. The most promising technologies in the short term are high-temperature sodium batteries with β″-alumina electrolyte, lithium-ion batteries, and flow batteries. Regenerative fuel cells and lithium metal batteries with high energy density require further research to become practical.

  1. Large-Scale Astrophysical Visualization on Smartphones

    NASA Astrophysics Data System (ADS)

    Becciani, U.; Massimino, P.; Costa, A.; Gheller, C.; Grillo, A.; Krokos, M.; Petta, C.

    2011-07-01

    Nowadays digital sky surveys and long-duration, high-resolution numerical simulations using high performance computing and grid systems produce multidimensional astrophysical datasets in the order of several Petabytes. Sharing visualizations of such datasets within communities and collaborating research groups is of paramount importance for disseminating results and advancing astrophysical research. Moreover educational and public outreach programs can benefit greatly from novel ways of presenting these datasets by promoting understanding of complex astrophysical processes, e.g., formation of stars and galaxies. We have previously developed VisIVO Server, a grid-enabled platform for high-performance large-scale astrophysical visualization. This article reviews the latest developments on VisIVO Web, a custom designed web portal wrapped around VisIVO Server, then introduces VisIVO Smartphone, a gateway connecting VisIVO Web and data repositories for mobile astrophysical visualization. We discuss current work and summarize future developments.

  2. The XMM Large Scale Structure Survey

    NASA Astrophysics Data System (ADS)

    Pierre, Marguerite

    2005-10-01

    We propose to complete, by an additional 5 deg2, the XMM-LSS Survey region overlying the Spitzer/SWIRE field. This field already has CFHTLS and Integral coverage, and will encompass about 10 deg2. The resulting multi-wavelength medium-depth survey, which complements XMM and Chandra deep surveys, will provide a unique view of large-scale structure over a wide range of redshift, and will show active galaxies in the full range of environments. The complete coverage by optical and IR surveys provides high-quality photometric redshifts, so that cosmological results can quickly be extracted. In the spirit of a Legacy survey, we will make the raw X-ray data immediately public. Multi-band catalogues and images will also be made available on short time scales.

  3. Estimation of large-scale dimension densities

    NASA Astrophysics Data System (ADS)

    Raab, Corinna; Kurths, Jürgen

    2001-07-01

    We propose a technique to calculate large-scale dimension densities in both higher-dimensional spatio-temporal systems and low-dimensional systems from only a few data points, where known methods usually have an unsatisfactory scaling behavior. This is mainly due to boundary and finite-size effects. With our rather simple method, we normalize boundary effects and get a significant correction of the dimension estimate. This straightforward approach is based on rather general assumptions. So even weak coherent structures obtained from small spatial couplings can be detected with this method, which is impossible by using the Lyapunov-dimension density. We demonstrate the efficiency of our technique for coupled logistic maps, coupled tent maps, the Lorenz attractor, and the Roessler attractor.

  4. Large-Scale Organization of Glycosylation Networks

    NASA Astrophysics Data System (ADS)

    Kim, Pan-Jun; Lee, Dong-Yup; Jeong, Hawoong

    2009-03-01

    Glycosylation is a highly complex process to produce a diverse repertoire of cellular glycans that are frequently attached to proteins and lipids. Glycans participate in fundamental biological processes including molecular trafficking and clearance, cell proliferation and apoptosis, developmental biology, immune response, and pathogenesis. N-linked glycans found on proteins are formed by sequential attachments of monosaccharides with the help of a relatively small number of enzymes. Many of these enzymes can accept multiple N-linked glycans as substrates, thus generating a large number of glycan intermediates and their intermingled pathways. Motivated by the quantitative methods developed in complex network research, we investigate the large-scale organization of such N-glycosylation pathways in a mammalian cell. The uncovered results give the experimentally-testable predictions for glycosylation process, and can be applied to the engineering of therapeutic glycoproteins.

  5. MODELING THE LARGE-SCALE BIAS OF NEUTRAL HYDROGEN

    SciTech Connect

    Marín, Felipe A.; Gnedin, Nickolay Y.; Seo, Hee-Jong; Vallinotto, Alberto

    2010-08-01

    We present new analytical estimates of the large-scale bias of neutral hydrogen (H I). We use a simple, non-parametric model which monotonically relates the total mass of a halo M_tot with its H I mass M_HI at zero redshift; for earlier times we assume limiting models for the Ω_HI evolution consistent with the data presently available, as well as two main scenarios for the evolution of our M_HI-M_tot relation. We find that both the linear and the first nonlinear bias terms exhibit a strong evolution with redshift, regardless of the specific limiting model assumed for the H I density over time. These analytical predictions are then shown to be consistent with measurements performed on the Millennium Simulation. Additionally, we show that this strong bias evolution does not sensibly affect the measurement of the H I power spectrum.

  6. Nonzero Density-Velocity Consistency Relations for Large Scale Structures.

    PubMed

    Rizzo, Luca Alberto; Mota, David F; Valageas, Patrick

    2016-08-19

    We present exact kinematic consistency relations for cosmological structures that do not vanish at equal times and can thus be measured in surveys. These rely on cross correlations between the density and velocity, or momentum, fields. Indeed, the uniform transport of small-scale structures by long-wavelength modes, which cannot be detected at equal times by looking at density correlations only, gives rise to a shift in the amplitude of the velocity field that could be measured. These consistency relations only rely on the weak equivalence principle and Gaussian initial conditions. They remain valid in the nonlinear regime and for biased galaxy fields. They can be used to constrain nonstandard cosmological scenarios or the large-scale galaxy bias. PMID:27588842

  7. Nonzero Density-Velocity Consistency Relations for Large Scale Structures

    NASA Astrophysics Data System (ADS)

    Rizzo, Luca Alberto; Mota, David F.; Valageas, Patrick

    2016-08-01

    We present exact kinematic consistency relations for cosmological structures that do not vanish at equal times and can thus be measured in surveys. These rely on cross correlations between the density and velocity, or momentum, fields. Indeed, the uniform transport of small-scale structures by long-wavelength modes, which cannot be detected at equal times by looking at density correlations only, gives rise to a shift in the amplitude of the velocity field that could be measured. These consistency relations only rely on the weak equivalence principle and Gaussian initial conditions. They remain valid in the nonlinear regime and for biased galaxy fields. They can be used to constrain nonstandard cosmological scenarios or the large-scale galaxy bias.

  8. Supporting large-scale computational science

    SciTech Connect

    Musick, R

    1998-10-01

    A study has been carried out to determine the feasibility of using commercial database management systems (DBMSs) to support large-scale computational science. Conventional wisdom in the past has been that DBMSs are too slow for such data. Several events over the past few years have muddied the clarity of this mindset: (1) several commercial DBMS systems have demonstrated storage and ad-hoc query access to Terabyte data sets; (2) several large-scale science teams, such as EOSDIS [NAS91], high energy physics [MM97] and human genome [Kin93], have adopted (or make frequent use of) commercial DBMS systems as the central part of their data management scheme; (3) several major DBMS vendors have introduced their first object-relational products (ORDBMSs), which have the potential to support large, array-oriented data; and (4) in some cases, performance is a moot issue, in particular if the performance of legacy applications is not reduced while new, albeit slow, capabilities are added to the system. The basic assessment is still that DBMSs do not scale to large computational data. However, many of the reasons have changed, and there is an expiration date attached to that prognosis. This document expands on this conclusion, identifies the advantages and disadvantages of various commercial approaches, and describes the studies carried out in exploring this area. The document is meant to be brief, technical and informative, rather than a motivational pitch. The conclusions within are very likely to become outdated within the next 5-7 years, as market forces will have a significant impact on the state of the art in scientific data management over the next decade.

  9. Introducing Large-Scale Innovation in Schools

    NASA Astrophysics Data System (ADS)

    Sotiriou, Sofoklis; Riviou, Katherina; Cherouvis, Stephanos; Chelioti, Eleni; Bogner, Franz X.

    2016-08-01

    Education reform initiatives tend to promise higher effectiveness in classrooms especially when emphasis is given to e-learning and digital resources. Practical changes in classroom realities or school organization, however, are lacking. A major European initiative entitled Open Discovery Space (ODS) examined the challenge of modernizing school education via a large-scale implementation of an open-scale methodology in using technology-supported innovation. The present paper describes this innovation scheme which involved schools and teachers all over Europe, embedded technology-enhanced learning into wider school environments and provided training to teachers. Our implementation scheme consisted of three phases: (1) stimulating interest, (2) incorporating the innovation into school settings and (3) accelerating the implementation of the innovation. The scheme's impact was monitored for a school year using five indicators: leadership and vision building, ICT in the curriculum, development of ICT culture, professional development support, and school resources and infrastructure. Based on about 400 schools, our study produced four results: (1) The growth in digital maturity was substantial, even for previously high scoring schools. This was even more important for indicators such as "vision and leadership" and "professional development." (2) The evolution of networking is presented graphically, showing the gradual growth of connections achieved. (3) These communities became core nodes, involving numerous teachers in sharing educational content and experiences: One out of three registered users (36 %) has shared his/her educational resources in at least one community. (4) Satisfaction scores ranged from 76 % (offer of useful support through teacher academies) to 87 % (good environment to exchange best practices). Initiatives such as ODS add substantial value to schools on a large scale.

  10. Introducing Large-Scale Innovation in Schools

    NASA Astrophysics Data System (ADS)

    Sotiriou, Sofoklis; Riviou, Katherina; Cherouvis, Stephanos; Chelioti, Eleni; Bogner, Franz X.

    2016-02-01

    Education reform initiatives tend to promise higher effectiveness in classrooms especially when emphasis is given to e-learning and digital resources. Practical changes in classroom realities or school organization, however, are lacking. A major European initiative entitled Open Discovery Space (ODS) examined the challenge of modernizing school education via a large-scale implementation of an open-scale methodology in using technology-supported innovation. The present paper describes this innovation scheme which involved schools and teachers all over Europe, embedded technology-enhanced learning into wider school environments and provided training to teachers. Our implementation scheme consisted of three phases: (1) stimulating interest, (2) incorporating the innovation into school settings and (3) accelerating the implementation of the innovation. The scheme's impact was monitored for a school year using five indicators: leadership and vision building, ICT in the curriculum, development of ICT culture, professional development support, and school resources and infrastructure. Based on about 400 schools, our study produced four results: (1) The growth in digital maturity was substantial, even for previously high scoring schools. This was even more important for indicators such as "vision and leadership" and "professional development." (2) The evolution of networking is presented graphically, showing the gradual growth of connections achieved. (3) These communities became core nodes, involving numerous teachers in sharing educational content and experiences: One out of three registered users (36 %) has shared his/her educational resources in at least one community. (4) Satisfaction scores ranged from 76 % (offer of useful support through teacher academies) to 87 % (good environment to exchange best practices). Initiatives such as ODS add substantial value to schools on a large scale.

  11. Large-scale Ising spin network based on degenerate optical parametric oscillators

    NASA Astrophysics Data System (ADS)

    Inagaki, Takahiro; Inaba, Kensuke; Hamerly, Ryan; Inoue, Kyo; Yamamoto, Yoshihisa; Takesue, Hiroki

    2016-06-01

    Solving combinatorial optimization problems is becoming increasingly important in modern society, where the analysis and optimization of unprecedentedly complex systems are required. Many such problems can be mapped onto the ground-state-search problem of the Ising Hamiltonian, and simulating the Ising spins with physical systems is now emerging as a promising approach for tackling such problems. Here, we report a large-scale network of artificial spins based on degenerate optical parametric oscillators (DOPOs), paving the way towards a photonic Ising machine capable of solving difficult combinatorial optimization problems. We generate >10,000 time-division-multiplexed DOPOs using dual-pump four-wave mixing in a highly nonlinear fibre placed in a cavity. Using those DOPOs, a one-dimensional Ising model is simulated by introducing nearest-neighbour optical coupling. We observe the formation of spin domains and find that the domain size diverges near the DOPO threshold, which suggests that the DOPO network can simulate the behaviour of low-temperature Ising spins.
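
    The domain statistics reported here can be illustrated with a classical one-dimensional Ising chain relaxed by Metropolis dynamics. This is only a stand-in for the actual DOPO network; the chain length, coupling and temperature are illustrative.

      # 1D Ising chain with nearest-neighbour ferromagnetic coupling:
      # single-spin-flip Metropolis dynamics, then mean domain size.
      import numpy as np

      rng = np.random.default_rng(1)
      N, J, T = 2000, 1.0, 0.5
      s = rng.choice([-1, 1], size=N)

      for _ in range(100 * N):
          i = rng.integers(N)
          dE = 2 * J * s[i] * (s[(i - 1) % N] + s[(i + 1) % N])
          if dE <= 0 or rng.random() < np.exp(-dE / T):
              s[i] = -s[i]

      # Mean domain size = chain length / number of domain walls.
      walls = np.count_nonzero(s != np.roll(s, 1))
      print(f"{walls} walls -> mean domain size {N / max(walls, 1):.1f} spins")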

  12. From Nonlinear Optimization to Convex Optimization through Firefly Algorithm and Indirect Approach with Applications to CAD/CAM

    PubMed Central

    Gálvez, Akemi; Iglesias, Andrés

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently. PMID:24376380
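
    The final convex step, fitting control points once the parameterization and knot vector are fixed, is ordinary linear least squares solvable through the SVD. A minimal sketch with invented data, a fixed uniform knot vector, and SciPy's BSpline for the basis:

      # Least-squares B-spline fit of planar data via the pseudo-inverse.
      import numpy as np
      from scipy.interpolate import BSpline

      rng = np.random.default_rng(2)
      u = np.linspace(0.0, 1.0, 60)            # fixed data parameterization
      data = np.column_stack([np.cos(2*np.pi*u), np.sin(4*np.pi*u)])
      data += 0.02 * rng.standard_normal(data.shape)

      k = 3                                    # cubic
      interior = np.linspace(0.0, 1.0, 9)[1:-1]
      knots = np.r_[[0.0]*(k+1), interior, [1.0]*(k+1)]
      n_ctrl = len(knots) - k - 1

      # Collocation matrix B[i, j] = N_j(u_i).
      B = np.zeros((u.size, n_ctrl))
      for j in range(n_ctrl):
          coef = np.zeros(n_ctrl)
          coef[j] = 1.0
          B[:, j] = BSpline(knots, coef, k)(u)

      ctrl = np.linalg.pinv(B) @ data          # SVD-based least squares
      print(f"{n_ctrl} control points, residual "
            f"{np.linalg.norm(B @ ctrl - data):.4f}")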

  13. A Nonlinear Physics-Based Optimal Control Method for Magnetostrictive Actuators

    NASA Technical Reports Server (NTRS)

    Smith, Ralph C.

    1998-01-01

    This paper addresses the development of a nonlinear optimal control methodology for magnetostrictive actuators. At moderate to high drive levels, the output from these actuators is highly nonlinear and contains significant magnetic and magnetomechanical hysteresis. These dynamics must be accommodated by models and control laws to utilize the full capabilities of the actuators. A characterization based upon ferromagnetic mean field theory provides a model which accurately quantifies both transient and steady state actuator dynamics under a variety of operating conditions. The control method consists of a linear perturbation feedback law used in combination with an optimal open loop nonlinear control. The nonlinear control incorporates the hysteresis and nonlinearities inherent to the transducer and can be computed offline. The feedback control is constructed through linearization of the perturbed system about the optimal system and is efficient for online implementation. As demonstrated through numerical examples, the combined hybrid control is robust and can be readily implemented in linear PDE-based structural models.

  14. Large-Scale Statistics for Cu Electromigration

    NASA Astrophysics Data System (ADS)

    Hauschildt, M.; Gall, M.; Hernandez, R.

    2009-06-01

    Even after the successful introduction of Cu-based metallization, the electromigration failure risk has remained one of the important reliability concerns for advanced process technologies. The observation of strong bimodality for the electron up-flow direction in dual-inlaid Cu interconnects has added complexity, but is now widely accepted. The failure voids can occur either within the via ("early" mode) or within the trench ("late" mode). More recently, bimodality has been reported also in down-flow electromigration, leading to very short lifetimes due to small, slit-shaped voids under vias. For a more thorough investigation of these early failure phenomena, specific test structures were designed based on the Wheatstone Bridge technique. The use of these structures enabled an increase of the tested sample size to nearly 675,000, allowing a direct analysis of electromigration failure mechanisms at the single-digit ppm regime. Results indicate that down-flow electromigration exhibits bimodality at very small percentage levels, not readily identifiable with standard testing methods. The activation energy for the down-flow early failure mechanism was determined to be 0.83±0.02 eV. Within the small error bounds of this large-scale statistical experiment, this value is deemed to be significantly lower than the usually reported activation energy of 0.90 eV for electromigration-induced diffusion along Cu/SiCN interfaces. Due to the advantages of the Wheatstone Bridge technique, we were also able to expand the experimental temperature range down to 150 °C, coming quite close to typical operating conditions of up to 125 °C. As a result of the lowered activation energy, we conclude that the down-flow early failure mode may control the chip lifetime at operating conditions. The slit-like character of the early failure void morphology also raises concerns about the validity of the Blech effect for this mechanism. A very small amount of Cu depletion may cause failure even before a
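
    The activation energy quoted above is the slope of an Arrhenius plot of median lifetime against inverse thermal energy. A sketch of the extraction, with made-up lifetimes rather than the study's data:

      # Arrhenius fit: t50 ~ A * exp(Ea / (k_B * T)), so Ea is the slope
      # of ln(t50) versus 1/(k_B * T). Lifetime values are illustrative.
      import numpy as np

      k_B = 8.617e-5                                 # eV/K
      T_K = np.array([150.0, 250.0, 300.0, 325.0]) + 273.15
      t50 = np.array([3.2e5, 2.1e3, 3.9e2, 1.6e2])   # median lifetimes, h

      Ea, lnA = np.polyfit(1.0 / (k_B * T_K), np.log(t50), 1)
      print(f"Ea = {Ea:.2f} eV")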

  15. Application of conditional nonlinear optimal perturbation method to finding the optimal precursors of Kuroshio large meander

    NASA Astrophysics Data System (ADS)

    Wang, Q.; Mu, M.; Dijkstra, H. A.

    2012-04-01

    We use the conditional nonlinear optimal perturbation (CNOP) approach to find the optimal precursor of the formation of the Kuroshio large meander (LM) path. Three non-large-meander (NLM) states are utilized as reference states to calculate the CNOPs. The results demonstrate that the CNOPs can result in the formation of a significant LM path. Simultaneously, we calculate the first singular vector (FSV), which is the linear counterpart of the CNOP, and investigate its effects on the Kuroshio path. We find that the FSV with the same amplitude as the CNOP does not trigger a typical Kuroshio LM path. Hence, the CNOP is regarded as an optimal precursor of the formation of the LM path. Furthermore, we analyze the formation processes of the LM path and find that potential vorticity (PV) advection plays an important role in the formation process. The PV advection caused by the FSV perturbation is smaller than that caused by the CNOP perturbation, which explains why the CNOP rather than the FSV is favored as a precursor.

  16. Simultaneous modeling and optimization of nonlinear simulated moving bed chromatography by the prediction-correction method.

    PubMed

    Bentley, Jason; Sloan, Charlotte; Kawajiri, Yoshiaki

    2013-03-01

    This work demonstrates a systematic prediction-correction (PC) method for simultaneously modeling and optimizing nonlinear simulated moving bed (SMB) chromatography. The PC method uses model-based optimization, SMB startup data, isotherm model selection, and parameter estimation to iteratively refine model parameters and find optimal operating conditions in a matter of hours, while satisfying high purity constraints and achieving optimal productivity. The PC algorithm proceeds until the SMB process is optimized, without manual tuning. In case studies, it is shown that a nonlinear isotherm model and parameter values are determined reliably from SMB startup data. In one case study, a nonlinear SMB system is optimized after only two changes of operating conditions following the PC algorithm. The refined isotherm models are validated by frontal analysis and perturbation analysis. PMID:23380364
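
    The PC loop alternates plant runs, isotherm fitting and model-based re-optimization until the operating point stops changing. A control-flow skeleton, with every unit operation stubbed out (the real versions are the SMB run, the parameter estimation and the optimizer):

      # Skeleton of a prediction-correction iteration; all steps are stubs.
      def run_smb(op):                      # run plant, collect startup data
          return {"profiles": None, "op": op}

      def estimate_parameters(data, model): # select/fit isotherm model
          return {"model": model, "params": "refined"}

      def optimize_operating_point(params): # model-based optimization
          return {"flow_rates": 1.0, "switch_time": 1.0}

      def purity_met(data, spec=0.99):      # check purity constraints
          return True

      op = {"flow_rates": 1.0, "switch_time": 1.0}   # initial guess
      for iteration in range(10):
          data = run_smb(op)                         # predict and measure
          params = estimate_parameters(data, model="bi-Langmuir")
          new_op = optimize_operating_point(params)  # correction step
          if purity_met(data) and new_op == op:
              break                                  # converged
          op = new_op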

  17. Curvature constraints from large scale structure

    NASA Astrophysics Data System (ADS)

    Di Dio, Enea; Montanari, Francesco; Raccanelli, Alvise; Durrer, Ruth; Kamionkowski, Marc; Lesgourgues, Julien

    2016-06-01

    We modified the CLASS code in order to include relativistic galaxy number counts in spatially curved geometries; we present the formalism and study the effect of relativistic corrections on spatial curvature. The new version of the code is now publicly available. Using a Fisher matrix analysis, we investigate how measurements of the spatial curvature parameter ΩK with future galaxy surveys are affected by relativistic effects, which influence observations of the large scale galaxy distribution. These effects include contributions from cosmic magnification, Doppler terms and terms involving the gravitational potential. As an application, we consider angle- and redshift-dependent power spectra, which are especially well suited for model-independent cosmological constraints. We compute our results for a representative deep, wide and spectroscopic survey, and show the impact of relativistic corrections on the estimation of the spatial curvature parameter. We show that constraints on the curvature parameter may be strongly biased if, in particular, cosmic magnification is not included in the analysis. Other relativistic effects turn out to be subdominant in the studied configuration. We analyze how the shift in the estimated best-fit value for the curvature and other cosmological parameters depends on the magnification bias parameter, and find that significant biases are to be expected if this term is not properly considered in the analysis.
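
    The Fisher-matrix step itself is compact enough to spell out. The sketch below forecasts marginalized errors for a two-parameter toy spectrum; the model shape, sky fraction and multipole range are illustrative and do not represent the survey configuration of the paper.

      # Fisher forecast: F_ij = sum_l (dC_l/dp_i)(dC_l/dp_j) / sigma_l^2,
      # marginalized errors sqrt(diag(F^-1)). Toy model, two parameters.
      import numpy as np

      ell = np.arange(2, 1000)
      f_sky = 0.5

      def cl_model(A, K):
          # Hypothetical smooth spectrum responding to amplitude A and
          # a curvature-like tilt K.
          return A * 1e-9 * (ell / 100.0)**(-2.0 + 0.1 * K)

      theta0 = np.array([1.0, 0.0])
      cl0 = cl_model(*theta0)
      sigma = np.sqrt(2.0 / ((2 * ell + 1) * f_sky)) * cl0  # cosmic variance

      eps, dcl = 1e-4, []
      for i in range(2):                # central-difference derivatives
          dp = np.zeros(2)
          dp[i] = eps
          dcl.append((cl_model(*(theta0 + dp)) - cl_model(*(theta0 - dp)))
                     / (2 * eps))

      F = np.array([[np.sum(dcl[i] * dcl[j] / sigma**2) for j in range(2)]
                    for i in range(2)])
      err = np.sqrt(np.diag(np.linalg.inv(F)))
      print(f"1-sigma errors: A = {err[0]:.2e}, K = {err[1]:.2e}")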

  18. Large scale digital atlases in neuroscience

    NASA Astrophysics Data System (ADS)

    Hawrylycz, M.; Feng, D.; Lau, C.; Kuan, C.; Miller, J.; Dang, C.; Ng, L.

    2014-03-01

    Imaging in neuroscience has revolutionized our current understanding of brain structure, architecture and, increasingly, function. Many characteristics of morphology, cell type, and neuronal circuitry have been elucidated through methods of neuroimaging. Combining these data in a meaningful, standardized, and accessible manner is the scope and goal of the digital brain atlas. Digital brain atlases are used today in neuroscience to characterize the spatial organization of neuronal structures, for planning and guidance during neurosurgery, and as a reference for interpreting other data modalities such as gene expression and connectivity data. The field of digital atlases is extensive and, in addition to atlases of the human, includes high quality brain atlases of the mouse, rat, rhesus macaque, and other model organisms. Using techniques based on histology, structural and functional magnetic resonance imaging as well as gene expression data, modern digital atlases use probabilistic and multimodal techniques, as well as sophisticated visualization software, to form an integrated product. Toward this goal, brain atlases form a common coordinate framework for summarizing, accessing, and organizing this knowledge and will undoubtedly remain a key technology in neuroscience in the future. Since the development of its flagship project, a genome-wide image-based atlas of the mouse brain, the Allen Institute for Brain Science has used imaging as a primary data modality for many of its large scale atlas projects. We present an overview of Allen Institute digital atlases in neuroscience, with a focus on the challenges and opportunities for image processing and computation.

  19. Large-scale carbon fiber tests

    NASA Technical Reports Server (NTRS)

    Pride, R. A.

    1980-01-01

    A realistic release of carbon fibers was established by burning a minimum of 45 kg of carbon fiber composite aircraft structural components in each of five large scale, outdoor aviation jet fuel fire tests. This release was quantified by several independent assessments with various instruments developed specifically for these tests. The most likely values for the mass of single carbon fibers released ranged from 0.2 percent of the initial mass of carbon fiber for the source tests (zero wind velocity) to a maximum of 0.6 percent of the initial carbon fiber mass for dissemination tests (5 to 6 m/s wind velocity). Mean fiber lengths for fibers greater than 1 mm in length ranged from 2.5 to 3.5 mm. Mean diameters ranged from 3.6 to 5.3 micrometers which was indicative of significant oxidation. Footprints of downwind dissemination of the fire released fibers were measured to 19.1 km from the fire.

  20. Large-scale wind turbine structures

    NASA Technical Reports Server (NTRS)

    Spera, David A.

    1988-01-01

    The purpose of this presentation is to show how structural technology was applied in the design of modern wind turbines, which were recently brought to an advanced stage of development as sources of renewable power. Wind turbine structures present many difficult problems because they are relatively slender and flexible; subject to vibration and aeroelastic instabilities; acted upon by loads which are often nondeterministic; operated continuously with little maintenance in all weather; and dominated by life-cycle cost considerations. Progress in horizontal-axis wind turbine (HAWT) development was paced by progress in the understanding of structural loads, modeling of structural dynamic response, and designing of innovative structural response. During the past 15 years a series of large HAWTs was developed. This has culminated in the recent completion of the world's largest operating wind turbine, the 3.2 MW Mod-5B power plant installed on the island of Oahu, Hawaii. Some of the applications of structures technology to wind turbines will be illustrated by referring to the Mod-5B design. First, a video overview will be presented to provide familiarization with the Mod-5B project and the important components of the wind turbine system. Next, the structural requirements for large-scale wind turbines will be discussed, emphasizing the difficult fatigue-life requirements. Finally, the procedures used to design the structure will be presented, including the use of the fracture mechanics approach for determining allowable fatigue stresses.

  1. Large-scale wind turbine structures

    NASA Astrophysics Data System (ADS)

    Spera, David A.

    1988-05-01

    The purpose of this presentation is to show how structural technology was applied in the design of modern wind turbines, which were recently brought to an advanced stage of development as sources of renewable power. Wind turbine structures present many difficult problems because they are relatively slender and flexible; subject to vibration and aeroelastic instabilities; acted upon by loads which are often nondeterministic; operated continuously with little maintenance in all weather; and dominated by life-cycle cost considerations. Progress in horizontal-axis wind turbine (HAWT) development was paced by progress in the understanding of structural loads, modeling of structural dynamic response, and designing of innovative structural response. During the past 15 years a series of large HAWTs was developed. This has culminated in the recent completion of the world's largest operating wind turbine, the 3.2 MW Mod-5B power plant installed on the island of Oahu, Hawaii. Some of the applications of structures technology to wind turbines will be illustrated by referring to the Mod-5B design. First, a video overview will be presented to provide familiarization with the Mod-5B project and the important components of the wind turbine system. Next, the structural requirements for large-scale wind turbines will be discussed, emphasizing the difficult fatigue-life requirements. Finally, the procedures used to design the structure will be presented, including the use of the fracture mechanics approach for determining allowable fatigue stresses.

  2. Food appropriation through large scale land acquisitions

    NASA Astrophysics Data System (ADS)

    Rulli, Maria Cristina; D'Odorico, Paolo

    2014-05-01

    The increasing demand for agricultural products and the uncertainty of international food markets have recently drawn the attention of governments and agribusiness firms toward investments in productive agricultural land, mostly in the developing world. The targeted countries are typically located in regions that have remained only marginally utilized because of a lack of modern technology. It is expected that in the long run large scale land acquisitions (LSLAs) for commercial farming will bring the technology required to close the existing crop yield gaps. While the extent of the acquired land and the associated appropriation of freshwater resources have been investigated in detail, the amount of food this land can produce and the number of people it could feed still need to be quantified. Here we use a unique dataset of land deals to provide a global quantitative assessment of the rates of crop and food appropriation potentially associated with LSLAs. We show that up to 300-550 million people could be fed by crops grown on the acquired land, should these investments in agriculture improve crop production and close the yield gap. In contrast, about 190-370 million people could be supported by this land without closing the yield gap. These numbers raise some concern because the food produced on the acquired land is typically exported to other regions, while the target countries exhibit high levels of malnourishment. Conversely, if used for domestic consumption, the crops harvested on the acquired land could ensure food security to the local populations.

  3. Large Scale Computer Simulation of Erythrocyte Membranes

    NASA Astrophysics Data System (ADS)

    Harvey, Cameron; Revalee, Joel; Laradji, Mohamed

    2007-11-01

    The cell membrane is crucial to the life of the cell. Apart from partitioning the inner and outer environment of the cell, membranes also act as a support for the complex and specialized molecular machinery important for both the mechanical integrity of the cell and its multitude of physiological functions. Due to its relative simplicity, the red blood cell has been a favorite experimental prototype for investigations of the structural and functional properties of the cell membrane. The erythrocyte membrane is a composite quasi-two-dimensional structure composed essentially of a self-assembled fluid lipid bilayer and a polymerized protein meshwork, referred to as the cytoskeleton or membrane skeleton. In the case of the erythrocyte, the polymer meshwork is mainly composed of spectrin, anchored to the bilayer through specialized proteins. Using a coarse-grained model of self-assembled lipid membranes, recently developed by us, with implicit solvent and soft-core potentials, we simulated large scale red-blood-cell bilayers with dimensions of ~10^-1 μm^2, with an explicit cytoskeleton. Our aim is to investigate the renormalization of the elastic properties of the bilayer due to the underlying spectrin meshwork.

  4. Optimal feedback control of strongly non-linear systems excited by bounded noise

    NASA Astrophysics Data System (ADS)

    Zhu, W. Q.; Huang, Z. L.; Ko, J. M.; Ni, Y. Q.

    2004-07-01

    A strategy for non-linear stochastic optimal control of strongly non-linear systems subject to external and/or parametric excitations of bounded noise is proposed. A stochastic averaging procedure for strongly non-linear systems under external and/or parametric excitations of bounded noise is first developed. Then, the dynamical programming equation for non-linear stochastic optimal control of the system is derived from the averaged Itô equations by using the stochastic dynamical programming principle and solved to yield the optimal control law. The Fokker-Planck-Kolmogorov equation associated with the averaged Itô equations, completed by substituting the optimal control law, is solved to give the response of the optimally controlled system. The application and effectiveness of the proposed control strategy are illustrated with the control of cable vibration in cable-stayed bridges and the feedback stabilization of the cable under parametric excitation of bounded noise.

  5. Aircraft design for mission performance using nonlinear multiobjective optimization methods

    NASA Technical Reports Server (NTRS)

    Dovi, Augustine R.; Wrenn, Gregory A.

    1990-01-01

    A new technique, which converts a constrained optimization problem into an unconstrained one in which conflicting figures of merit may be considered simultaneously, was combined with a complex mission analysis system. The method is compared with existing single- and multiobjective optimization methods. A primary benefit of this new method for multiobjective optimization is the elimination of the separate optimizations for each objective required by some optimization methods. A typical wide-body transport aircraft is used for the comparative studies.
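
    The abstract does not name the conversion technique; one standard envelope approach for folding several objectives and constraints into a single smooth unconstrained function is the Kreisselmeier-Steinhauser (KS) function. The sketch below is a minimal illustration under that assumption; the objectives f1 and f2, the constraint g, and the draw-down parameter rho are all invented for the example.

```python
import numpy as np

def ks_envelope(values, rho=50.0):
    """Kreisselmeier-Steinhauser envelope: a smooth, differentiable
    upper bound on max(values) that tightens as rho grows."""
    vmax = np.max(values)
    return vmax + np.log(np.sum(np.exp(rho * (values - vmax)))) / rho

# Two conflicting figures of merit and one constraint g(x) <= 0,
# all invented for the illustration.
f1 = lambda x: (x - 1.0) ** 2
f2 = lambda x: (x + 1.0) ** 2
g = lambda x: x - 1.5

def unconstrained_objective(x):
    # Fold objectives and constraint into one smooth scalar function,
    # turning the constrained multiobjective problem into an
    # unconstrained single-objective one.
    return ks_envelope(np.array([f1(x), f2(x), g(x)]))

xs = np.linspace(-3.0, 3.0, 601)
best = xs[np.argmin([unconstrained_objective(x) for x in xs])]
print(f"compromise design: x = {best:.3f}")
```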

  6. An informal paper on large-scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Ho, Y. C.

    1975-01-01

    Large scale systems are defined as systems requiring more than one decision maker to control the system. Decentralized control and decomposition are discussed for large scale dynamic systems. Information and many-person decision problems are analyzed.

  7. Maestro: an orchestration framework for large-scale WSN simulations.

    PubMed

    Riliskis, Laurynas; Osipov, Evgeny

    2014-01-01

    Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation. PMID:24647123

  9. Parallel block schemes for large scale least squares computations

    SciTech Connect

    Golub, G.H.; Plemmons, R.J.; Sameh, A.

    1986-04-01

    Large scale least squares computations arise in a variety of scientific and engineering problems, including geodetic adjustments and surveys, medical image analysis, molecular structures, partial differential equations and substructuring methods in structural engineering. In each of these problems, matrices often arise which possess a block structure which reflects the local connection nature of the underlying physical problem. For example, such super-large nonlinear least squares computations arise in geodesy. Here the coordinates of positions are calculated by iteratively solving overdetermined systems of nonlinear equations by the Gauss-Newton method. The US National Geodetic Survey will complete this year (1986) the readjustment of the North American Datum, a problem which involves over 540 thousand unknowns and over 6.5 million observations (equations). The observation matrix for these least squares computations has a block angular form with 161 diagonal blocks, each containing 3 to 4 thousand unknowns. In this paper parallel schemes are suggested for the orthogonal factorization of matrices in block angular form and for the associated backsubstitution phase of the least squares computations. In addition, a parallel scheme for the calculation of certain elements of the covariance matrix for such problems is described. It is shown that these algorithms are ideally suited for multiprocessors with three levels of parallelism such as the Cedar system at the University of Illinois. 20 refs., 7 figs.
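
    As a concrete illustration of the factorization strategy described above, the following sketch solves a tiny block-angular least squares problem by per-block QR (the step that parallelizes across blocks), followed by a small reduced solve for the coupling unknowns and back-substitution. The block sizes and random data are placeholders; the real geodesy problem has 161 diagonal blocks and millions of observations.

```python
import numpy as np

def solve_block_angular(A_blocks, B_blocks, b_blocks):
    """Least squares for the block-angular system
        min sum_i || A_i x_i + B_i y - b_i ||^2
    via independent (parallelizable) per-block QR, a small reduced
    solve for the coupling unknowns y, then back-substitution."""
    reduced_rows, reduced_rhs, factors = [], [], []
    for A, B, b in zip(A_blocks, B_blocks, b_blocks):
        m, n = A.shape
        Q, R = np.linalg.qr(A, mode="complete")  # full QR of the diagonal block
        T, c = Q.T @ B, Q.T @ b
        factors.append((R[:n], T[:n], c[:n]))    # rows that determine x_i
        reduced_rows.append(T[n:])               # rows that involve only y
        reduced_rhs.append(c[n:])
    y, *_ = np.linalg.lstsq(np.vstack(reduced_rows),
                            np.concatenate(reduced_rhs), rcond=None)
    xs = [np.linalg.solve(R, c - T @ y) for R, T, c in factors]
    return xs, y

# Tiny synthetic instance with 3 blocks (stand-in for the 161 geodesy blocks).
rng = np.random.default_rng(0)
A_blocks = [rng.normal(size=(8, 3)) for _ in range(3)]
B_blocks = [rng.normal(size=(8, 2)) for _ in range(3)]
b_blocks = [rng.normal(size=8) for _ in range(3)]
xs, y = solve_block_angular(A_blocks, B_blocks, b_blocks)
print("coupling unknowns y =", y)
```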

  10. International space station. Large scale integration approach

    NASA Astrophysics Data System (ADS)

    Cohen, Brad

    The International Space Station is the most complex large scale integration program in development today. The approach developed for specification, subsystem development, and verification lays a firm basis on which future programs of this nature can be based. The International Space Station is composed of many critical items, hardware and software, built by numerous International Partners, NASA institutions, and U.S. contractors, and is launched over a period of five years. Each launch creates a unique configuration that must be safe, survivable, operable, and support ongoing assembly (assemblable) to arrive at the assembly complete configuration in 2003. The approach to integrating each of the modules into a viable spacecraft while continuing the assembly is a challenge in itself. Added to this challenge are the severe schedule constraints and the lack of an "Iron Bird", which prevents assembly and checkout of each on-orbit configuration prior to launch. This paper will focus on the following areas: 1) Specification development process, explaining how the requirements and specifications were derived using a modular concept driven by launch vehicle capability; each module is composed of components of subsystems versus completed subsystems. 2) Approach to stage specifications (each stage consists of the launched module added to the current on-orbit spacecraft); specifically, how each launched module and stage ensures support of the current and future elements of the assembly. 3) Verification approach, which, due to the schedule constraints, is primarily analysis supported by testing; specifically, how the interfaces are ensured to mate and function on-orbit when they cannot be mated before launch. 4) Lessons learned: where can we improve this complex system design and integration task?

  11. Large Scale Flame Spread Environmental Characterization Testing

    NASA Technical Reports Server (NTRS)

    Clayman, Lauren K.; Olson, Sandra L.; Gokoghi, Suleyman A.; Brooker, John E.; Ferkul, Paul V.; Kacher, Henry F.

    2013-01-01

    Under the Advanced Exploration Systems (AES) Spacecraft Fire Safety Demonstration Project (SFSDP), as a risk mitigation activity in support of the development of a large-scale fire demonstration experiment in microgravity, flame-spread tests were conducted in normal gravity on thin, cellulose-based fuels in a sealed chamber. The primary objective of the tests was to measure pressure rise in the chamber as sample material, burning direction (upward/downward), total heat release, heat release rate, and heat loss mechanisms were varied between tests. A Design of Experiments (DOE) method was employed to produce an array of tests from a fixed set of constraints, and a coupled response model was developed. Supplementary tests were run without experimental design to additionally vary select parameters such as initial chamber pressure. The starting chamber pressure for each test was set below atmospheric to prevent chamber overpressure. Bottom ignition, or upward propagating burns, produced rapid acceleratory turbulent flame spread. Pressure rise in the chamber increases as the amount of fuel burned increases, mainly because of the larger amount of heat generation and, to a much smaller extent, due to the increase in the number of moles of gas. Top ignition, or downward propagating burns, produced a steady flame spread with a very small flat flame across the burning edge. Steady-state pressure is achieved during downward flame spread as the pressure rises and plateaus. This indicates that the heat generation by the flame matches the heat loss to the surroundings during the longer, slower downward burns. One heat loss mechanism included mounting a heat exchanger directly above the burning sample in the path of the plume to act as a heat sink and dissipate the heat of the combustion event more efficiently. This proved an effective means for chamber overpressure mitigation for those tests producing the most total heat release and thus was determined to be a feasible mitigation

  12. Synchronization of coupled large-scale Boolean networks

    SciTech Connect

    Li, Fangfei

    2014-03-15

    This paper investigates the complete synchronization and partial synchronization of two large-scale Boolean networks. First, the aggregation algorithm for large-scale Boolean networks is reviewed. Second, the aggregation algorithm is applied to study the complete synchronization and partial synchronization of large-scale Boolean networks. Finally, an illustrative example is presented to show the efficiency of the proposed results.

  13. Biased galaxy formation and large-scale structure

    NASA Astrophysics Data System (ADS)

    Berlind, Andreas Alan

    The biased relation between the galaxy and mass distributions lies at the intersection of large scale structure in the universe and the process of galaxy formation. I study the nature of galaxy bias and its connections to galaxy clustering and galaxy formation physics. Galaxy bias has traditionally been viewed as an obstacle to constraining cosmological parameters by studying galaxy clustering. I examine the effect of bias on measurements of the cosmological density parameter Ωm by techniques that exploit the gravity-induced motions of galaxies. Using a variety of environmental bias models applied to N-body simulations, I find that, in most cases, the quantity estimated by these techniques is the value of Ωm^0.6/bσ, where bσ is the ratio of rms galaxy fluctuations to rms mass fluctuations on large scales. Moreover, I find that different methods should, in principle, agree with each other and it is thus unlikely that non-linear or scale-dependent bias is responsible for the discrepancies that exist among current measurements. One can also view the influence of bias on galaxy clustering as a strength rather than a weakness, since it provides us with a potentially powerful way to constrain galaxy formation theories. With this goal in mind, I develop the "Halo Occupation Distribution" (HOD), a physically motivated and complete formulation of bias that is based on the distribution of galaxies within virialized dark matter halos. I explore the sensitivity of galaxy clustering statistics to features of the HOD and focus on how the HOD may be empirically constrained from galaxy clustering data. I make the connection to the physics of galaxy formation by studying the HOD predicted by the two main theoretical methods of modeling galaxy formation. I find that, despite many differences between them, the two methods predict the same HOD, suggesting that galaxy bias is determined by robust features of the hierarchical galaxy formation process rather than details of gas cooling

  14. A mini review: photobioreactors for large scale algal cultivation.

    PubMed

    Gupta, Prabuddha L; Lee, Seung-Mok; Choi, Hee-Jeong

    2015-09-01

    Microalgae cultivation has gained much interest in terms of the production of foods, biofuels, and bioactive compounds, and offers great potential as an option for cleaning the environment through CO2 sequestration and wastewater treatment. Although open pond cultivation is the most affordable option, it tends to offer insufficient control of growth conditions and carries a risk of contamination. In contrast, while presenting minimal risk of contamination, closed photobioreactors offer better control over culture conditions, such as CO2 supply, water supply, optimal temperatures, efficient exposure to light, culture density, pH levels, and mixing rates. For large-scale production of biomass, efficient photobioreactors are required. This review paper describes general design considerations pertaining to photobioreactor systems, in order to cultivate microalgae for biomass production. It also discusses the current challenges in the design of photobioreactors for the production of low-cost biomass. PMID:26085485

  15. Large-scale asymmetric synthesis of a cathepsin S inhibitor.

    PubMed

    Lorenz, Jon C; Busacca, Carl A; Feng, XuWu; Grinberg, Nelu; Haddad, Nizar; Johnson, Joe; Kapadia, Suresh; Lee, Heewon; Saha, Anjan; Sarvestani, Max; Spinelli, Earl M; Varsolona, Rich; Wei, Xudong; Zeng, Xingzhong; Senanayake, Chris H

    2010-02-19

    A potent reversible inhibitor of the cysteine protease cathepsin-S was prepared on large scale using a convergent synthetic route, free of chromatography and cryogenics. Late-stage peptide coupling of a chiral urea acid fragment with a functionalized aminonitrile was employed to prepare the target, using 2-hydroxypyridine as a robust, nonexplosive replacement for HOBT. The two key intermediates were prepared using a modified Strecker reaction for the aminonitrile and a phosphonation-olefination-rhodium-catalyzed asymmetric hydrogenation sequence for the urea. A palladium-catalyzed vinyl transfer coupled with a Claisen reaction was used to produce the aldehyde required for the side chain. Key scale-up issues, safety calorimetry, and the optimization of all steps for multikilogram production are discussed. PMID:20102230

  16. Atypical Behavior Identification in Large Scale Network Traffic

    SciTech Connect

    Best, Daniel M.; Hafen, Ryan P.; Olsen, Bryan K.; Pike, William A.

    2011-10-23

    Cyber analysts are faced with the daunting challenge of identifying exploits and threats within potentially billions of daily records of network traffic. Enterprise-wide cyber traffic involves hundreds of millions of distinct IP addresses and results in data sets ranging from terabytes to petabytes of raw data. Creating behavioral models and identifying trends based on those models requires data intensive architectures and techniques that can scale as data volume increases. Analysts need scalable visualization methods that foster interactive exploration of data and enable identification of behavioral anomalies. Developers must carefully consider application design, storage, processing, and display to provide usability and interactivity with large-scale data. We present an application that highlights atypical behavior in enterprise network flow records. This is accomplished by utilizing data intensive architectures to store the data, aggregation techniques to optimize data access, statistical techniques to characterize behavior, and a visual analytic environment to render the behavioral trends, highlight atypical activity, and allow for exploration.
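
    The abstract outlines a pipeline of aggregation, statistical characterization, and flagging of atypical activity; the sketch below shows that pipeline in miniature using a robust (median/MAD) z-score, a common choice for skewed traffic volumes. The record layout, field names, and the 3.5 cutoff are illustrative assumptions, not details of the described system.

```python
import numpy as np

# Stand-in for billions of flow records: (source IP, bytes transferred).
records = [
    ("10.0.0.1", 500), ("10.0.0.2", 640), ("10.0.0.1", 480),
    ("10.0.0.3", 90_000), ("10.0.0.2", 710), ("10.0.0.3", 120_000),
]

totals = {}
for ip, nbytes in records:                   # aggregation step
    totals[ip] = totals.get(ip, 0) + nbytes

vals = np.array(list(totals.values()), dtype=float)
med = np.median(vals)
mad = np.median(np.abs(vals - med)) or 1.0   # robust spread estimate
for ip, v in totals.items():                 # robust z-score per source
    z = 0.6745 * (v - med) / mad
    if abs(z) > 3.5:                         # common outlier cutoff
        print(f"atypical source {ip}: {v} bytes (z = {z:.1f})")
```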

  17. Modeling and Dynamic Simulation of a Large Scale Helium Refrigerator

    NASA Astrophysics Data System (ADS)

    Lv, C.; Qiu, T. N.; Wu, J. H.; Xie, X. J.; Li, Q.

    In order to simulate the transient behaviors of a newly developed 2 kW helium refrigerator, a numerical model of the critical equipment, including a screw compressor with variable-frequency drive, plate-fin heat exchangers, a turbine expander, and pneumatic valves, was developed. In the simulation, the calculation of the helium thermodynamic properties is based on the 32-parameter modified Benedict-Webb-Rubin (MBWR) equation of state. The start-up process of the warm compressor station with the gas management subsystem, and the cool-down process of the cold box in actual operation, were dynamically simulated. The developed model was verified by comparing the simulated results with the experimental data. In addition, system responses to increasing heat load were simulated. This model can also be used to design and optimize other large scale helium refrigerators.

  18. Large scale structure from viscous dark matter

    NASA Astrophysics Data System (ADS)

    Blas, Diego; Floerchinger, Stefan; Garny, Mathias; Tetradis, Nikolaos; Wiedemann, Urs Achim

    2015-11-01

    Cosmological perturbations of sufficiently long wavelength admit a fluid dynamic description. We consider modes with wavevectors below a scale km for which the dynamics is only mildly non-linear. The leading effect of modes above that scale can be accounted for by effective non-equilibrium viscosity and pressure terms. For mildly non-linear scales, these mainly arise from momentum transport within the ideal and cold but inhomogeneous fluid, while momentum transport due to more microscopic degrees of freedom is suppressed. As a consequence, concrete expressions with no free parameters, except the matching scale km, can be derived from matching evolution equations to standard cosmological perturbation theory. Two-loop calculations of the matter power spectrum in the viscous theory lead to excellent agreement with N-body simulations up to scales k=0.2 h/Mpc. The convergence properties in the ultraviolet are better than for standard perturbation theory and the results are robust with respect to variations of the matching scale.

  19. Optimal control for unknown discrete-time nonlinear Markov jump systems using adaptive dynamic programming.

    PubMed

    Zhong, Xiangnan; He, Haibo; Zhang, Huaguang; Wang, Zhanshan

    2014-12-01

    In this paper, we develop and analyze an optimal control method for a class of discrete-time nonlinear Markov jump systems (MJSs) with unknown system dynamics. Specifically, an identifier is established for the unknown systems to approximate system states, and an optimal control approach for nonlinear MJSs is developed to solve the Hamilton-Jacobi-Bellman equation based on the adaptive dynamic programming technique. We also develop a detailed stability analysis of the control approach, including the convergence of the performance index function for nonlinear MJSs and the existence of the corresponding admissible control. Neural network techniques are used to approximate the proposed performance index function and the control law. To demonstrate the effectiveness of our approach, three simulation studies, one linear case, one nonlinear case, and one single link robot arm case, are used to validate the performance of the proposed optimal control method. PMID:25420238
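
    The paper's method uses neural-network approximators; as a minimal stand-in that shows the same structure (coupled Bellman equations, one per Markov mode), the sketch below runs tabular value iteration on a toy two-mode jump system. The dynamics, cost, grids, and discount factor are invented for illustration; the paper's identifier-based formulation is more involved.

```python
import numpy as np

# Toy discrete-time Markov jump system with two modes:
#   mode 0: x' = 0.8 x + u        mode 1: x' = 1.1 x - 0.5 sin(x) + u
# The mode switches according to the transition matrix P.
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])
step = [lambda x, u: 0.8 * x + u,
        lambda x, u: 1.1 * x - 0.5 * np.sin(x) + u]

xs = np.linspace(-2.0, 2.0, 41)    # state grid
us = np.linspace(-1.0, 1.0, 21)    # candidate controls
gamma = 0.95                       # discount factor, added for convergence
V = np.zeros((2, xs.size))         # one value table per mode

def interp(Vrow, x):
    return np.interp(x, xs, Vrow)  # np.interp clamps outside the grid

for sweep in range(120):           # value iteration on the coupled equations
    V_new = np.empty_like(V)
    for i in range(2):
        for k, x in enumerate(xs):
            V_new[i, k] = min(
                x**2 + u**2 + gamma * sum(P[i, j] * interp(V[j], step[i](x, u))
                                          for j in range(2))
                for u in us)
    V = V_new
print("value at x = 1.0 in mode 0:", interp(V[0], 1.0))
```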

  20. Large-Scale Spacecraft Fire Safety Tests

    NASA Technical Reports Server (NTRS)

    Urban, David; Ruff, Gary A.; Ferkul, Paul V.; Olson, Sandra; Fernandez-Pello, A. Carlos; T'ien, James S.; Torero, Jose L.; Cowlard, Adam J.; Rouvreau, Sebastien; Minster, Olivier; Toth, Balazs; Legros, Guillaume; Eigenbrod, Christian; Smirnov, Nickolay; Fujita, Osamu; Jomaas, Grunde

    2014-01-01

    An international collaborative program is underway to address open issues in spacecraft fire safety. Because of limited access to long-term low-gravity conditions and the small volume generally allotted for these experiments, there have been relatively few experiments that directly study spacecraft fire safety under low-gravity conditions. Furthermore, none of these experiments have studied sample sizes and environment conditions typical of those expected in a spacecraft fire. The major constraint has been the size of the sample, with prior experiments limited to samples of the order of 10 cm in length and width or smaller. This lack of experimental data forces spacecraft designers to base their designs and safety precautions on 1-g understanding of flame spread, fire detection, and suppression. However, low-gravity combustion research has demonstrated substantial differences in flame behavior in low-gravity. This, combined with the differences caused by the confined spacecraft environment, necessitates practical scale spacecraft fire safety research to mitigate risks for future space missions. To address this issue, a large-scale spacecraft fire experiment is under development by NASA and an international team of investigators. This poster presents the objectives, status, and concept of this collaborative international project (Saffire). The project plan is to conduct fire safety experiments on three sequential flights of an unmanned ISS re-supply spacecraft (the Orbital Cygnus vehicle) after they have completed their delivery of cargo to the ISS and have begun their return journeys to Earth. On two flights (Saffire-1 and Saffire-3), the experiment will consist of a flame spread test involving a meter-scale sample ignited in the pressurized volume of the spacecraft and allowed to burn to completion while measurements are made. On one of the flights (Saffire-2), 9 smaller (5 x 30 cm) samples will be tested to evaluate NASA's material flammability screening tests

  1. Nonlinear Mathematical Programming for Optimal Management of Container Terminals

    NASA Astrophysics Data System (ADS)

    Seyedalizadeh Ganji, S. R.; Javanshir, H.; Vaseghi, F.

    Berth scheduling is the process of determining the time and position at which each arriving ship will berth. This paper attempts to minimize the service time to ships: after introducing a proposed mathematical model, it considers the berth allocation problem in the form of a mixed integer nonlinear program. Then, to validate the proposed model, the results of Imai et al.'s model are used for comparison. The results indicate that, because the number of nonlinear variables in the proposed model is smaller than in the prior model, the proposed model can be solved in less time than the prior model.
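
    To make the problem statement concrete, the sketch below solves a toy berth allocation instance by brute force: every assignment of ships to berths and every service order is enumerated, and the total service (flow) time is minimized. The handling times are invented; a real MINLP formulation replaces this enumeration with integer variables and a solver.

```python
from itertools import permutations, product

# Toy instance: handling time of each ship at each of two berths (invented).
handling = {
    "S1": [3, 4], "S2": [2, 5], "S3": [4, 2], "S4": [3, 3],
}
ships, n_berths = list(handling), 2

def total_service_time(assignment):
    """Sum of (waiting + handling) over all ships, served FCFS per berth."""
    total = 0
    for berth in range(n_berths):
        clock = 0
        for ship in assignment[berth]:
            clock += handling[ship][berth]   # finish time of this ship
            total += clock                   # its service (flow) time
    return total

best = None
for labels in product(range(n_berths), repeat=len(ships)):      # berth choice
    groups = [[s for s, b in zip(ships, labels) if b == k]
              for k in range(n_berths)]
    for orders in product(*[permutations(g) for g in groups]):  # service order
        t = total_service_time(list(orders))
        if best is None or t < best[0]:
            best = (t, orders)
print("minimum total service time:", best)
```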

  2. Geospatial optimization of siting large-scale solar projects

    USGS Publications Warehouse

    Macknick, Jordan; Quinby, Ted; Caulfield, Emmet; Gerritsen, Margot; Diffendorfer, James E.; Haines, Seth S.

    2014-01-01

    guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.

  3. Optimization of Large Scale HEP Data Analysis in LHCb

    NASA Astrophysics Data System (ADS)

    Remenska, Daniela; Aaij, Roel; Raven, Gerhard; Merk, Marcel; Templon, Jeff; Bril, Reinder J.; LHCb Collaboration

    2011-12-01

    Observation has led to the conclusion that the physics analysis jobs run by LHCb physicists on a local computing farm (i.e. non-grid) require more efficient access to the data which resides on the Grid. Our experiments have shown that the I/O-bound nature of the analysis jobs, in combination with the latency of the remote access protocols (e.g. rfio, dcap), causes low CPU efficiency for these jobs. In addition to causing low CPU efficiency, the remote access protocols give rise to high overhead (in terms of the amount of data transferred). This paper gives an overview of the concept of pre-fetching and caching input files in the proximity of the processing resources, which is exploited to cope with the I/O-bound analysis jobs. The files are copied from Grid storage elements (using GridFTP) while computations are performed concurrently, inspired by a similar idea used in the ATLAS experiment. The results illustrate that this file staging approach is relatively insensitive to the original location of the data, and a significant improvement can be achieved in terms of the CPU efficiency of an analysis job. Dealing with the scalability of such a solution in the Grid environment is discussed briefly.
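
    A minimal sketch of the staging idea, assuming local file copies as a stand-in for GridFTP transfers from storage elements: the next input file is copied in a background thread while the current one is processed, which is what hides the wide-area latency from the I/O-bound job.

```python
import os, shutil, tempfile
from concurrent.futures import ThreadPoolExecutor

def stage(src, cache_dir):
    """Copy one input file into the local cache. In the real system this
    would be a GridFTP transfer; shutil.copy is a stand-in."""
    dst = os.path.join(cache_dir, os.path.basename(src))
    shutil.copy(src, dst)
    return dst

def process(path):
    with open(path, "rb") as f:   # stand-in for the I/O-bound analysis job
        return len(f.read())

def run_analysis(remote_files, cache_dir):
    results = []
    with ThreadPoolExecutor(max_workers=1) as stager:
        future = stager.submit(stage, remote_files[0], cache_dir)
        for nxt in remote_files[1:] + [None]:
            local = future.result()             # wait for the staged copy
            if nxt is not None:
                future = stager.submit(stage, nxt, cache_dir)  # prefetch next
            results.append(process(local))      # compute while the copy runs
    return results

# Demo with temporary files standing in for Grid storage.
src_dir, cache = tempfile.mkdtemp(), tempfile.mkdtemp()
files = []
for i in range(3):
    p = os.path.join(src_dir, f"data{i}.bin")
    with open(p, "wb") as fh:
        fh.write(os.urandom(1 << 20))
    files.append(p)
print(run_analysis(files, cache))
```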

  4. Solving mixed integer nonlinear programming problems using spiral dynamics optimization algorithm

    NASA Astrophysics Data System (ADS)

    Kania, Adhe; Sidarto, Kuntjoro Adji

    2016-02-01

    Many engineering and practical problems can be modeled as mixed integer nonlinear programs. This paper proposes to solve such problems with a modified version of the spiral dynamics inspired optimization method of Tamura and Yasuda. Four test cases have been examined, including problems in engineering and sport. The method succeeds in obtaining the optimal result in all test cases.
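
    For readers unfamiliar with the method of Tamura and Yasuda, the sketch below implements the continuous core of spiral dynamics optimization: every search point is rotated and contracted about the current best point. The rotation angle, contraction rate, and population size are common illustrative settings, and the paper's mixed-integer handling (e.g. rounding integer components) is omitted.

```python
import numpy as np

def spiral_optimize(f, bounds, n_points=30, n_iter=200,
                    r=0.95, theta=np.pi / 4, seed=0):
    """Minimize f over a 2-D box with a basic spiral dynamics step:
    each point rotates by theta and contracts by r about the best point."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    R = r * np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])  # contracting rotation
    X = rng.uniform(lo, hi, size=(n_points, 2))
    best = min(X, key=f).copy()
    for _ in range(n_iter):
        X = best + (X - best) @ R.T      # spiral every point toward the best
        X = np.clip(X, lo, hi)
        cand = min(X, key=f)
        if f(cand) < f(best):
            best = cand.copy()
    return best, f(best)

rosenbrock = lambda p: (1 - p[0])**2 + 100 * (p[1] - p[0]**2)**2
print(spiral_optimize(rosenbrock, ([-2, -2], [2, 2])))
```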

  5. Large-scale linear rankSVM.

    PubMed

    Lee, Ching-Pei; Lin, Chih-Jen

    2014-04-01

    Linear rankSVM is one of the widely used methods for learning to rank. Although its performance may be inferior to nonlinear methods such as kernel rankSVM and gradient boosting decision trees, linear rankSVM is useful for quickly producing a baseline model. Furthermore, following its recent development for classification, linear rankSVM may give competitive performance for large and sparse data. A great deal of work has studied linear rankSVM, with a focus on computational efficiency when the number of preference pairs is large. In this letter, we systematically study existing works, discuss their advantages and disadvantages, and propose an efficient algorithm. We discuss different implementation issues and extensions, with detailed experiments. Finally, we develop a robust linear rankSVM tool for public use. PMID:24479776

  6. Large-scale sparse singular value computations

    NASA Technical Reports Server (NTRS)

    Berry, Michael W.

    1992-01-01

    Four numerical methods for computing the singular value decomposition (SVD) of large sparse matrices on a multiprocessor architecture are presented. Lanczos and subspace iteration-based methods for determining several of the largest singular triplets (singular values and corresponding left- and right-singular vectors) of sparse matrices arising from two practical applications, information retrieval and seismic reflection tomography, are emphasized. The target architectures for implementations are the CRAY-2S/4-128 and Alliant FX/80. The sparse SVD problem is well motivated by recent information-retrieval techniques in which dominant singular values and their corresponding singular vectors of large sparse term-document matrices are desired, and by nonlinear inverse problems from seismic tomography applications which require approximate pseudo-inverses of large sparse Jacobian matrices.
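
    A present-day equivalent of the Lanczos-based approach, sketched with SciPy: an iterative solver extracts the few largest singular triplets of a sparse matrix using only matrix-vector products, so the matrix is never densified. The matrix here is random stand-in data rather than a real term-document matrix.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

# Stand-in for a large sparse term-document matrix (rows: terms, cols: docs).
A = sparse_random(5000, 2000, density=0.001, format="csr", random_state=0)

# Largest 6 singular triplets via an iterative Lanczos-type solver.
U, s, Vt = svds(A, k=6)
order = np.argsort(s)[::-1]          # svds returns ascending singular values
U, s, Vt = U[:, order], s[order], Vt[order]
print("leading singular values:", s)
```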

  7. "Cosmological Parameters from Large Scale Structure"

    NASA Technical Reports Server (NTRS)

    Hamilton, A. J. S.

    2005-01-01

    This grant has provided primary support for graduate student Mark Neyrinck, and some support for the PI and for colleague Nick Gnedin, who helped co-supervise Neyrinck. This award had two major goals: first, to continue to develop and apply methods for measuring galaxy power spectra on large, linear scales, with a view to constraining cosmological parameters; and second, to begin to try to understand galaxy clustering at smaller, nonlinear scales well enough to constrain cosmology from those scales also. Under this grant, the PI and collaborators, notably Max Tegmark, continued to improve their technology for measuring power spectra from galaxy surveys at large, linear scales, and to apply the technology to surveys as the data became available. We believe that our methods are the best in the world. These measurements become the foundation from which we and other groups measure cosmological parameters.

  8. Computing the universe: how large-scale simulations illuminate galaxies and dark energy

    NASA Astrophysics Data System (ADS)

    O'Shea, Brian

    2015-04-01

    High-performance and large-scale computing is absolutely essential to understanding astronomical objects such as stars, galaxies, and the cosmic web. This is because these are structures that operate on physical, temporal, and energy scales that cannot be reasonably approximated in the laboratory, and whose complexity and nonlinearity often defy analytic modeling. In this talk, I show how the growth of computing platforms over time has facilitated our understanding of astrophysical and cosmological phenomena, focusing primarily on galaxies and large-scale structure in the Universe.

  9. Lossless Convexification of Control Constraints for a Class of Nonlinear Optimal Control Problems

    NASA Technical Reports Server (NTRS)

    Blackmore, Lars; Acikmese, Behcet; Carson, John M.,III

    2012-01-01

    In this paper we consider a class of optimal control problems that have continuous-time nonlinear dynamics and nonconvex control constraints. We propose a convex relaxation of the nonconvex control constraints, and prove that the optimal solution to the relaxed problem is the globally optimal solution to the original problem with nonconvex control constraints. This lossless convexification enables a computationally simpler problem to be solved instead of the original problem. We demonstrate the approach in simulation with a planetary soft landing problem involving a nonlinear gravity field.
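
    A minimal sketch of the idea on a double-integrator "landing" problem without gravity (the paper's example has a nonlinear gravity field): the nonconvex annulus constraint rho1 <= ||u|| <= rho2 is replaced by the convex pair ||u|| <= Gamma, rho1 <= Gamma <= rho2, and minimizing total Gamma typically drives the relaxation tight, so the solution also satisfies the original nonconvex constraint. Horizon, bounds, and boundary conditions are invented.

```python
import numpy as np
import cvxpy as cp

# Discretized double integrator: state = [position(2), velocity(2)].
N, dt = 30, 0.2
rho1, rho2 = 0.5, 2.0            # nonconvex bound: rho1 <= ||u|| <= rho2
x = cp.Variable((N + 1, 4))
u = cp.Variable((N, 2))
g = cp.Variable(N)               # slack Gamma from the relaxation

cons = [x[0] == np.array([10.0, 8.0, 0.0, 0.0]), x[N] == np.zeros(4)]
for t in range(N):
    cons += [x[t + 1, :2] == x[t, :2] + dt * x[t, 2:],
             x[t + 1, 2:] == x[t, 2:] + dt * u[t],
             cp.norm(u[t]) <= g[t],      # convex relaxation of the annulus
             g[t] >= rho1, g[t] <= rho2]

# Minimizing total Gamma makes ||u|| = Gamma at the optimum, landing the
# control inside [rho1, rho2] even though that set is nonconvex.
prob = cp.Problem(cp.Minimize(cp.sum(g) * dt), cons)
prob.solve()
print("min ||u_t|| over horizon:",
      min(np.linalg.norm(u.value[t]) for t in range(N)))
```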

  10. Large-scale assembly of colloidal particles

    NASA Astrophysics Data System (ADS)

    Yang, Hongta

    This study reports a simple, roll-to-roll compatible coating technology for producing three-dimensional highly ordered colloidal crystal-polymer composites, colloidal crystals, and macroporous polymer membranes. A vertically beveled doctor blade is utilized to shear align silica microsphere-monomer suspensions to form large-area composites in a single step. The polymer matrix and the silica microspheres can be selectively removed to create colloidal crystals and self-standing macroporous polymer membranes. The thickness of the shear-aligned crystal is correlated with the viscosity of the colloidal suspension and the coating speed, and the correlations can be qualitatively explained by adapting the mechanisms developed for conventional doctor blade coating. Five important research topics related to the application of large-scale three-dimensional highly ordered macroporous films by doctor blade coating are covered in this study. The first topic describes an invention in large-area and low-cost color reflective displays. This invention is inspired by heat pipe technology. The self-standing macroporous polymer films exhibit brilliant colors which originate from the Bragg diffraction of visible light from the three-dimensional highly ordered air cavities. The colors can be easily changed by tuning the size of the air cavities to cover the whole visible spectrum. When the air cavities are filled with a solvent which has the same refractive index as that of the polymer, the macroporous polymer films become completely transparent due to the index matching. When the solvent trapped in the cavities is evaporated by in-situ heating, the sample color changes back to brilliant color. This process is highly reversible and reproducible for thousands of cycles. The second topic reports the achievement of rapid and reversible vapor detection by using 3-D macroporous photonic crystals. Capillary condensation of a condensable vapor in the interconnected macropores leads to the

  11. Population generation for large-scale simulation

    NASA Astrophysics Data System (ADS)

    Hannon, Andrew C.; King, Gary; Morrison, Clayton; Galstyan, Aram; Cohen, Paul

    2005-05-01

    Computer simulation is used to research phenomena ranging from the structure of the space-time continuum to population genetics and future combat.1-3 Multi-agent simulations in particular are now commonplace in many fields.4, 5 By modeling populations whose complex behavior emerges from individual interactions, these simulations help to answer questions about effects where closed form solutions are difficult to solve or impossible to derive.6 To be useful, simulations must accurately model the relevant aspects of the underlying domain. In multi-agent simulation, this means that the modeling must include both the agents and their relationships. Typically, each agent can be modeled as a set of attributes drawn from various distributions (e.g., height, morale, intelligence and so forth). Though these can interact - for example, agent height is related to agent weight - they are usually independent. Modeling relations between agents, on the other hand, adds a new layer of complexity, and tools from graph theory and social network analysis are finding increasing application.7, 8 Recognizing the role and proper use of these techniques, however, remains the subject of ongoing research. We recently encountered these complexities while building large scale social simulations.9-11 One of these, the Hats Simulator, is designed to be a lightweight proxy for intelligence analysis problems. Hats models a "society in a box" consisting of many simple agents, called hats. Hats gets its name from the classic spaghetti western, in which the heroes and villains are known by the color of the hats they wear. The Hats society also has its heroes and villains, but the challenge is to identify which color hat they should be wearing based on how they behave. There are three types of hats: benign hats, known terrorists, and covert terrorists. Covert terrorists look just like benign hats but act like terrorists. Population structure can make covert hat identification significantly more

  12. Wavefront optimized nonlinear microscopy of ex vivo human retinas

    NASA Astrophysics Data System (ADS)

    Gualda, Emilio J.; Bueno, Juan M.; Artal, Pablo

    2010-03-01

    A multiphoton microscope incorporating a Hartmann-Shack (HS) wavefront sensor to control the ultrafast laser beam's wavefront aberrations has been developed. This instrument allowed us to investigate the impact of the laser beam aberrations on two-photon autofluorescence imaging of human retinal tissues. We demonstrated that nonlinear microscopy images are improved when laser beam aberrations are minimized by realigning the laser system cavity under wavefront control. Nonlinear signals from several human retinal anatomical features have been detected for the first time, without the need for fixation or staining procedures. Beyond the improved image quality, this approach reduces the required excitation power levels, minimizing the side effects of phototoxicity within the imaged sample. In particular, this may be important to study the physiology and function of the healthy and diseased retina.

  13. Finite dimensional approximation of a class of constrained nonlinear optimal control problems

    NASA Technical Reports Server (NTRS)

    Gunzburger, Max D.; Hou, L. S.

    1994-01-01

    An abstract framework for the analysis and approximation of a class of nonlinear optimal control and optimization problems is constructed. Nonlinearities occur in both the objective functional and in the constraints. The framework includes an abstract nonlinear optimization problem posed on infinite dimensional spaces and an approximate problem posed on finite dimensional spaces, together with a number of hypotheses concerning the two problems. The framework is used to show that optimal solutions exist, to show that Lagrange multipliers may be used to enforce the constraints, to derive an optimality system from which optimal states and controls may be deduced, and to derive existence results and error estimates for solutions of the approximate problem. The abstract framework and the results derived from that framework are then applied to three concrete control or optimization problems and their approximation by finite element methods. The first involves the von Karman plate equations of nonlinear elasticity, the second, the Ginzburg-Landau equations of superconductivity, and the third, the Navier-Stokes equations for incompressible, viscous flows.

  14. Optimization of a finite difference method for nonlinear wave equations

    NASA Astrophysics Data System (ADS)

    Chen, Miaochao

    2013-07-01

    Wave equations have an important fluid dynamics background and are extensively used in many fields, such as aviation, meteorology, maritime engineering, and water conservancy. This paper is devoted to an explicit difference method for nonlinear wave equations. First, a three-level explicit difference scheme is derived. It is shown that the explicit difference scheme is uniquely solvable and convergent. Moreover, a numerical experiment is conducted to illustrate the theoretical results of the presented method.
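
    The abstract does not reproduce the scheme itself; the sketch below shows a generic three-level explicit (leapfrog) discretization of a nonlinear wave equation, using sine-Gordon as a stand-in nonlinearity. The grid sizes and CFL ratio are illustrative.

```python
import numpy as np

# Three-level explicit (leapfrog) scheme for the nonlinear wave equation
#   u_tt = u_xx - sin(u)      (sine-Gordon, chosen as a concrete example)
# on [0, L] with fixed ends. Stability requires the CFL condition dt <= dx.
L, nx, nt = 20.0, 401, 2000
dx = L / (nx - 1)
dt = 0.5 * dx                        # comfortably inside the CFL limit
lam2 = (dt / dx) ** 2
x = np.linspace(0, L, nx)

u_prev = 4 * np.arctan(np.exp(x - L / 2))   # kink initial condition
u = u_prev.copy()                           # zero initial velocity (1st-order start)
for _ in range(nt):
    u_next = np.empty_like(u)
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + lam2 * (u[2:] - 2 * u[1:-1] + u[:-2])
                    - dt**2 * np.sin(u[1:-1]))
    u_next[0], u_next[-1] = u[0], u[-1]     # hold boundary values fixed
    u_prev, u = u, u_next
print("solution range:", u.min(), u.max())
```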

  15. A Large Scale Virtual Gas Sensor Array

    NASA Astrophysics Data System (ADS)

    Ziyatdinov, Andrey; Fernández-Diaz, Eduard; Chaudry, A.; Marco, Santiago; Persaud, Krishna; Perera, Alexandre

    2011-09-01

    This paper depicts a virtual sensor array that allows the user to generate gas sensor synthetic data while controlling a wide variety of the characteristics of the sensor array response: arbitrary number of sensors, support for multi-component gas mixtures, and full control of the noise in the system, such as sensor drift or sensor aging. The artificial sensor array response is inspired by the response of 17 polymeric sensors to three analytes over 7 months. The main trends in the synthetic gas sensor array, such as sensitivity, diversity, drift and sensor noise, are user controlled. Sensor sensitivity is modeled by an optionally linear or nonlinear (spline based) method. The data generation toolbox is implemented in the open source R language for statistical computing and can be freely accessed as an educational resource or benchmarking reference. The software package permits the design of scenarios with a very large number of sensors (over 10000 sensels), which are employed in the test and benchmarking of neuromorphic models in the Bio-ICT European project NEUROCHEM.
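
    The published toolbox is an R package; the sketch below mimics its core idea in Python under invented parameters: a sensitivity matrix maps a 3-analyte concentration vector through a mild nonlinearity, with multiplicative drift (aging) and additive noise, all user-controllable.

```python
import numpy as np

def sensor_array_response(conc, n_sensors=17, t=0.0, seed=1):
    """Synthetic polymeric-sensor-array response to a 3-analyte mixture.
    Sensitivities, drift rate, and noise level are invented for this
    sketch; the real toolbox offers far richer controls."""
    rng = np.random.default_rng(seed)
    S = rng.uniform(0.2, 1.5, size=(n_sensors, 3))   # sensitivity matrix
    nonlin = np.tanh                                  # mild nonlinearity
    drift = 1.0 + 0.02 * t * rng.uniform(0.5, 1.5, n_sensors)  # sensor aging
    noise = rng.normal(0.0, 0.01, n_sensors)
    return drift * nonlin(S @ conc) + noise

print(sensor_array_response(np.array([0.3, 0.1, 0.5]), t=30.0))
```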

  16. Precision Measurement of Large Scale Structure

    NASA Technical Reports Server (NTRS)

    Hamilton, A. J. S.

    2001-01-01

    The purpose of this grant was to develop and to start to apply new precision methods for measuring the power spectrum and redshift distortions from the anticipated new generation of large redshift surveys. A highlight of work completed during the award period was the application of the new methods developed by the PI to measure the real space power spectrum and redshift distortions of the IRAS PSCz survey, published in January 2000. New features of the measurement include: (1) measurement of power over an unprecedentedly broad range of scales, 4.5 decades in wavenumber, from 0.01 to 300 h/Mpc; (2) at linear scales, not one but three power spectra are measured, the galaxy-galaxy, galaxy-velocity, and velocity-velocity power spectra; (3) at linear scales each of the three power spectra is decorrelated within itself, and disentangled from the other two power spectra (the situation is analogous to disentangling scalar and tensor modes in the Cosmic Microwave Background); and (4) at nonlinear scales the measurement extracts not only the real space power spectrum, but also the full line-of-sight pairwise velocity distribution in redshift space.

  17. Software for large scale tracking studies

    SciTech Connect

    Niederer, J.

    1984-05-01

    Over the past few years, Brookhaven accelerator physicists have been adapting particle tracking programs in planning local storage rings, and lately for SSC reference designs. In addition, the Laboratory is actively considering upgrades to its AGS capabilities aimed at higher proton intensity, polarized proton beams, and heavy ion acceleration. Further activity concerns heavy ion transfer, a proposed booster, and most recently design studies for a heavy ion collider to be joined to this complex. Circumstances have thus encouraged a search for common features among design and modeling programs and their data, and among the corresponding controls efforts for present and tentative machines. Using a version of PATRICIA with nonlinear forces as a vehicle, we have experimented with formal ways of describing accelerator lattice problems to computers, as well as with ways to speed up the calculations for large storage ring models. Code treated by straightforward reorganization has served for SSC explorations. The representation work has led to a relational-database-centered program, LILA, which has desirable properties for dealing with the many thousands of rapidly changing variables in tracking and other model programs. 13 references.

  18. Rapid solution of large-scale systems of equations

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1994-01-01

    The analysis and design of complex aerospace structures requires the rapid solution of large systems of linear and nonlinear equations, eigenvalue extraction for buckling, vibration and flutter modes, structural optimization, and design sensitivity calculation. Computers with multiple processors and vector capabilities can offer substantial computational advantages over traditional scalar computers for these analyses. These computers fall into two categories: shared memory computers and distributed memory computers. This presentation covers general-purpose, highly efficient algorithms for the generation/assembly of element matrices, the solution of systems of linear and nonlinear equations, eigenvalue and design sensitivity analysis, and optimization. All algorithms are coded in FORTRAN for shared memory computers and many are adapted to distributed memory computers. The capability and numerical performance of these algorithms will be addressed.

  19. Autonomic Computing Paradigm For Large Scale Scientific And Engineering Applications

    NASA Astrophysics Data System (ADS)

    Hariri, S.; Yang, J.; Zhang, Y.

    2005-12-01

    Large-scale distributed scientific applications are highly adaptive and heterogeneous in terms of their computational requirements. The computational complexity associated with each computational region or domain varies continuously and dramatically, both in space and time, throughout the whole life cycle of the application execution. Furthermore, the underlying distributed computing environment is similarly complex and dynamic in the availabilities and capacities of the computing resources. These challenges combined make the current paradigms, which are based on passive components and static compositions, ineffectual. The Autonomic Computing paradigm is an approach that efficiently addresses the complexity and dynamism of large scale scientific and engineering applications and realizes the self-management of these applications. In this presentation, we present an Autonomic Runtime Manager (ARM) that supports the development of autonomic applications. The ARM includes two modules: an online monitoring and analysis module and an autonomic planning and scheduling module. The ARM behaves as a closed-loop control system that dynamically controls and manages the execution of the applications at runtime. It regularly senses the state changes of both the applications and the underlying computing resources. It then uses this runtime information and prior knowledge about the application behavior and its physics to identify the appropriate solution methods as well as the required computing and storage resources. Consequently, this approach enables us to develop autonomic applications, which are capable of self-management and self-optimization. We have developed and implemented the autonomic computing paradigms for several large scale applications such as wild fire simulations, simulations of flow through variably saturated geologic formations, and life sciences. The distributed wildfire simulation models the wildfire spread behavior by considering such factors as fuel

  20. Nonlinear optimals in the asymptotic suction boundary layer: Transition thresholds and symmetry breaking

    NASA Astrophysics Data System (ADS)

    Cherubini, S.; De Palma, P.; Robinet, J.-Ch.

    2015-03-01

    The effect of a constant homogeneous wall suction on the nonlinear transient growth of localized finite amplitude perturbations in a boundary-layer flow is investigated. Using a variational technique, nonlinear optimal disturbances are computed for the asymptotic suction boundary layer (ASBL) flow, defined as those finite amplitude disturbances yielding the largest energy growth at a given target time T. It is found that homogeneous wall suction remarkably reduces the optimal energy gain in the nonlinear case. Furthermore, mirror-symmetry breaking of the shape of the optimal perturbation appears when decreasing the Reynolds number from 10 000 to 5000, whereas spanwise mirror-symmetry was a robust feature of the nonlinear optimal perturbations found in the Blasius boundary-layer flow. Direct numerical simulations show that the different evolutions of the symmetric and of the non-symmetric initial perturbations are linked to different mechanisms of transport and tilting of the vortices by the mean flow. By bisecting the initial energy of the nonlinear optimal perturbations, minimal energy thresholds for subcritical transition to turbulence have been obtained. These energy thresholds are found to be 1-4 orders of magnitude smaller than those provided in the literature for other transition scenarios. For low to moderate Reynolds numbers, the energy thresholds are found to scale with Re^-2, suggesting a new scaling law for transition in the ASBL.

  1. Multitree Algorithms for Large-Scale Astrostatistics

    NASA Astrophysics Data System (ADS)

    March, William B.; Ozakin, Arkadas; Lee, Dongryeol; Riegel, Ryan; Gray, Alexander G.

    2012-03-01

    this number every week, resulting in billions of objects. At such scales, even linear-time analysis operations present challenges, particularly since statistical analyses are inherently interactive processes, requiring that computations complete within some reasonable human attention span. The quadratic (or worse) runtimes of straightforward implementations become quickly unbearable. Examples of applications. These analysis subroutines occur ubiquitously in astrostatistical work. We list just a few examples. The need to cross-match objects across different catalogs has led to various algorithms, which at some point perform an AllNN computation. 2-point and higher-order spatial correlations form the basis of spatial statistics, and are utilized in astronomy to compare the spatial structures of two datasets, such as an observed sample and a theoretical sample, for example, forming the basis for two-sample hypothesis testing. Friends-of-friends clustering is often used to identify halos in data from astrophysical simulations. Minimum spanning tree properties have also been proposed as statistics of large-scale structure. Comparison of the distributions of different kinds of objects requires accurate density estimation, for which KDE is the overall statistical method of choice. The prediction of redshifts from optical data requires accurate regression, for which kernel regression is a powerful method. The identification of objects of various types in astronomy, such as stars versus galaxies, requires accurate classification, for which KDA is a powerful method. Overview. In this chapter, we will briefly sketch the main ideas behind recent fast algorithms which achieve, for example, linear runtimes for pairwise-distance problems, or similarly dramatic reductions in computational growth. In some cases, the runtime orders for these algorithms are mathematically provable statements, while in others we have only conjectures backed by experimental observations for the time being
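
    As one example of this flavor of speedup, the AllNN computation mentioned above drops from quadratic to roughly linearithmic cost with a space-partitioning tree; the sketch below does this for a mock 3-D catalog with SciPy's kd-tree. The catalog is random stand-in data.

```python
import numpy as np
from scipy.spatial import cKDTree

# All-nearest-neighbors with a kd-tree, versus the quadratic all-pairs
# scan that becomes unbearable at survey scale.
rng = np.random.default_rng(0)
catalog = rng.uniform(0, 1, size=(100_000, 3))   # mock 3-D object positions

tree = cKDTree(catalog)
dist, idx = tree.query(catalog, k=2)   # k=2: nearest neighbor besides self
nn_dist, nn_idx = dist[:, 1], idx[:, 1]
print("median nearest-neighbor separation:", np.median(nn_dist))
```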

  2. Large scale electromechanical transistor with application in mass sensing

    SciTech Connect

    Jin, Leisheng; Li, Lijie

    2014-12-07

    The nanomechanical transistor (NMT) has evolved from the single electron transistor, a device that operates by shuttling electrons with a self-excited central conductor. The unfavoured aspects of the NMT are the complexity of the fabrication process and of its signal processing unit, which could potentially be overcome by designing much larger devices. This paper reports a new design of a large scale electromechanical transistor (LSEMT), still taking advantage of the principle of shuttling electrons. However, because of the large size, nonlinear electrostatic forces induced by the transistor itself are not sufficient to drive the mechanical member into vibration—an external force has to be used. In this paper, an LSEMT device is modelled, and its new application in mass sensing is postulated using two coupled mechanical cantilevers, with one of them embedded in the transistor. The sensor is capable of detecting added mass by the eigenstate-shift method, reading the change of electrical current from the transistor; this has much higher sensitivity than the conventional eigenfrequency-shift approach used in classical cantilever-based mass sensors. Numerical simulations are conducted to investigate the performance of the mass sensor.
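
    A minimal lumped-parameter sketch of the sensing principle: for two weakly coupled, nearly identical cantilevers, a tiny added mass rotates the eigenvectors (mode shapes) far more than it shifts the eigenfrequencies. The stiffness, coupling, and mass values are invented; the output shows the roughly hundredfold sensitivity advantage of the eigenstate shift.

```python
import numpy as np
from scipy.linalg import eigh

# Two weakly coupled, nearly identical cantilevers (lumped 2-DOF model):
# stiffness k, coupling kc << k, masses m and m + dm.
k, kc, m, dm = 1.0, 0.01, 1.0, 1e-4

def modes(m1):
    K = np.array([[k + kc, -kc],
                  [-kc, k + kc]])
    M = np.diag([m1, m])
    w2, V = eigh(K, M)              # generalized symmetric eigenproblem
    V = V * np.sign(V[0])           # fix the sign convention
    V = V / np.linalg.norm(V, axis=0)
    return np.sqrt(w2), V

w0, V0 = modes(m)
w1, V1 = modes(m + dm)
print("relative eigenfrequency shift:", abs(w1[0] - w0[0]) / w0[0])
print("mode-shape (eigenstate) shift:", np.linalg.norm(V1[:, 0] - V0[:, 0]))
```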

  3. Bias in the effective field theory of large scale structures

    SciTech Connect

    Senatore, Leonardo

    2015-11-05

    We study how to describe collapsed objects, such as galaxies, in the context of the Effective Field Theory of Large Scale Structures. The overdensity of galaxies at a given location and time is determined by the initial tidal tensor, velocity gradients and spatial derivatives of the regions of dark matter that, during the evolution of the universe, ended up at that given location. Similarly to what was recently done for dark matter, we show how this Lagrangian space description can be recovered by upgrading simpler Eulerian calculations. We describe the Eulerian theory. We show that it is perturbatively local in space, but non-local in time, and we explain the observational consequences of this fact. We give an argument for why to a certain degree of accuracy the theory can be considered as quasi time-local and explain what the operator structure is in this case. Furthermore, we describe renormalization of the bias coefficients so that, after this and after upgrading the Eulerian calculation to a Lagrangian one, the perturbative series for galaxies correlation functions results in a manifestly convergent expansion in powers of k/kNL and k/kM, where k is the wavenumber of interest, kNL is the wavenumber associated to the non-linear scale, and kM is the comoving wavenumber enclosing the mass of a galaxy.

  5. Bias in the effective field theory of large scale structures

    NASA Astrophysics Data System (ADS)

    Senatore, Leonardo

    2015-11-01

    We study how to describe collapsed objects, such as galaxies, in the context of the Effective Field Theory of Large Scale Structures. The overdensity of galaxies at a given location and time is determined by the initial tidal tensor, velocity gradients and spatial derivatives of the regions of dark matter that, during the evolution of the universe, ended up at that given location. Similarly to what was recently done for dark matter, we show how this Lagrangian space description can be recovered by upgrading simpler Eulerian calculations. We describe the Eulerian theory. We show that it is perturbatively local in space, but non-local in time, and we explain the observational consequences of this fact. We give an argument for why to a certain degree of accuracy the theory can be considered as quasi time-local and explain what the operator structure is in this case. We describe renormalization of the bias coefficients so that, after this and after upgrading the Eulerian calculation to a Lagrangian one, the perturbative series for galaxies correlation functions results in a manifestly convergent expansion in powers of k/kNL and k/kM, where k is the wavenumber of interest, kNL is the wavenumber associated to the non-linear scale, and kM is the comoving wavenumber enclosing the mass of a galaxy.

  6. Decentralization, stabilization, and estimation of large-scale linear systems

    NASA Technical Reports Server (NTRS)

    Siljak, D. D.; Vukcevic, M. B.

    1976-01-01

    In this short paper we consider three closely related aspects of large-scale systems: decentralization, stabilization, and estimation. A method is proposed to decompose a large linear system into a number of interconnected subsystems with decentralized (scalar) inputs or outputs. The procedure is preliminary to the hierarchic stabilization and estimation of linear systems and is performed on the subsystem level. A multilevel control scheme based upon the decomposition-aggregation method is developed for stabilization of input-decentralized linear systems. Local linear feedback controllers are used to stabilize each decoupled subsystem, while global linear feedback controllers are utilized to minimize the coupling effect among the subsystems. Systems stabilized by the method have a tolerance to a wide class of nonlinearities in subsystem coupling and high reliability with respect to structural perturbations. The proposed output-decentralization and stabilization schemes can be used directly to construct asymptotic state estimators for large linear systems on the subsystem level. The problem of dimensionality is resolved by constructing a number of low-order estimators, thus avoiding the design of a single estimator for the overall system.

  7. Optimal nonlinear estimation for aircraft flight control in wind shear

    NASA Technical Reports Server (NTRS)

    Mulgund, Sandeep S.

    1994-01-01

    The most recent results in an ongoing research effort at Princeton in the area of flight dynamics in wind shear are described. The first undertaking in this project was a trajectory optimization study. The flight path of a medium-haul twin-jet transport aircraft was optimized during microburst encounters on final approach. The assumed goal was to track a reference climb rate during an aborted landing, subject to a minimum airspeed constraint. The results demonstrated that the energy loss through the microburst significantly affected the qualitative nature of the optimal flight path. In microbursts of light to moderate strength, the aircraft was able to track the reference climb rate successfully. In severe microbursts, the minimum airspeed constraint in the optimization forced the aircraft to settle on a climb rate smaller than the target. A tradeoff was forced between the objectives of flight path tracking and stall prevention.

  8. Sufficient observables for large-scale structure in galaxy surveys

    NASA Astrophysics Data System (ADS)

    Carron, J.; Szapudi, I.

    2014-03-01

    Beyond the linear regime, the power spectrum and higher order moments of the matter field no longer capture all cosmological information encoded in density fluctuations. While non-linear transforms have been proposed to extract this information lost to traditional methods, up to now, the way to generalize these techniques to discrete processes was unclear; ad hoc extensions had some success. We pointed out in Carron and Szapudi's paper that the logarithmic transform approximates extremely well the optimal `sufficient statistics', observables that extract all information from the (continuous) matter field. Building on these results, we generalize optimal transforms to discrete galaxy fields. We focus our calculations on the Poisson sampling of an underlying lognormal density field. We solve and test the one-point case in detail, and sketch out the sufficient observables for the multipoint case. Moreover, we present an accurate approximation to the sufficient observables in terms of the mean and spectrum of a non-linearly transformed field. We find that the corresponding optimal non-linear transformation is directly related to the maximum a posteriori Bayesian reconstruction of the underlying continuous field with a lognormal prior as put forward in the paper of Kitaura et al. Thus, simple recipes for realizing the sufficient observables can be built on previously proposed algorithms that have been successfully implemented and tested in simulations.
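
    To make the transform concrete: for the continuous field, the near-optimal sufficient statistic discussed above is (schematically, in our notation rather than the paper's) the logarithmic mapping

        A(\delta) = \ln(1 + \delta),

    under which a lognormal field becomes Gaussian, so that the mean and power spectrum of A carry essentially all of the information; the discrete generalization developed in the paper replaces this with a transform of the Poisson-sampled galaxy counts.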

  9. Large-scale actuating performance analysis of a composite curved piezoelectric actuator

    NASA Astrophysics Data System (ADS)

    Chung, Soon Wan; Hwang, In Seong; Kim, Seung Jo

    2006-02-01

    In this paper, the electromechanical displacements of curved piezoelectric actuators composed of PZT ceramic and laminated composite materials are calculated on the basis of high performance computing technology and the optimal configuration of the composite curved actuator is examined. To accurately predict the local pre-stress in the device due to the mismatch in the coefficients of thermal expansion, carbon/epoxy and glass/epoxy as well as PZT ceramic are numerically modelled by using hexahedral solid elements. Because the modeling of these thin layers increases the number of degrees of freedom, large-scale structural analyses are performed using the PEGASUS supercomputer, which is installed in our laboratory. In the first stage, the curved shape of the actuator and the internal stress in each layer are obtained by cured curvature analysis. Subsequently, the displacement due to the piezoelectric force (which results from the applied voltage) is also calculated. The performance of the composite curved actuator is investigated by comparing the displacements obtained by variation of the thickness and the elastic modulus of laminated composite layers. In order to consider the finite deformation in the first stage of the analysis and include the pre-stress due to the curing process in the second stage, nonlinear finite element analyses are carried out.

  10. Optimized split-step method for modeling nonlinear pulse propagation in fiber Bragg gratings

    SciTech Connect

    Toroker, Zeev; Horowitz, Moshe

    2008-03-15

    We present an optimized split-step method for solving nonlinear coupled-mode equations that model wave propagation in nonlinear fiber Bragg gratings. By separately controlling the spatial and the temporal step size of the solution, we could significantly decrease the run time without significantly affecting the accuracy of the results. The accuracy of the method and the dependence of the error on the algorithm parameters are studied in several examples. Physical considerations are given to determine the required resolution.
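
    The record includes no code; as a minimal sketch of the split-step idea, shown here on the scalar nonlinear Schrödinger equation rather than the paper's coupled-mode grating equations (all parameters hypothetical), the linear substep is advanced in the Fourier domain while the nonlinear substep is applied pointwise, with the propagation step dz chosen independently of the grid resolution dt:

      import numpy as np

      def split_step(u, dz, nz, dt, beta2=-1.0, gamma=1.0):
          """Symmetric split-step Fourier propagation of the scalar NLSE."""
          w = 2*np.pi*np.fft.fftfreq(u.size, d=dt)      # angular frequency grid
          lin_half = np.exp(0.25j*beta2*w**2*dz)        # half of the linear step
          for _ in range(nz):
              u = np.fft.ifft(lin_half*np.fft.fft(u))   # dispersion, half step
              u = u*np.exp(1j*gamma*np.abs(u)**2*dz)    # nonlinearity, full step
              u = np.fft.ifft(lin_half*np.fft.fft(u))   # dispersion, half step
          return u

      t = np.linspace(-20.0, 20.0, 1024, endpoint=False)
      u_out = split_step(1/np.cosh(t), dz=0.01, nz=1000, dt=t[1] - t[0])
      print(np.abs(u_out).max())   # fundamental soliton: peak should stay near 1

    Decoupling the two step sizes, as in the sketch, is the degree of freedom the authors exploit to trade run time against accuracy.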

  11. Probes of large-scale structure in the universe

    NASA Technical Reports Server (NTRS)

    Suto, Yasushi; Gorski, Krzysztof; Juszkiewicz, Roman; Silk, Joseph

    1988-01-01

    A general formalism is developed which shows that the gravitational instability theory for the origin of the large-scale structure of the universe is now capable of critically confronting observational results on cosmic background radiation angular anisotropies, large-scale bulk motions, and large-scale clumpiness in the galaxy counts. The results indicate that presently advocated cosmological models will have considerable difficulty in simultaneously explaining the observational results.

  12. Large-scale recording of astrocyte activity

    PubMed Central

    Nimmerjahn, Axel; Bergles, Dwight E.

    2015-01-01

    Astrocytes are highly ramified glial cells found throughout the central nervous system (CNS). They express a variety of neurotransmitter receptors that can induce widespread chemical excitation, placing these cells in an optimal position to exert global effects on brain physiology. However, the activity patterns of only a small fraction of astrocytes have been examined and techniques to manipulate their behavior are limited. As a result, little is known about how astrocytes modulate CNS function on synaptic, microcircuit, or systems levels. Here, we review current and emerging approaches for visualizing and manipulating astrocyte activity in vivo. Deciphering how astrocyte network activity is controlled in different physiological and pathological contexts is critical for defining their roles in the healthy and diseased CNS. PMID:25665733

  13. A Decentralized Multivariable Robust Adaptive Voltage and Speed Regulator for Large-Scale Power Systems

    NASA Astrophysics Data System (ADS)

    Okou, Francis A.; Akhrif, Ouassima; Dessaint, Louis A.; Bouchard, Derrick

    2013-05-01

    This paper introduces a decentralized multivariable robust adaptive voltage and frequency regulator to ensure the stability of large-scale interconnected generators. Interconnection parameters (i.e. load, line and transformer parameters) are assumed to be unknown. The proposed design approach requires the reformulation of conventional power system models into a multivariable model with generator terminal voltages as state variables, and excitation and turbine valve inputs as control signals. This model, while suitable for the application of modern control methods, introduces problems with regard to current design techniques for large-scale systems. Interconnection terms, which are treated as perturbations, do not meet the common matching condition assumption. A new adaptive method for a certain class of large-scale systems is therefore introduced that does not require the matching condition. The proposed controller consists of nonlinear inputs that cancel some nonlinearities of the model. Auxiliary controls with linear and nonlinear components are used to stabilize the system. They compensate for unknown parameters of the model by updating both the nonlinear component gains and excitation parameters. The adaptation algorithms involve the sigma-modification approach for auxiliary control gains, and the projection approach for excitation parameters to prevent estimation drift. The computation of the matrix gain of the controller's linear component requires the solution of an algebraic Riccati equation and helps to solve the perturbation-mismatching problem. A realistic power system is used to assess the proposed controller's performance. The results show that both stability and transient performance are considerably improved following a severe contingency.

  14. Optimization of the dynamic behavior of strongly nonlinear heterogeneous materials

    NASA Astrophysics Data System (ADS)

    Herbold, Eric B.

    New aspects of strongly nonlinear wave and structural phenomena in granular media are developed numerically, theoretically and experimentally. One-dimensional chains of particles and compressed powder composites are the two main types of materials considered here. Typical granular assemblies consist of linearly elastic spheres or layers of masses and effective nonlinear springs in one-dimensional columns for dynamic testing. These materials are highly sensitive to initial and boundary conditions, making them useful for acoustic and shock-mitigating applications. One-dimensional assemblies of spherical particles are examples of strongly nonlinear systems with unique properties. For example, if initially uncompressed, these materials have a sound speed equal to zero (sonic vacuum), supporting strongly nonlinear compression solitary waves with a finite width. Different types of assembled metamaterials will be presented with a discussion of the material's response to static compression. The acoustic diode effect will be presented, which may be useful in shock mitigation applications. Systems with controlled dissipation will also be discussed from an experimental and theoretical standpoint emphasizing the critical viscosity that defines the transition from an oscillatory to monotonous shock profile. The dynamic compression of compressed powder composites may lead to self-organizing mesoscale structures in two and three dimensions. A reactive granular material composed of a compressed mixture of polytetrafluoroethylene (PTFE), tungsten (W) and aluminum (Al) fine-grain powders exhibits this behavior. Quasistatic, Hopkinson bar, and drop-weight experiments show that composite materials with a high porosity and fine metallic particles exhibit a higher strength than less porous mixtures with larger particles, given the same mass fraction of constituents. A two-dimensional Eulerian hydrocode is implemented to investigate the mechanical deformation and failure of the compressed

  15. On stochastic optimal control of partially observable nonlinear quasi Hamiltonian systems.

    PubMed

    Zhu, Wei-qiu; Ying, Zu-guang

    2004-11-01

    A stochastic optimal control strategy for partially observable nonlinear quasi Hamiltonian systems is proposed. The optimal control forces consist of two parts. The first part is determined by the conditions under which the stochastic optimal control problem of a partially observable nonlinear system is converted into that of a completely observable linear system. The second part is determined by solving the dynamical programming equation derived by applying the stochastic averaging method and stochastic dynamical programming principle to the completely observable linear control system. The response of the optimally controlled quasi Hamiltonian system is predicted by solving the averaged Fokker-Planck-Kolmogorov equation associated with the optimally controlled completely observable linear system and solving the Riccati equation for the estimated error of system states. An example is given to illustrate the procedure and effectiveness of the proposed control strategy. PMID:15495321

  16. A stochastic optimal control strategy for partially observable nonlinear quasi-Hamiltonian systems

    NASA Astrophysics Data System (ADS)

    Ying, Z. G.; Zhu, W. Q.

    2008-02-01

    A stochastic optimal control strategy for partially observable nonlinear quasi-Hamiltonian systems is proposed. The optimal control force consists of two parts. The first part is determined by the conditions under which the stochastic optimal control problem of a partially observable nonlinear system is converted into that of a completely observable linear system. The second part is determined by solving the dynamical programming equation derived by applying the stochastic averaging method and stochastic dynamical programming principle to the completely observable linear control system. The response of the optimally controlled quasi-Hamiltonian system is predicted by solving the averaged Fokker-Planck-Kolmogorov equation associated with the optimally controlled completely observable linear system and solving the Riccati equation for the estimation errors of the system states. An example is given to illustrate the procedure and effectiveness of the proposed control strategy.

  17. Effects of geometric nonlinearities on the response of optimized box beam structures

    NASA Technical Reports Server (NTRS)

    Ragon, S.; Gurdal, Z.

    1993-01-01

    The present minimum-mass designs for a two-spar rectangular box beam were derived on the basis of linear-buckling FEM analysis constraints. In order to ascertain the effects of any geometric nonlinearities on these designs, each was subjected to a geometrically nonlinear FEM analysis. In all cases, the structure collapses below the design load, and does so in a mode which differs from that of linear theory. This discrepancy is attributable to such nonlinear panel-interaction mechanisms as rib-crushing loads. The optimized design is highly sensitive to crushing loads, relative to the nonoptimal design.

  18. Application of numerical optimization techniques to control system design for nonlinear dynamic models of aircraft

    NASA Technical Reports Server (NTRS)

    Lan, C. Edward; Ge, Fuying

    1989-01-01

    Control system design for general nonlinear flight dynamic models is considered through numerical simulation. The design is accomplished through a numerical optimizer coupled with analysis of flight dynamic equations. The general flight dynamic equations are numerically integrated and dynamic characteristics are then identified from the dynamic response. The design variables are determined iteratively by the optimizer to optimize a prescribed objective function which is related to desired dynamic characteristics. Generality of the method allows nonlinear aerodynamic effects and dynamic coupling to be considered in the design process. To demonstrate the method, nonlinear simulation models for F-5A and F-16 configurations are used to design dampers to satisfy specifications on flying qualities and control systems to prevent departure. The results indicate that the present method is simple in formulation and effective in satisfying the design objectives.

  19. Optimal control of nonlinear continuous-time systems in strict-feedback form.

    PubMed

    Zargarzadeh, Hassan; Dierks, Travis; Jagannathan, Sarangapani

    2015-10-01

    This paper proposes a novel optimal tracking control scheme for nonlinear continuous-time systems in strict-feedback form with uncertain dynamics. The optimal tracking problem is transformed into an equivalent optimal regulation problem through a feedforward adaptive control input that is generated by modifying the standard backstepping technique. Subsequently, a neural network-based optimal control scheme is introduced to estimate the cost, or value function, over an infinite horizon for the resulting nonlinear continuous-time systems in affine form when the internal dynamics are unknown. The estimated cost function is then used to obtain the optimal feedback control input; therefore, the overall optimal control input for the nonlinear continuous-time system in strict-feedback form includes the feedforward plus the optimal feedback terms. It is shown that the estimated cost function minimizes the Hamilton-Jacobi-Bellman estimation error in a forward-in-time manner without using any value or policy iterations. Finally, optimal output feedback control is introduced through the design of a suitable observer. Lyapunov theory is utilized to show the overall stability of the proposed schemes without requiring an initial admissible controller. Simulation examples are provided to validate the theoretical results. PMID:26111400

  20. Nonlinear Motion Cueing Algorithm: Filtering at Pilot Station and Development of the Nonlinear Optimal Filters for Pitch and Roll

    NASA Technical Reports Server (NTRS)

    Zaychik, Kirill B.; Cardullo, Frank M.

    2012-01-01

    Telban and Cardullo have developed and successfully implemented the non-linear optimal motion cueing algorithm at the Visual Motion Simulator (VMS) at the NASA Langley Research Center in 2005. The latest version of the non-linear algorithm performed filtering of motion cues in all degrees-of-freedom except for pitch and roll. This manuscript describes the development and implementation of the non-linear optimal motion cueing algorithm for the pitch and roll degrees of freedom. Presented results indicate improved cues in the specified channels as compared to the original design. To further advance motion cueing in general, this manuscript describes modifications to the existing algorithm, which allow for filtering at the location of the pilot's head as opposed to the centroid of the motion platform. The rationale for such a modification to the cueing algorithms is that the location of the pilot's vestibular system must be taken into account, as opposed to only the offset of the centroid of the cockpit relative to the center of rotation. Results provided in this report suggest improved performance of the motion cueing algorithm.

  1. Large-scale structural monitoring systems

    NASA Astrophysics Data System (ADS)

    Solomon, Ian; Cunnane, James; Stevenson, Paul

    2000-06-01

    Extensive structural health instrumentation systems have been installed on three long-span cable-supported bridges in Hong Kong. The quantities measured include environment and applied loads (such as wind, temperature, seismic and traffic loads) and the bridge response to these loadings (accelerations, displacements, and strains). Measurements from over 1000 individual sensors are transmitted to central computing facilities via local data acquisition stations and a fault-tolerant fiber-optic network, and are acquired and processed continuously. The data from the systems is used to provide information on structural load and response characteristics, comparison with design, optimization of inspection, and assurance of continued bridge health. Automated data processing and analysis provides information on important structural and operational parameters. Abnormal events are noted and logged automatically. Information of interest is automatically archived for post-processing. Novel aspects of the instrumentation system include a fluid-based high-accuracy long-span Level Sensing System to measure bridge deck profile and tower settlement. This paper provides an outline of the design and implementation of the instrumentation system. A description of the design and implementation of the data acquisition and processing procedures is also given. Examples of the use of similar systems in monitoring other large structures are discussed.

  2. Large Scale Turbulent Structures in Supersonic Jets

    NASA Technical Reports Server (NTRS)

    Rao, Ram Mohan; Lundgren, Thomas S.

    1997-01-01

    Jet noise is a major concern in the design of commercial aircraft. Studies by various researchers suggest that aerodynamic noise is a major contributor to jet noise. Some of these studies indicate that most of the aerodynamic jet noise due to turbulent mixing occurs when there is a rapid variation in turbulent structure, i.e. rapidly growing or decaying vortices. The objective of this research was to simulate a compressible round jet to study the non-linear evolution of vortices and the resulting acoustic radiations. In particular, to understand the effect of turbulence structure on the noise. An ideal technique to study this problem is Direct Numerical Simulations (DNS), because it provides precise control on the initial and boundary conditions that lead to the turbulent structures studied. It also provides complete 3-dimensional time dependent data. Since the dynamics of a temporally evolving jet are not greatly different from those of a spatially evolving jet, a temporal jet problem was solved, using periodicity in the direction of the jet axis. This enables the application of Fourier spectral methods in the streamwise direction. Physically this means that turbulent structures in the jet are repeated in successive downstream cells instead of being gradually modified downstream into a jet plume. The DNS jet simulation helps us understand the various turbulent scales and mechanisms of turbulence generation in the evolution of a compressible round jet. These accurate flow solutions will be used in future research to estimate near-field acoustic radiation by computing the total outward flux across a surface and determine how it is related to the evolution of the turbulent solutions. Furthermore, these simulations allow us to investigate the sensitivity of acoustic radiations to inlet/boundary conditions, with possible application to active noise suppression. In addition, the data generated can be used to compute various turbulence quantities such as mean

  3. Large Scale Turbulent Structures in Supersonic Jets

    NASA Technical Reports Server (NTRS)

    Rao, Ram Mohan; Lundgren, Thomas S.

    1997-01-01

    Jet noise is a major concern in the design of commercial aircraft. Studies by various researchers suggest that aerodynamic noise is a major contributor to jet noise. Some of these studies indicate that most of the aerodynamic jet noise due to turbulent mixing occurs when there is a rapid variation in turbulent structure, i.e. rapidly growing or decaying vortices. The objective of this research was to simulate a compressible round jet to study the non-linear evolution of vortices and the resulting acoustic radiations. In particular, to understand the effect of turbulence structure on the noise. An ideal technique to study this problem is Direct Numerical Simulations (DNS), because it provides precise control on the initial and boundary conditions that lead to the turbulent structures studied. It also provides complete 3-dimensional time dependent data. Since the dynamics of a temporally evolving jet are not greatly different from those of a spatially evolving jet, a temporal jet problem was solved, using periodicity in the direction of the jet axis. This enables the application of Fourier spectral methods in the streamwise direction. Physically this means that turbulent structures in the jet are repeated in successive downstream cells instead of being gradually modified downstream into a jet plume. The DNS jet simulation helps us understand the various turbulent scales and mechanisms of turbulence generation in the evolution of a compressible round jet. These accurate flow solutions will be used in future research to estimate near-field acoustic radiation by computing the total outward flux across a surface and determine how it is related to the evolution of the turbulent solutions. Furthermore, these simulations allow us to investigate the sensitivity of acoustic radiations to inlet/boundary conditions, with possible application to active noise suppression. In addition, the data generated can be used to compute various turbulence quantities such as mean velocities

  4. Nonlinear Comparison of High-Order and Optimized Finite-Difference Schemes

    NASA Technical Reports Server (NTRS)

    Hixon, R.

    1998-01-01

    The effect of reducing the formal order of accuracy of a finite-difference scheme in order to optimize its high-frequency performance is investigated using the 1-D nonlinear unsteady inviscid Burgers' equation. It is found that the benefits of optimization do carry over into nonlinear applications. Both explicit and compact schemes are compared to Tam and Webb's explicit 7-point Dispersion Relation Preserving scheme as well as a spectral-like compact scheme derived following Lele's work. Results are given for the absolute and L2 errors as a function of time.
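
    The trade-off being optimized can be seen by comparing the modified wavenumber of central-difference stencils with the exact value kh: raising the formal order shrinks the error at low kh, while DRP-style optimization instead widens the band of well-resolved wavenumbers. A small sketch using the classical (non-optimized) coefficients, since the DRP coefficients are not quoted in the record:

      import numpy as np

      def modified_wavenumber(coeffs, kh):
          """k*h of an antisymmetric stencil sum_j a_j (f_{i+j} - f_{i-j})/h."""
          return sum(2*a*np.sin((j + 1)*kh) for j, a in enumerate(coeffs))

      kh = np.linspace(0.01, np.pi, 400)
      schemes = {"2nd order": [1/2],
                 "4th order": [2/3, -1/12],
                 "6th order": [3/4, -3/20, 1/60]}
      for name, a in schemes.items():
          rel_err = np.abs(modified_wavenumber(a, kh) - kh)/kh
          print(name, "resolved up to kh ~", round(kh[rel_err < 0.01].max(), 2))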

  5. Optimal Fitting of Non-linear Detector Pulses with Nonstationary Noise

    NASA Technical Reports Server (NTRS)

    Fixsen, D. J.; Moseley, S. H.; Cabrera, B.; Figueroa-Feliciano, E.; Oegerle, William (Technical Monitor)

    2002-01-01

    Optimal extraction of pulses of constant known shape from a time series with stationary noise is well understood and widely used in detection applications. Applications where high resolution is required over a wide range of input signal amplitudes use much of the dynamic range of the sensor. The noise will in general vary over this signal range, and the response may be a nonlinear function of the energy input. We present an optimal least squares procedure for inferring input energy in such a detector with nonstationary noise and nonlinear energy response.
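
    Schematically (our notation, not the paper's), the procedure is a generalized least-squares fit in which the noise covariance is evaluated along the nonlinear pulse model itself,

        \hat{E} = \arg\min_{E}\, [\,d - s(E)\,]^{\mathsf T}\, N(E)^{-1}\, [\,d - s(E)\,],

    where d is the sampled time series, s(E) is the nonlinear pulse shape for input energy E, and N(E) is the signal-dependent (nonstationary) noise covariance.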

  6. Lunar soft landing rapid trajectory optimization using direct collocation method and nonlinear programming

    NASA Astrophysics Data System (ADS)

    Tu, Lianghui; Yuan, Jianping; Luo, Jianjun; Ning, Xin; Zhou, Ruiwu

    2007-11-01

    Direct collocation methods have been widely used for trajectory optimization. In this paper, the application of a direct optimization method (direct collocation combined with nonlinear programming (NLP)) to lunar probe soft-landing trajectory optimization is introduced. First, the trajectory optimization control problem for lunar probe soft landing is established and the equations of motion are simplified based on some reasonable hypotheses. The performance index is chosen to minimize fuel consumption. The control variables are the thrust attack angle and the engine thrust, and the terminal state constraints are on velocity and altitude. Then, the optimal control problem is transformed into a nonlinear programming problem using the direct collocation method, with the state and control variables at all nodes and collocation nodes selected as optimization parameters. The parameter optimization problem is solved using the SNOPT software package. The simulation results demonstrate that the direct collocation method is not sensitive to the lunar soft-landing initial conditions, and that fairly good optimal solutions can be obtained quickly. Therefore, the direct collocation method is a viable approach to the lunar probe soft-landing trajectory optimization problem.
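
    As a rough illustration of the transcription step described above (not the authors' model or their SNOPT setup), a one-dimensional soft landing can be collocated with trapezoidal defect constraints and handed to a generic NLP solver; all numbers are hypothetical:

      import numpy as np
      from scipy.optimize import minimize

      N, T = 20, 60.0                        # nodes, fixed flight time [s]
      dt = T/(N - 1)
      g, m, u_max = 1.62, 300.0, 1500.0      # lunar gravity, mass, max thrust

      def unpack(z):
          return z[:N], z[N:2*N], z[2*N:]    # altitude, velocity, thrust

      def fuel(z):                           # trapezoidal integral of thrust
          u = unpack(z)[2]
          return dt*(0.5*u[0] + u[1:-1].sum() + 0.5*u[-1])

      def defects(z):                        # trapezoidal collocation constraints
          h, v, u = unpack(z)
          a = u/m - g
          dh = h[1:] - h[:-1] - 0.5*dt*(v[1:] + v[:-1])
          dv = v[1:] - v[:-1] - 0.5*dt*(a[1:] + a[:-1])
          return np.concatenate([dh, dv])

      def boundary(z):                       # h(0)=1000 m, v(0)=-30 m/s, land at rest
          h, v, _ = unpack(z)
          return np.array([h[0] - 1000.0, v[0] + 30.0, h[-1], v[-1]])

      z0 = np.concatenate([np.linspace(1000.0, 0.0, N),
                           np.linspace(-30.0, 0.0, N), np.full(N, m*g)])
      res = minimize(fuel, z0, method="SLSQP",
                     bounds=[(0, None)]*N + [(None, None)]*N + [(0, u_max)]*N,
                     constraints=[{"type": "eq", "fun": defects},
                                  {"type": "eq", "fun": boundary}])
      print(res.success, fuel(res.x))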

  7. Optimal nonlinear coherent mode transitions in Bose-Einstein condensates utilizing spatiotemporal controls

    NASA Astrophysics Data System (ADS)

    Hocker, David; Yan, Julia; Rabitz, Herschel

    2016-05-01

    Bose-Einstein condensates (BECs) offer the potential to examine quantum behavior at large length and time scales, as well as forming promising candidates for quantum technology applications. Thus, the manipulation of BECs using control fields is a topic of prime interest. We consider BECs in the mean-field model of the Gross-Pitaevskii equation (GPE), which contains linear and nonlinear features, both of which are subject to control. In this work we report successful optimal control simulations of a one-dimensional GPE by modulation of the linear and nonlinear terms to stimulate transitions into excited coherent modes. The linear and nonlinear controls are allowed to freely vary over space and time to seek their optimal forms. The determination of the excited coherent modes targeted for optimization is numerically performed through an adaptive imaginary time propagation method. Numerical simulations are performed for optimal control of mode-to-mode transitions between the ground coherent mode and the excited modes of a BEC trapped in a harmonic well. The results show greater than 99 % success for nearly all trials utilizing reasonable initial guesses for the controls, and analysis of the optimal controls reveals primarily direct transitions between initial and target modes. The success of using solely the nonlinearity term as a control opens up further research toward exploring novel control mechanisms inaccessible to linear Schrödinger-type systems.

  8. Toward the development of a Trust-Tech-based methodology for solving mixed integer nonlinear optimization

    NASA Astrophysics Data System (ADS)

    Wang, Bin; Chiang, Hsiao-Dong

    Many applications of smart grid can be formulated as constrained optimization problems. Because of the discrete controls involved in power systems, these problems are essentially mixed-integer nonlinear programs. In this paper, we review the Trust-Tech-based methodology for solving mixed-integer nonlinear optimization. Specifically, we have developed a two-stage Trust-Tech-based methodology to systematically compute all the local optimal solutions for constrained mixed-integer nonlinear programming (MINLP) problems. In the first stage, for a given MINLP problem this methodology starts with the construction of a new, continuous, unconstrained problem through relaxation and the penalty function method. A corresponding dynamical system is then constructed to search for a set of local optimal solutions for the unconstrained problem. In the second stage, a reduced constrained NLP is defined for each local optimal solution by determining and fixing the values of integral variables of the MINLP problem. The Trust-Tech-based method is used to compute a set of local optimal solutions for these reduced NLP problems, from which the optimal solution of the original MINLP problem is determined. A numerical simulation of several testing problems is provided to illustrate the effectiveness of our proposed method.
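
    A compressed sketch of the two-stage idea on a toy problem, with multistart local solves standing in for the Trust-Tech dynamical-system search that is the paper's actual contribution (problem and penalty weight hypothetical):

      import numpy as np
      from scipy.optimize import minimize

      # Toy MINLP: minimize (x - 1.3)^2 + (y - 0.7)^2 with y in {0, 1}
      def relaxed(z, mu=10.0):
          x, y = z                           # stage 1: y relaxed to be continuous
          return (x - 1.3)**2 + (y - 0.7)**2 + mu*y*(1.0 - y)  # penalty -> binary

      best = None
      for y0 in (0.1, 0.9):                  # several starts, several local optima
          z = minimize(relaxed, [0.0, y0]).x
          y_fix = round(z[1])                # stage 2: fix the integer variable
          red = minimize(lambda xx: (xx[0] - 1.3)**2 + (y_fix - 0.7)**2, [z[0]])
          if best is None or red.fun < best[0]:
              best = (red.fun, float(red.x[0]), y_fix)
      print(best)                            # best of the reduced-NLP optima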

  9. A new approach to the Pontryagin maximum principle for nonlinear fractional optimal control problems

    NASA Astrophysics Data System (ADS)

    Ali, Hegagi M.; Pereira, Fernando Lobo; Gama, Sílvio M. A.

    2016-09-01

    In this paper, we discuss a new general formulation of fractional optimal control problems whose performance index is in the fractional integral form and whose dynamics are given by a set of fractional differential equations in the Caputo sense. We use a new approach to prove necessary conditions of optimality in the form of a Pontryagin maximum principle for fractional nonlinear optimal control problems. Moreover, a new method based on a generalization of the Mittag-Leffler function is used to solve this class of fractional optimal control problems. A simple example is provided to illustrate the effectiveness of our main result.
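
    For reference, the two standard objects the abstract relies on are the Caputo derivative of order 0 < α < 1 and the one-parameter Mittag-Leffler function:

        {}^{C}D_{t}^{\alpha} x(t) = \frac{1}{\Gamma(1-\alpha)} \int_{0}^{t} \frac{\dot{x}(\tau)}{(t-\tau)^{\alpha}}\, d\tau,
        \qquad
        E_{\alpha}(z) = \sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma(\alpha k + 1)},

    the latter reducing to the exponential at α = 1, which is why it generalizes the solution of linear differential equations to the fractional setting.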

  10. The Challenge of Large-Scale Literacy Improvement

    ERIC Educational Resources Information Center

    Levin, Ben

    2010-01-01

    This paper discusses the challenge of making large-scale improvements in literacy in schools across an entire education system. Despite growing interest and rhetoric, there are very few examples of sustained, large-scale change efforts around school-age literacy. The paper reviews 2 instances of such efforts, in England and Ontario. After…

  11. INTERNATIONAL WORKSHOP ON LARGE-SCALE REFORESTATION: PROCEEDINGS

    EPA Science Inventory

    The purpose of the workshop was to identify major operational and ecological considerations needed to successfully conduct large-scale reforestation projects throughout the forested regions of the world. "Large-scale" for this workshop means projects where, by human effort, approx...

  12. Using Large-Scale Assessment Scores to Determine Student Grades

    ERIC Educational Resources Information Center

    Miller, Tess

    2013-01-01

    Many Canadian provinces provide guidelines for teachers to determine students' final grades by combining a percentage of students' scores from provincial large-scale assessments with their term scores. This practice is thought to hold students accountable by motivating them to put effort into completing the large-scale assessment, thereby…

  13. Subdifferential of Optimal Value Functions in Nonlinear Infinite Programming

    SciTech Connect

    Huy, N. Q. Giang, N. D.; Yao, J.-C.

    2012-02-15

    This paper presents an exact formula for computing the normal cones of the constraint set mapping including the Clarke normal cone and the Mordukhovich normal cone in infinite programming under the extended Mangasarian-Fromovitz constraint qualification condition. Then, we derive an upper estimate as well as an exact formula for the limiting subdifferential of the marginal/optimal value function in a general Banach space setting.

  14. Nonlinear Resonant Oscillations of Gas in Optimized Acoustical Resonators and the Effect of Central Blockage

    NASA Technical Reports Server (NTRS)

    Li, Xiaofan; Finkbeiner, Joshua; Raman, Ganesh; Daniels, Christopher; Steinetz, Bruce M.

    2003-01-01

    Optimizing resonator shapes for maximizing the ratio of maximum to minimum gas pressure at an end of the resonator is investigated numerically. It is well known that the resonant frequencies and the nonlinear standing waveform in an acoustical resonator strongly depend on the resonator geometry. A quasi-Newton type scheme was used to find optimized axisymmetric resonator shapes achieving the maximum pressure compression ratio with an acceleration of constant amplitude. The acoustical field was solved using a one-dimensional model, and the resonance frequency shift and hysteresis effects were obtained through an automation scheme based on continuation method. Results are presented for optimizing three types of geometry: a cone, a horn-cone and a half cosine-shape. For each type, different optimized shapes were found when starting with different initial guesses. Further, the one-dimensional model was modified to study the effect of an axisymmetric central blockage on the nonlinear standing wave.

  15. Development of a turbomachinery design optimization procedure using a multiple-parameter nonlinear perturbation method

    NASA Technical Reports Server (NTRS)

    Stahara, S. S.

    1984-01-01

    An investigation was carried out to complete the preliminary development of a combined perturbation/optimization procedure and associated computational code for designing optimized blade-to-blade profiles of turbomachinery blades. The overall purpose of the procedures developed is to provide demonstration of a rapid nonlinear perturbation method for minimizing the computational requirements associated with parametric design studies of turbomachinery flows. The method combines the multiple parameter nonlinear perturbation method, successfully developed in previous phases of this study, with the NASA TSONIC blade-to-blade turbomachinery flow solver, and the COPES-CONMIN optimization procedure into a user's code for designing optimized blade-to-blade surface profiles of turbomachinery blades. Results of several design applications and a documented version of the code together with a user's manual are provided.

  16. Stochastic optimal control of partially observable nonlinear quasi-integrable Hamiltonian systems

    NASA Astrophysics Data System (ADS)

    Feng, Ju; Zhu, Weiqiu; Ying, Zuguang

    2010-01-01

    The stochastic optimal control of partially observable nonlinear quasi-integrable Hamiltonian systems is investigated. First, the stochastic optimal control problem of a partially observable nonlinear quasi-integrable Hamiltonian system is converted into that of a completely observable linear system based on a theorem due to Charalambous and Elliot. Then, the converted stochastic optimal control problem is solved by applying the stochastic averaging method and the stochastic dynamical programming principle. The response of the controlled quasi Hamiltonian system is predicted by solving the averaged Fokker-Planck-Kolmogorov equation and the Riccati equation for the estimated error of system states. As an example to illustrate the procedure and effectiveness of the proposed method, the stochastic optimal control problem of a partially observable two-degree-of-freedom quasi-integrable Hamiltonian system is worked out in detail.
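
    The estimation step mentioned in this and the related records above is of Kalman-Bucy type: for the converted completely observable linear system, with schematic system matrices A, C and noise intensities Q, R (notation ours), the error covariance P of the state estimate obeys the Riccati equation

        \dot{P} = A P + P A^{\mathsf T} + Q - P C^{\mathsf T} R^{-1} C P .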

  17. Nonlinear Resonant Oscillations of Gas in Optimized Acoustical Resonators and the Effect of Central Blockage

    NASA Technical Reports Server (NTRS)

    Li, Xiao-Fan; Finkbeiner, Joshua; Raman, Ganesh; Daniels, Christopher; Steinetz, Bruce M.

    2003-01-01

    Optimizing resonator shapes for maximizing the ratio of maximum to minimum gas pressure at an end of the resonator is investigated numerically. It is well known that the resonant frequencies and the nonlinear standing waveform in an acoustical resonator strongly depend on the resonator geometry. A quasi-Newton type scheme was used to find optimized axisymmetric resonator shapes achieving the maximum pressure compression ratio with an acceleration of constant amplitude. The acoustical field was solved using a one-dimensional model, and the resonance frequency shift and hysteresis effects were obtained through an automation scheme based on continuation method. Results are presented for optimizing three types of geometry: a cone, a horn-cone and a half cosine-shape. For each type, different optimized shapes were found when starting with different initial guesses. Further, the one-dimensional model was modified to study the effect of an axisymmetric central blockage on the nonlinear standing wave.

  18. Effects of Design Properties on Parameter Estimation in Large-Scale Assessments

    ERIC Educational Resources Information Center

    Hecht, Martin; Weirich, Sebastian; Siegle, Thilo; Frey, Andreas

    2015-01-01

    The selection of an appropriate booklet design is an important element of large-scale assessments of student achievement. Two design properties that are typically optimized are the "balance" with respect to the positions the items are presented and with respect to the mutual occurrence of pairs of items in the same booklet. The purpose…

  19. [A nonlinear multi-compartment lung model for optimization of breathing airflow pattern].

    PubMed

    Cai, Yongming; Gu, Lingyan; Chen, Fuhua

    2015-02-01

    It is difficult to select the appropriate ventilation mode in clinical mechanical ventilation. This paper presents a nonlinear multi-compartment lung model to address this difficulty. The purpose is to optimize the respiratory airflow pattern: minimizing the work of breathing and the lung volume acceleration during the inspiratory phase, and minimizing the elastic potential energy and the rapidity of airflow rate changes during the expiratory phase. A sigmoidal function is used to smooth the nonlinear equations of the respiratory function. The equations are formulated as a nonlinear boundary value problem (BVP), which is finally solved with a gradient descent method. Experimental results showed that the optimized lung volume and airflow rate had good sensitivity and convergence speed. The results provide a theoretical basis for the development of multivariable controllers for monitoring critically ill mechanically ventilated patients. PMID:25997262

  20. A nonlinearity interval mapping scheme for efficient waste load allocation simulation-optimization analysis

    NASA Astrophysics Data System (ADS)

    Zou, Rui; Liu, Yong; Riverson, John; Parker, Andrew; Carter, Stephen

    2010-08-01

    Applications using simulation-optimization approaches are often limited in practice because of the high computational cost associated with executing the simulation-optimization analysis. This research proposes a nonlinearity interval mapping scheme (NIMS) to overcome the computational barrier of applying the simulation-optimization approach for a waste load allocation analysis. Unlike the traditional response surface methods that use response surface functions to approximate the functional form of the original simulation model, the NIMS approach involves mapping the nonlinear input-output response relationship of a simulation model into an interval matrix, thereby converting the original simulation-optimization model into an interval linear programming model. By using the risk explicit interval linear programming algorithm and an inverse mapping scheme to implicitly resolve nonlinearity in the interval linear programming model, the NIMS approach efficiently obtained near-optimal solutions of the original simulation-optimization problem. The NIMS approach was applied to a case study on Wissahickon Creek in Pennsylvania, with the objective of finding optimal carbonaceous biological oxygen demand and ammonia (NH4) point source waste load allocations, subject to daily average and minimum dissolved oxygen compliance constraints at multiple points along the stream. First, a simulation-optimization model was formulated for this case study. Next, a genetic algorithm was used to solve the problem to produce reference optimal solutions. Finally, the simulation-optimization model was solved using the proposed NIMS, and the obtained solutions were compared with the reference solutions to demonstrate the superior computational efficiency and solution quality of the NIMS.

  1. Non-linear modelling and optimal control of a hydraulically actuated seismic isolator test rig

    NASA Astrophysics Data System (ADS)

    Pagano, Stefano; Russo, Riccardo; Strano, Salvatore; Terzo, Mario

    2013-02-01

    This paper investigates the modelling, parameter identification and control of a unidirectional hydraulically actuated seismic isolator test rig. The plant is characterized by non-linearities such as the valve dead zone and friction. A non-linear model is derived and then employed for parameter identification. The results concerning the model validation are illustrated and they fully confirm the effectiveness of the proposed model. The testing procedure of the isolation systems is based on the definition of a target displacement time history of the sliding table and, consequently, the precision of the table positioning is of primary importance. In order to minimize the test rig tracking error, a suitable control system has to be adopted. The system non-linearities severely limit the performance of classical linear control, and a non-linear control is therefore adopted. The test rig mathematical model is employed for a non-linear control design that minimizes the error between the target table position and the current one. The controller synthesis is carried out without taking any specimen into account. The proposed approach consists of a non-linear optimal control based on the state-dependent Riccati equation (SDRE). Numerical simulations have been performed in order to evaluate the soundness of the designed control with and without the specimen under test. The results confirm that the performance of the proposed non-linear controller is not invalidated by the presence of the specimen.
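
    A minimal sketch of the SDRE loop, on a pendulum-like plant standing in for the test rig model (all parameters hypothetical): factor the dynamics as x' = A(x)x + Bu, solve the continuous algebraic Riccati equation at the current state, and apply u = -K(x)x:

      import numpy as np
      from scipy.linalg import solve_continuous_are

      a, c, b = 9.81, 0.5, 1.0                   # x1'' = a*sin(x1) - c*x1' + b*u
      Q, R = np.diag([10.0, 1.0]), np.array([[0.1]])

      def sdre_gain(x):
          s = np.sinc(x[0]/np.pi)                # sin(x1)/x1, finite at x1 = 0
          A = np.array([[0.0, 1.0], [a*s, -c]])  # state-dependent factorization
          B = np.array([[0.0], [b]])
          P = solve_continuous_are(A, B, Q, R)
          return np.linalg.solve(R, B.T @ P)     # feedback gain K(x)

      x, dt = np.array([0.8, 0.0]), 1e-3
      for _ in range(5000):                      # closed-loop Euler rollout
          u = -(sdre_gain(x) @ x)[0]
          x = x + dt*np.array([x[1], a*np.sin(x[0]) - c*x[1] + b*u])
      print(x)                                   # state regulated toward the origin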

  2. Nonlinear stability in reaction-diffusion systems via optimal Lyapunov functions

    NASA Astrophysics Data System (ADS)

    Lombardo, S.; Mulone, G.; Trovato, M.

    2008-06-01

    We define optimal Lyapunov functions to study nonlinear stability of constant solutions to reaction-diffusion systems. A computable and finite radius of attraction for the initial data is obtained. Applications are given to the well-known Brusselator model and a three-species model for the spatial spread of rabies among foxes.

  3. Application of multi-objective nonlinear optimization technique for coordinated ramp-metering

    SciTech Connect

    Haj Salem, Habib; Farhi, Nadir; Lebacque, Jean Patrick E-mail: nadir.frahi@ifsttar.fr

    2015-03-10

    This paper aims at developing a multi-objective nonlinear optimization algorithm applied to coordinated motorway ramp metering. The multi-objective function includes two components: traffic and safety. Off-line simulation studies were performed on the A4 motorway in France, including four on-ramps.

  4. Neural network based adaptive control of nonlinear plants using random search optimization algorithms

    NASA Technical Reports Server (NTRS)

    Boussalis, Dhemetrios; Wang, Shyh J.

    1992-01-01

    This paper presents a method for utilizing artificial neural networks for direct adaptive control of dynamic systems with poorly known dynamics. The neural network weights (controller gains) are adapted in real time using state measurements and a random search optimization algorithm. The results are demonstrated via simulation using two highly nonlinear systems.
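
    A minimal sketch of the scheme on a toy pendulum (architecture and plant hypothetical): the controller is a small tanh network, and a pure random search keeps any weight perturbation that lowers the simulated closed-loop cost:

      import numpy as np

      rng = np.random.default_rng(0)

      def rollout(w, steps=400, dt=0.02):
          """Closed-loop cost of a one-hidden-layer tanh controller."""
          W1, W2 = w[:8].reshape(4, 2), w[8:].reshape(1, 4)
          x, cost = np.array([1.0, 0.0]), 0.0
          for _ in range(steps):
              u = (W2 @ np.tanh(W1 @ x)).item()
              x = x + dt*np.array([x[1], -9.81*np.sin(x[0]) - 0.2*x[1] + u])
              cost += dt*(x @ x + 0.01*u*u)      # quadratic regulation cost
          return cost

      w = rng.normal(0.0, 0.5, 12)               # controller gains (NN weights)
      best = rollout(w)
      for _ in range(300):                       # random search: keep improvements
          trial = w + rng.normal(0.0, 0.1, 12)
          cost = rollout(trial)
          if cost < best:
              w, best = trial, cost
      print(best)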

  5. A hybrid symbolic/finite-element algorithm for solving nonlinear optimal control problems

    NASA Technical Reports Server (NTRS)

    Bless, Robert R.; Hodges, Dewey H.

    1991-01-01

    The general code described is capable of solving difficult nonlinear optimal control problems by using finite elements and a symbolic manipulator. Quick and accurate solutions are obtained with a minimum for user interaction. Since no user programming is required for most problems, there are tremendous savings to be gained in terms of time and money.

  6. Large scale radio/X-ray jets in microquasars

    NASA Astrophysics Data System (ADS)

    Corbel, Stephane; Tzioumis, Anastasios; Fender, Rob; Kaaret, Philip; Orosz, Jerry; Tomsick, John; Loh, Alan

    2014-10-01

    The discovery with ATCA of large scale radio lobes around the microquasar XTE J1550-564 has led to the discovery with Chandra (for the first time) of moving relativistic X-ray jets in a galactic accreting source. The lobes are likely due to the interaction of relativistic plasma with the ISM. This ATCA proposal has allowed a similar discovery in H 1743-322, suggesting that such lobes may be a common occurrence in the Galaxy. Recently, we have witnessed with ATCA the formation of similar lobes in the black hole GX 339-4. We propose to use the Compact Array to continue our search for radio lobes in microquasars that have been active in the past years. The proposed observations are optimized to discover and study (flux evolution, morphology, SED, proper motion, ...) new radio lobes from microquasars. This will have implications not only for the study of jets from Galactic X-ray binaries, but also for our understanding of relativistic jets from active galactic nuclei (AGN).

  7. Scalable NIC-based reduction on large-scale clusters

    SciTech Connect

    Moody, A.; Fernández, J. C.; Petrini, F.; Panda, Dhabaleswar K.

    2003-01-01

    Many parallel algorithms require efficient support for reduction collectives. Over the years, researchers have developed optimal reduction algorithms by taking into account system size, data size, and the complexities of reduction operations. However, all of these algorithms have assumed that the reduction processing takes place on the host CPU. Modern Network Interface Cards (NICs) sport programmable processors with substantial memory and thus introduce a fresh variable into the equation. This raises the following interesting challenge: can we take advantage of modern NICs to implement fast reduction operations? In this paper, we take on this challenge in the context of large-scale clusters. Through experiments on the 960-node, 1920-processor ASCI Linux Cluster (ALC) located at the Lawrence Livermore National Laboratory, we show that NIC-based reductions indeed perform with reduced latency and improved consistency over host-based algorithms for the common case, and that these benefits scale as the system grows. In the largest configuration tested (1812 processors), our NIC-based algorithm can sum a single-element vector in 73 microseconds with 32-bit integers and in 118 microseconds with 64-bit floating-point numbers. These results represent an improvement, respectively, of 121% and 39% with respect to the production-level MPI library.
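
    The combining pattern behind such reductions is a binomial tree needing ceil(log2 P) rounds; a sequential sketch of the arithmetic (the paper's point is offloading these rounds to the NIC processors, which this toy does not model):

      import numpy as np

      def tree_reduce(values):
          """Binomial-tree combine: ceil(log2 P) communication rounds."""
          vals, p, step = list(values), len(values), 1
          while step < p:
              for i in range(0, p, 2*step):      # each pair combines in one round
                  if i + step < p:
                      vals[i] = vals[i] + vals[i + step]
              step *= 2
          return vals[0]

      print(tree_reduce(np.arange(1812)))        # 1812 ranks -> 11 rounds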

  8. Large-Scale NASA Science Applications on the Columbia Supercluster

    NASA Technical Reports Server (NTRS)

    Brooks, Walter

    2005-01-01

    Columbia, NASA's newest 61 teraflops supercomputer that became operational late last year, is a highly integrated Altix cluster of 10,240 processors, and was named to honor the crew of the Space Shuttle lost in early 2003. Constructed in just four months, Columbia increased NASA's computing capability ten-fold, and revitalized the Agency's high-end computing efforts. Significant cutting-edge science and engineering simulations in the areas of space and Earth sciences, as well as aeronautics and space operations, are already occurring on this largest operational Linux supercomputer, demonstrating its capacity and capability to accelerate NASA's space exploration vision. The presentation will describe how an integrated environment consisting not only of next-generation systems, but also modeling and simulation, high-speed networking, parallel performance optimization, and advanced data analysis and visualization, is being used to reduce design cycle time, accelerate scientific discovery, conduct parametric analysis of multiple scenarios, and enhance safety during the life cycle of NASA missions. The talk will conclude by discussing how NAS partnered with various NASA centers, other government agencies, computer industry, and academia, to create a national resource in large-scale modeling and simulation.

  9. Soft-Pion theorems for large scale structure

    SciTech Connect

    Horn, Bart; Hui, Lam; Xiao, Xiao E-mail: lhui@astro.columbia.edu

    2014-09-01

    Consistency relations — which relate an N-point function to a squeezed (N+1)-point function — are useful in large scale structure (LSS) because of their non-perturbative nature: they hold even if the N-point function is deep in the nonlinear regime, and even if they involve astrophysically messy galaxy observables. The non-perturbative nature of the consistency relations is guaranteed by the fact that they are symmetry statements, in which the velocity plays the role of the soft pion. In this paper, we address two issues: (1) how to derive the relations systematically using the residual coordinate freedom in the Newtonian gauge, and relate them to known results in ζ-gauge (often used in studies of inflation); (2) under what conditions the consistency relations are violated. In the non-relativistic limit, our derivation reproduces the Newtonian consistency relation discovered by Kehagias and Riotto and Peloso and Pietroni. More generally, there is an infinite set of consistency relations, as is known in ζ-gauge. There is a one-to-one correspondence between symmetries in the two gauges; in particular, the Newtonian consistency relation follows from the dilation and special conformal symmetries in ζ-gauge. We probe the robustness of the consistency relations by studying models of galaxy dynamics and biasing. We give a systematic list of conditions under which the consistency relations are violated; violations occur if the galaxy bias is non-local in an infrared divergent way. We emphasize the relevance of the adiabatic mode condition, as distinct from symmetry considerations. As a by-product of our investigation, we discuss a simple fluid Lagrangian for LSS.

  10. Fracture-induced softening for large-scale ice dynamics

    NASA Astrophysics Data System (ADS)

    Albrecht, T.; Levermann, A.

    2014-04-01

    Floating ice shelves can exert a retentive and hence stabilizing force onto the inland ice sheet of Antarctica. However, this effect has been observed to diminish by the dynamic effects of fracture processes within the protective ice shelves, leading to accelerated ice flow and hence to a sea-level contribution. In order to account for the macroscopic effect of fracture processes on large-scale viscous ice dynamics (i.e., ice-shelf scale) we apply a continuum representation of fractures and related fracture growth into the prognostic Parallel Ice Sheet Model (PISM) and compare the results to observations. To this end we introduce a higher order accuracy advection scheme for the transport of the two-dimensional fracture density across the regular computational grid. Dynamic coupling of fractures and ice flow is attained by a reduction of effective ice viscosity proportional to the inferred fracture density. This formulation implies the possibility of non-linear threshold behavior due to self-amplified fracturing in shear regions triggered by small variations in the fracture-initiation threshold. As a result of prognostic flow simulations, sharp across-flow velocity gradients appear in fracture-weakened regions. These modeled gradients compare well in magnitude and location with those in observed flow patterns. This model framework is in principle expandable to grounded ice streams and provides simple means of investigating climate-induced effects on fracturing (e.g., hydro fracturing) and hence on the ice flow. It further constitutes a physically sound basis for an enhanced fracture-based calving parameterization.

  11. Soft-Pion theorems for large scale structure

    NASA Astrophysics Data System (ADS)

    Horn, Bart; Hui, Lam; Xiao, Xiao

    2014-09-01

    Consistency relations — which relate an N-point function to a squeezed (N+1)-point function — are useful in large scale structure (LSS) because of their non-perturbative nature: they hold even if the N-point function is deep in the nonlinear regime, and even if they involve astrophysically messy galaxy observables. The non-perturbative nature of the consistency relations is guaranteed by the fact that they are symmetry statements, in which the velocity plays the role of the soft pion. In this paper, we address two issues: (1) how to derive the relations systematically using the residual coordinate freedom in the Newtonian gauge, and relate them to known results in ζ-gauge (often used in studies of inflation); (2) under what conditions the consistency relations are violated. In the non-relativistic limit, our derivation reproduces the Newtonian consistency relation discovered by Kehagias & Riotto and Peloso & Pietroni. More generally, there is an infinite set of consistency relations, as is known in ζ-gauge. There is a one-to-one correspondence between symmetries in the two gauges; in particular, the Newtonian consistency relation follows from the dilation and special conformal symmetries in ζ-gauge. We probe the robustness of the consistency relations by studying models of galaxy dynamics and biasing. We give a systematic list of conditions under which the consistency relations are violated; violations occur if the galaxy bias is non-local in an infrared divergent way. We emphasize the relevance of the adiabatic mode condition, as distinct from symmetry considerations. As a by-product of our investigation, we discuss a simple fluid Lagrangian for LSS.

  12. Testing gravity using large-scale redshift-space distortions

    NASA Astrophysics Data System (ADS)

    Raccanelli, Alvise; Bertacca, Daniele; Pietrobon, Davide; Schmidt, Fabian; Samushia, Lado; Bartolo, Nicola; Doré, Olivier; Matarrese, Sabino; Percival, Will J.

    2013-11-01

    We use luminous red galaxies from the Sloan Digital Sky Survey (SDSS) II to test the cosmological structure growth in two alternatives to the standard Λ cold dark matter (ΛCDM)+general relativity (GR) cosmological model. We compare observed three-dimensional clustering in SDSS Data Release 7 (DR7) with theoretical predictions for the standard vanilla ΛCDM+GR model, unified dark matter (UDM) cosmologies and the normal branch Dvali-Gabadadze-Porrati (nDGP). In computing the expected correlations in UDM cosmologies, we derive a parametrized formula for the growth factor in these models. For our analysis we apply the methodology tested in Raccanelli et al. and use the measurements of Samushia et al. that account for survey geometry, non-linear and wide-angle effects and the distribution of pair orientation. We show that the estimate of the growth rate is potentially degenerate with wide-angle effects, meaning that extremely accurate measurements of the growth rate on large scales will need to take such effects into account. We use measurements of the zeroth and second-order moments of the correlation function from SDSS DR7 data and the Large Suite of Dark Matter Simulations (LasDamas), and perform a likelihood analysis to constrain the parameters of the models. Using information on the clustering up to rmax = 120 h-1 Mpc, and after marginalizing over the bias, we find, for UDM models, a speed of sound c∞ ≤ 6.1e-4, and, for the nDGP model, a cross-over scale rc ≥ 340 Mpc, at 95 per cent confidence level.

  13. Pareto optimal calibration of highly nonlinear reactive transport groundwater models using particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Siade, A. J.; Prommer, H.; Welter, D.

    2014-12-01

    Groundwater management and remediation require the implementation of numerical models in order to evaluate potential anthropogenic impacts on aquifer systems. In many situations, the numerical model must be able to simulate not only groundwater flow and transport but also geochemical and biological processes. Each process being simulated carries with it a set of parameters that must be identified, along with differing potential sources of model-structure error. Various data types are often collected in the field and then used to calibrate the numerical model; however, these data types can represent very different processes and can subsequently be sensitive to the model parameters in extremely complex ways. Therefore, developing an appropriate weighting strategy to address the contributions of each data type to the overall least-squares objective function is not straightforward. This is further compounded by the presence of potential sources of model-structure error that manifest themselves differently for each observation data type. Finally, reactive transport models are highly nonlinear, which can lead to convergence failure for algorithms operating on the assumption of local linearity. In this study, we propose a variation of the popular particle swarm optimization algorithm to address trade-offs associated with the calibration of one data type over another. This method removes the need to specify weights between observation groups and instead produces a multi-dimensional Pareto front that illustrates the trade-offs between data types. We use the PEST++ run manager, along with the standard PEST input/output structure, to implement parallel programming across multiple desktop computers using TCP/IP communications. This allows for very large swarms of particles without the need for a supercomputing facility. The method was applied to a case study in which modeling was used to gain insight into the mobilization of arsenic at a deepwell injection site
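
    A minimal sketch of the Pareto-front idea, under assumptions of my own: two synthetic objectives stand in for the data-type misfits, and names such as misfit_heads are illustrative, not the PEST++ interface.

        import numpy as np

        rng = np.random.default_rng(0)

        def misfit_heads(x):      # stand-in for the head-data objective
            return np.sum((x - 1.0) ** 2)

        def misfit_chemistry(x):  # stand-in for the geochemical-data objective
            return np.sum((x + 1.0) ** 2)

        def dominates(f, g):
            return all(a <= b for a, b in zip(f, g)) and any(a < b for a, b in zip(f, g))

        dim, n_particles, n_iter = 4, 30, 200
        x = rng.uniform(-3, 3, (n_particles, dim))
        v = np.zeros_like(x)
        pbest = x.copy()
        pbest_f = [(misfit_heads(p), misfit_chemistry(p)) for p in x]
        archive = []          # non-dominated (position, objectives) pairs: the Pareto front

        for _ in range(n_iter):
            for i in range(n_particles):
                f = (misfit_heads(x[i]), misfit_chemistry(x[i]))
                if dominates(f, pbest_f[i]):
                    pbest[i], pbest_f[i] = x[i].copy(), f
                if not any(dominates(g, f) for _, g in archive):
                    # drop archive members the new point dominates, then add it
                    archive = [(p, g) for p, g in archive if not dominates(f, g)]
                    archive.append((x[i].copy(), f))
            leader = archive[rng.integers(len(archive))][0]   # random Pareto leader
            r1, r2 = rng.random((2, n_particles, dim))
            v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (leader - x)
            x = x + v

        print(f"{len(archive)} non-dominated solutions approximate the trade-off front")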

  14. Distribution probability of large-scale landslides in central Nepal

    NASA Astrophysics Data System (ADS)

    Timilsina, Manita; Bhandary, Netra P.; Dahal, Ranjan Kumar; Yatabe, Ryuichi

    2014-12-01

    Large-scale landslides in the Himalaya are defined as huge, deep-seated landslide masses that occurred in the geological past. They are widely distributed in the Nepal Himalaya. The steep topography and high local relief provide high potential for such failures, whereas the dynamic geology and adverse climatic conditions play a key role in the occurrence and reactivation of such landslides. The major geoscientific problems associated with such large-scale landslides are 1) difficulties in their identification and delineation, 2) their role as sources of small-scale failures, and 3) their reactivation. Only a few scientific publications concerning large-scale landslides in Nepal exist. In this context, the identification and quantification of large-scale landslides and their potential distribution are crucial. Therefore, this study explores the distribution of large-scale landslides in the Lesser Himalaya. It provides simple guidelines for identifying large-scale landslides based on their typical characteristics and using a 3D schematic diagram. Based on the spatial distribution of landslides, geomorphological/geological parameters and logistic regression, an equation for the large-scale landslide distribution probability is also derived. The equation is validated by applying it to another area: there, the area under the receiver operating characteristic curve of the landslide distribution probability is 0.699, and the distribution probability could explain > 65% of existing landslides. Therefore, the regression equation can be applied to areas of the Lesser Himalaya of central Nepal with similar geological and geomorphological conditions.
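
    A minimal sketch of how such a logistic-regression distribution probability is evaluated and validated; the coefficients and predictor names below are invented for illustration, not the paper's fitted values.

        import numpy as np

        # Hypothetical fitted coefficients: intercept, slope angle, relief, distance to fault
        beta = np.array([-2.1, 0.035, 0.0021, -0.0004])

        def landslide_probability(slope_deg, relief_m, fault_dist_m):
            """Logistic model: P = 1 / (1 + exp(-(b0 + b1*x1 + b2*x2 + b3*x3)))."""
            z = beta[0] + beta[1] * slope_deg + beta[2] * relief_m + beta[3] * fault_dist_m
            return 1.0 / (1.0 + np.exp(-z))

        def auc(prob, observed):
            """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
            order = np.argsort(prob)
            ranks = np.empty_like(order, dtype=float)
            ranks[order] = np.arange(1, len(prob) + 1)
            pos = observed == 1
            n1, n0 = pos.sum(), (~pos).sum()
            return (ranks[pos].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

        # Validation against a synthetic inventory of mapped landslide cells
        rng = np.random.default_rng(1)
        slope = rng.uniform(5, 45, 500)
        relief = rng.uniform(100, 2000, 500)
        dist = rng.uniform(0, 5000, 500)
        p = landslide_probability(slope, relief, dist)
        obs = (rng.random(500) < p).astype(int)   # synthetic "observed" landslides
        print(f"AUC = {auc(p, obs):.3f}")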

  15. State of the Art in Large-Scale Soil Moisture Monitoring

    NASA Technical Reports Server (NTRS)

    Ochsner, Tyson E.; Cosh, Michael Harold; Cuenca, Richard H.; Dorigo, Wouter; Draper, Clara S.; Hagimoto, Yutaka; Kerr, Yan H.; Larson, Kristine M.; Njoku, Eni Gerald; Small, Eric E.; Zreda, Marek G.

    2013-01-01

    Soil moisture is an essential climate variable influencing land-atmosphere interactions, an essential hydrologic variable impacting rainfall-runoff processes, an essential ecological variable regulating net ecosystem exchange, and an essential agricultural variable constraining food security. Large-scale soil moisture monitoring has advanced in recent years, creating opportunities to transform scientific understanding of soil moisture and related processes. These advances are being driven by researchers from a broad range of disciplines, but this complicates collaboration and communication. For some applications, the science required to utilize large-scale soil moisture data is poorly developed. In this review, we describe the state of the art in large-scale soil moisture monitoring and identify some critical needs for research to optimize the use of increasingly available soil moisture data. We review representative examples of 1) emerging in situ and proximal sensing techniques, 2) dedicated soil moisture remote sensing missions, 3) soil moisture monitoring networks, and 4) applications of large-scale soil moisture measurements. Significant near-term progress seems possible in the use of large-scale soil moisture data for drought monitoring. Assimilation of soil moisture data for meteorological or hydrologic forecasting also shows promise, but significant challenges related to model structures and model errors remain. Little progress has been made yet in the use of large-scale soil moisture observations within the context of ecological or agricultural modeling. Opportunities abound to advance the science and practice of large-scale soil moisture monitoring for the sake of improved Earth system monitoring, modeling, and forecasting.

  16. Method for nonlinear optimization for gas tagging and other systems

    DOEpatents

    Chen, Ting; Gross, Kenny C.; Wegerich, Stephan

    1998-01-01

    A method and system for providing nuclear fuel rods with a configuration of isotopic gas tags. The method includes selecting a true location of a first gas tag node, selecting initial locations for the remaining n-1 nodes using target gas tag compositions, and generating a set of random gene pools with L nodes. A Hopfield network is applied to compute an energy, or cost, for each of the L gene pools, and selected constraints are used to establish minimum-energy states that identify optimal gas tag nodes, with each energy compared to a convergence threshold. Upon identifying a gas tag node, the procedure continues to establish the next gas tag node until all remaining n nodes have been established.
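
    A rough sketch of the energy-minimization step described above; the energy function, constraint term and weights are invented placeholders, since the patent abstract does not publish them.

        import numpy as np

        rng = np.random.default_rng(2)
        L, n = 8, 5                      # L candidate gene pools, n tag nodes each
        target = rng.uniform(0, 1, n)    # target gas tag composition (illustrative)
        pools = rng.uniform(0, 1, (L, n))

        def energy(pool, penalty=10.0):
            """Hopfield-style cost: misfit to targets plus a separation constraint."""
            misfit = np.sum((pool - target) ** 2)
            # Penalize tag nodes that sit too close together (constraint term)
            gaps = np.diff(np.sort(pool))
            crowding = np.sum(np.maximum(0.05 - gaps, 0.0))
            return misfit + penalty * crowding

        threshold = 1e-3
        for step in range(1000):
            e = np.array([energy(p) for p in pools])
            best = np.argmin(e)
            if e[best] < threshold:      # convergence test against the threshold
                break
            # Perturb each pool toward the current best (gradient-free descent)
            pools += 0.1 * (pools[best] - pools) + 0.01 * rng.normal(size=pools.shape)

        print(f"best energy {e[best]:.4f} after {step} steps; nodes:", np.sort(pools[best]))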

  17. Method for nonlinear optimization for gas tagging and other systems

    DOEpatents

    Chen, T.; Gross, K.C.; Wegerich, S.

    1998-01-06

    A method and system are disclosed for providing nuclear fuel rods with a configuration of isotopic gas tags. The method includes selecting a true location of a first gas tag node, selecting initial locations for the remaining n-1 nodes using target gas tag compositions, and generating a set of random gene pools with L nodes. A Hopfield network is applied to compute an energy, or cost, for each of the L gene pools, and selected constraints are used to establish minimum-energy states that identify optimal gas tag nodes, with each energy compared to a convergence threshold. Upon identifying a gas tag node, the procedure continues to establish the next gas tag node until all remaining n nodes have been established. 6 figs.

  18. Links between small-scale dynamics and large-scale averages and its implication to large-scale hydrology

    NASA Astrophysics Data System (ADS)

    Gong, L.

    2012-04-01

    pixels that could be used to represent the temporal dynamics of a large spatial domain. The derived points or pixels allow a decomposition of the average climate dynamics into a number of patterns of internal variations and change signals. The coupling of sub-sets of climate input to a set of hydrological response units maintains the non-linear nature of the hydrological system. The possibility that the behavior of a large river basin could be studied from a small sub-set of the basin area indicates that model setup, calibration and evaluation are not necessarily tied to downstream gauges. Instead, local observations could be used to set up and evaluate large-scale models. This work could potentially open up possibilities for better setting up and evaluating large-scale hydrological models, and for studying the climate-hydrology interaction with limited data. At the same time, the fact that multiple sets of points or pixels could equally well represent the dynamics of a large domain agrees with the equifinality theory: there exist multiple realizations of different climate-hydrology settings that could lead to the same average behavior. The difference among the multiple sets represents the inherent heterogeneity of the domain. This could indicate new ways to bracket uncertainty for current and future hydrological simulations.

  19. Numerical experience with a class of algorithms for nonlinear optimization using inexact function and gradient information

    NASA Technical Reports Server (NTRS)

    Carter, Richard G.

    1989-01-01

    For optimization problems associated with engineering design, parameter estimation, image reconstruction, and other optimization/simulation applications, low-accuracy function and gradient values are frequently much less expensive to obtain than high-accuracy values. Here, researchers investigate the computational performance of trust region methods for nonlinear optimization when high-accuracy evaluations are unavailable or prohibitively expensive, and confirm earlier theoretical predictions that the algorithm is convergent even with relative gradient errors of 0.5 or more. The proper choice of the amount of accuracy to use in function and gradient evaluations can result in orders-of-magnitude savings in computational cost.
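
    A minimal trust-region sketch under these assumptions: a synthetic smooth objective, artificially noised gradients, and the standard textbook ratio test and radius update rather than Carter's exact algorithm.

        import numpy as np

        rng = np.random.default_rng(3)

        def f(x):
            return 0.5 * x @ x + np.sum(np.cos(x))       # smooth test objective

        def noisy_grad(x, rel_err=0.4):
            g = x - np.sin(x)                             # exact gradient of f
            noise = rng.normal(size=x.size) / np.sqrt(x.size)
            return g + rel_err * np.linalg.norm(g) * noise

        x, radius = rng.uniform(-2, 2, 10), 1.0
        for _ in range(100):
            g = noisy_grad(x)
            step = -radius * g / np.linalg.norm(g)        # Cauchy-type step
            predicted = -(g @ step)                       # decrease predicted by linear model
            actual = f(x) - f(x + step)
            rho = actual / predicted if predicted > 0 else -1.0
            if rho > 0.1:                                 # accept the step
                x = x + step
            # Expand or shrink the trust region based on the agreement ratio
            radius = 2.0 * radius if rho > 0.75 else (0.5 * radius if rho < 0.25 else radius)

        print("final |x| =", np.linalg.norm(x), " f(x) =", f(x))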

  20. Nonlinear dynamics optimization with particle swarm and genetic algorithms for SPEAR3 emittance upgrade

    NASA Astrophysics Data System (ADS)

    Huang, Xiaobiao; Safranek, James

    2014-09-01

    Nonlinear dynamics optimization is carried out for a low emittance upgrade lattice of SPEAR3 in order to improve its dynamic aperture and Touschek lifetime. Two multi-objective optimization algorithms, a genetic algorithm and a particle swarm algorithm, are used for this study. The performance of the two algorithms is compared. The results show that the particle swarm algorithm converges significantly faster to similar or better solutions than the genetic algorithm, and it does not require seeding of good solutions in the initial population. These advantages of the particle swarm algorithm may make it more suitable for many accelerator optimization applications.

  1. Adaptive optimal control of highly dissipative nonlinear spatially distributed processes with neuro-dynamic programming.

    PubMed

    Luo, Biao; Wu, Huai-Ning; Li, Han-Xiong

    2015-04-01

    Highly dissipative nonlinear partial differential equations (PDEs) are widely employed to describe the system dynamics of industrial spatially distributed processes (SDPs). In this paper, we consider the optimal control problem of general highly dissipative SDPs, and propose an adaptive optimal control approach based on neuro-dynamic programming (NDP). Initially, Karhunen-Loève decomposition is employed to compute empirical eigenfunctions (EEFs) of the SDP based on the method of snapshots. These EEFs together with a singular perturbation technique are then used to obtain a finite-dimensional slow subsystem of ordinary differential equations that accurately describes the dominant dynamics of the PDE system. Subsequently, the optimal control problem is reformulated on the basis of the slow subsystem, which is further converted to solving a Hamilton-Jacobi-Bellman (HJB) equation. The HJB equation is a nonlinear PDE that generally cannot be solved analytically. Thus, an adaptive optimal control method is developed via NDP that solves the HJB equation online using a neural network (NN) to approximate the value function, and an online NN weight tuning law is proposed without requiring an initial stabilizing control policy. Moreover, by involving the NN estimation error, we prove that the original closed-loop PDE system with the adaptive optimal control policy is semiglobally uniformly ultimately bounded. Finally, the developed method is tested on a nonlinear diffusion-convection-reaction process and applied to a temperature cooling fin of a high-speed aerospace vehicle, and the achieved results show its effectiveness. PMID:25794375
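
    In schematic form (the generic infinite-horizon formulation from the ADP literature, not necessarily the paper's exact notation), the HJB equation that the NN value function approximates is

        0 = \min_{u} \Big[ Q(x) + u^{\top} R\, u + \nabla V(x)^{\top} \big( f(x) + g(x)\, u \big) \Big],
        \qquad u^{*}(x) = -\tfrac{1}{2} R^{-1} g(x)^{\top} \nabla V(x),

    for dynamics \dot{x} = f(x) + g(x) u and a state cost Q(x) \ge 0.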

  2. Development and optimization of a nonlinear multiparameter model for the human operator

    NASA Technical Reports Server (NTRS)

    Johannsen, G.

    1972-01-01

    A systematic method is proposed for the development, optimization, and comparison of controller models for the human operator. It is suitable for any designed model, even multiparameter systems. A random search technique is chosen for the parameter optimization. As criteria for the quality of the model development, the criterion function (the comparison between the input and output functions of the human operator and those of the model) and the most important characteristic values and functions of statistical signal theory are used. A nonlinear multiparameter model for the human operator is designed which considers the complex input information rate per unit time in a single display. The nonlinear features of the model are effected by a modified threshold element and a decision algorithm. Different display configurations as well as various transfer functions of the controlled element are accounted for by different optimized parameter combinations.

  3. Reinforcement learning for adaptive optimal control of unknown continuous-time nonlinear systems with input constraints

    NASA Astrophysics Data System (ADS)

    Yang, Xiong; Liu, Derong; Wang, Ding

    2014-03-01

    In this paper, an adaptive reinforcement learning-based solution is developed for the infinite-horizon optimal control problem of constrained-input continuous-time nonlinear systems in the presence of nonlinearities with unknown structures. Two different types of neural networks (NNs) are employed to approximate the Hamilton-Jacobi-Bellman equation. That is, a recurrent NN is constructed to identify the unknown dynamical system, and two feedforward NNs are used as the actor and the critic to approximate the optimal control and the optimal cost, respectively. Based on this framework, the action NN and the critic NN are tuned simultaneously, without requiring knowledge of the system drift dynamics. Moreover, by using Lyapunov's direct method, the weights of the action NN and the critic NN are guaranteed to be uniformly ultimately bounded, while keeping the closed-loop system stable. To demonstrate the effectiveness of the present approach, simulation results are illustrated.

  4. Neural-network-observer-based optimal control for unknown nonlinear systems using adaptive dynamic programming

    NASA Astrophysics Data System (ADS)

    Liu, Derong; Huang, Yuzhu; Wang, Ding; Wei, Qinglai

    2013-09-01

    In this paper, an observer-based optimal control scheme is developed for unknown nonlinear systems using an adaptive dynamic programming (ADP) algorithm. First, a neural-network (NN) observer is designed to estimate the system states. Then, based on the observed states, a neuro-controller is constructed via the ADP method to obtain the optimal control. In this design, two NN structures are used: a three-layer NN is used to construct the observer, which can be applied to systems with higher degrees of nonlinearity and without a priori knowledge of system dynamics, and a critic NN is employed to approximate the value function. The optimal control law is computed using the critic NN and the observer NN. Uniform ultimate boundedness of the closed-loop system is guaranteed. The actor, critic, and observer structures are all implemented in real time, continuously and simultaneously. Finally, simulation results are presented to demonstrate the effectiveness of the proposed control scheme.

  5. Nonlinear dynamical systems of fed-batch fermentation and their optimal control

    NASA Astrophysics Data System (ADS)

    Liu, Chongyang; Gong, Zhaohua; Feng, Enmin; Yin, Hongchao

    2012-05-01

    In this article, we propose a controlled nonlinear dynamical system with variable switching instants, in which the feeding rate of glycerol is regarded as the control function and the switching moments between the batch and feeding processes as the switching instants, to formulate the fed-batch fermentation of glycerol bioconversion to 1,3-propanediol (1,3-PD). Some important properties of the proposed system and its solution are then discussed. Taking the concentration of 1,3-PD at the terminal time as the cost functional, we establish an optimal control model involving the controlled nonlinear dynamical system and subject to continuous state inequality constraints. The existence of the optimal control is also proved. A computational approach is constructed on the basis of constraint transcription and smoothing approximation techniques. Numerical results show that, by employing the optimal control strategy, the concentration of 1,3-PD at the terminal time can be increased considerably.

  6. Evolution of optimal Hill coefficients in nonlinear public goods games.

    PubMed

    Archetti, Marco; Scheuring, István

    2016-10-01

    In evolutionary game theory, the effect of public goods like diffusible molecules has been modelled using linear, concave, sigmoid and step functions. The observation that biological systems often have sigmoid input-output functions, as described by the Hill equation, suggests that a sigmoid function is more realistic. The Michaelis-Menten model of enzyme kinetics, however, predicts a concave function, and while mechanistic explanations of sigmoid kinetics exist, we lack an adaptive explanation: what is the evolutionary advantage of a sigmoid benefit function? We analyse public goods games in which the shape of the benefit function can evolve, in order to determine the optimal and evolutionarily stable Hill coefficients. We find that, while the dynamics depend on whether output is controlled at the level of the individual or the population, intermediate or high Hill coefficients often evolve, leading to sigmoid input-output functions that for some parameters are so steep as to resemble a step function (an on-off switch). Our results suggest that, even when the shape of the benefit function is unknown, biological public goods should be modelled using a sigmoid or step function rather than a linear or concave function. PMID:27343626
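
    For reference, the Hill benefit function discussed here has the standard form (h is the Hill coefficient and k the half-saturation input; the limiting cases are well-known properties of the equation, not results of the paper):

        b(x) = \frac{x^{h}}{k^{h} + x^{h}}, \qquad
        h = 1:\ \text{Michaelis-Menten-like (concave)}, \qquad
        h \to \infty:\ \text{step function (on-off switch)}.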

  7. Needs, opportunities, and options for large scale systems research

    SciTech Connect

    Thompson, G.L.

    1984-10-01

    The Office of Energy Research was recently asked to perform a study of large-scale systems in order to facilitate the development of a true large systems theory. It was decided to ask experts in the fields of electrical engineering, chemical engineering and manufacturing/operations research for their ideas concerning large-scale systems research. The author was asked to distribute a questionnaire among these experts to find out their opinions concerning recent accomplishments and future research directions in large-scale systems research. He was also requested to convene a conference, which included three experts in each area as panel members, to discuss the general area of large-scale systems research. The conference was held March 26-27, 1984, in Pittsburgh with nine panel members and 15 other attendees. The present report is a summary of the ideas presented and the recommendations proposed by the attendees.

  8. Solution algorithms for non-linear singularly perturbed optimal control problems

    NASA Technical Reports Server (NTRS)

    Ardema, M. D.

    1983-01-01

    The applicability and usefulness of several classical and other methods for solving the two-point boundary-value problem which arises in non-linear singularly perturbed optimal control are assessed. Specific algorithms of the Picard, Newton and averaging types are formally developed for this class of problem. The computational requirements associated with each algorithm are analysed and compared with the computational requirement of the method of matched asymptotic expansions. Approximate solutions to a linear and a non-linear problem are obtained by each method and compared.

  9. Comparing Linear and Nonlinear Methods for More Reliable Predictive Uncertainty Quantification and Optimal Design of Experiments

    NASA Astrophysics Data System (ADS)

    Wöhling, T.; Geiges, A.; Gosses, M.; Nowak, W.

    2014-12-01

    Data acquisition in complex environmental systems is typically expensive. Therefore, experimental designs should be optimized such that the most can be learned about the system at the least cost. In the past, optimal design (OD) analyses were mainly restricted to linear or linearized problems and methods. Nonlinear OD methods offer more efficient data collection strategies, because they can better handle the non-linearity exhibited by most coupled environmental systems. However, their much higher computational demand restricts their applicability to models with comparatively low run-times. Our goal is to compare the trade-off between computational efficiency and obtainable design quality for linear and nonlinear OD methods. In our study, a steady-state model for a section of the river Steinlach (South Germany) was set up and calibrated to measured groundwater head data and to estimated groundwater exchange fluxes. The model involves a Pilot Point parameterization scheme for hydraulic conductivity and six zones with uncertain river bed conductivities. In the linear OD approach, the initial predictive uncertainty of groundwater exchange fluxes and mean travel times is estimated using the PREDUNC utility (Moore and Doherty 2005) of PEST. The parameter calibration was performed with a non-linear global search. A discrete global search method and PREDUNC were then utilized to identify augmented monitoring strategies (n additional measurement locations and data types) that reduce the predictive uncertainty the most. For the nonlinear assessment, a conditional ensemble obtained with Markov-chain Monte Carlo represents the initial state of uncertainty and is used as input to a nonlinear OD framework called PreDIA (Leube et al. 2012). PreDIA can consider any kind of uncertainty and non-linear (statistical) dependencies in data, models, parameters and system drivers during the OD process. The linear and non-linear approaches are compared thoroughly during each step of the

  10. The three-point function as a probe of models for large-scale structure

    NASA Technical Reports Server (NTRS)

    Frieman, Joshua A.; Gaztanaga, Enrique

    1993-01-01

    The consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime are analyzed. Several variations of the standard Ω = 1 cold dark matter model with scale-invariant primordial perturbations were recently introduced to obtain more power on large scales, Rp ≈ 20 h-1 Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. It is shown that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale-dependence leads to a dramatic decrease of the hierarchical amplitudes QJ at large scales, r ≳ Rp. Current observational constraints on the three-point amplitudes Q3 and S3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.
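
    For context, the hierarchical three-point amplitude referred to here is conventionally defined from the three-point function ζ and the two-point function ξ as (the standard definition, not specific to this paper)

        Q_3 = \frac{\zeta(r_{12}, r_{23}, r_{31})}{\xi(r_{12})\,\xi(r_{23}) + \xi(r_{23})\,\xi(r_{31}) + \xi(r_{31})\,\xi(r_{12})}.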

  11. The three-point function as a probe of models for large-scale structure

    SciTech Connect

    Frieman, J.A.; Gaztanaga, E.

    1993-06-19

    The authors analyze the consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime. Several variations of the standard Ω = 1 cold dark matter model with scale-invariant primordial perturbations have recently been introduced to obtain more power on large scales, Rp ≈ 20 h-1 Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. The authors show that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale-dependence leads to a dramatic decrease of the hierarchical amplitudes QJ at large scales, r ≳ Rp. Current observational constraints on the three-point amplitudes Q3 and S3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.

  12. Large scale suppression of scalar power on a spatial condensation

    NASA Astrophysics Data System (ADS)

    Kouwn, Seyen; Kwon, O.-Kab; Oh, Phillial

    2015-03-01

    We consider a deformed single-field inflation model in terms of three SO(3) symmetric moduli fields. We find that spatially linear solutions for the moduli fields induce a phase transition during the early stage of inflation and a suppression of the scalar power spectrum at large scales. This suppression can be an origin of the anomalies observed in large-scale perturbation modes in cosmological observations.

  13. Large-scale motions in a plane wall jet

    NASA Astrophysics Data System (ADS)

    Gnanamanickam, Ebenezer; Latim, Jonathan; Bhatt, Shibani

    2015-11-01

    The dynamic significance of large-scale motions in turbulent boundary layers has been the focus of several recent studies, primarily focusing on canonical flows - zero pressure gradient boundary layers and flows within pipes and channels. This work presents an investigation into the large-scale motions in a boundary layer that is used as the prototypical flow field for flows with large-scale mixing and reactions, the plane wall jet. An experimental investigation is carried out in a plane wall jet facility designed to operate at friction Reynolds numbers Reτ > 1000, which allows for the development of a significant logarithmic region. The streamwise turbulent intensity across the boundary layer is decomposed into small-scale (less than one integral length-scale δ) and large-scale components. The small-scale energy has a peak in the near-wall region associated with the near-wall turbulent cycle, as in canonical boundary layers. However, the large-scale eddies are the dominant eddies, having significantly higher energy than the small scales across almost the entire boundary layer, even at the low to moderate Reynolds numbers under consideration. The large scales also appear to amplitude- and frequency-modulate the smaller scales across the entire boundary layer.

  14. Neural network approach to continuous-time direct adaptive optimal control for partially unknown nonlinear systems.

    PubMed

    Vrabie, Draguna; Lewis, Frank

    2009-04-01

    In this paper we present, in a continuous-time framework, an online approach to direct adaptive optimal control with infinite horizon cost for nonlinear systems. The algorithm converges online to the optimal control solution without knowledge of the internal system dynamics. Closed-loop dynamic stability is guaranteed throughout. The algorithm is based on a reinforcement learning scheme, namely policy iteration, and makes use of neural networks, in an actor/critic structure, to parametrically represent the control policy and the performance of the control system. The two neural networks are trained to express the optimal controller and the optimal cost function which describes the infinite horizon control performance. Convergence of the algorithm is proven under the realistic assumption that the two neural networks do not provide perfect representations for the nonlinear control and cost functions. The result is a hybrid control structure which involves a continuous-time controller and a supervisory adaptation structure which operates based on data sampled from the plant and from the continuous-time performance dynamics. Such a control structure is unlike any standard form of controller previously seen in the literature. Simulation results, obtained considering two second-order nonlinear systems, are provided. PMID:19362449

  15. Incorporation of Fixed Installation Costs into Optimization of Groundwater Remediation with a New Efficient Surrogate Nonlinear Mixed Integer Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Shoemaker, Christine; Wan, Ying

    2016-04-01

    Optimization of nonlinear water resources management problems that have a mixture of fixed (e.g., construction cost for a well) and variable (e.g., cost per gallon of water pumped) costs has not been well addressed, because prior algorithms for the resulting nonlinear mixed-integer problems have required many groundwater simulations (with different configurations of the decision variables), especially when the solution space is multimodal. In particular, heuristic methods like genetic algorithms have often been used in the water resources area, but they require so many groundwater simulations that only small systems have been solved. Hence there is a need for a method that reduces the number of expensive groundwater simulations. A recently published algorithm for nonlinear mixed-integer programming using surrogates was shown in this study to greatly reduce the computational effort for obtaining accurate answers to problems involving fixed costs for well construction as well as variable costs for pumping, because of a substantial reduction in the number of groundwater simulations required. Results are presented for a US EPA hazardous waste site. The nonlinear mixed-integer surrogate algorithm is general and can be used on other problems arising in hydrology, with open-source codes in Matlab and Python ("pySOT" in Bitbucket).
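
    A bare-bones sketch of the surrogate idea: an RBF surrogate over mixed integer/continuous decisions, with a cheap synthetic stand-in for the groundwater simulator. This is a generic illustration of surrogate-assisted search, not the pySOT API.

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        rng = np.random.default_rng(4)

        def simulator(x):
            """Stand-in for an expensive groundwater simulation: x[0] is the (integer)
            number of wells, x[1:] are pumping rates; cost = fixed + variable + penalty."""
            n_wells = int(round(x[0]))
            fixed = 50.0 * n_wells                       # fixed installation cost
            variable = 5.0 * np.sum(x[1:] ** 2)          # variable pumping cost
            shortfall = max(0.0, 10.0 - n_wells * np.sum(np.abs(x[1:])))
            return fixed + variable + 100.0 * shortfall  # penalty for unmet cleanup target

        # Initial design: random mixed-integer samples, then the expensive evaluations
        X = np.column_stack([rng.integers(1, 5, 20).astype(float),
                             rng.uniform(0, 3, (20, 2))])
        y = np.array([simulator(x) for x in X])

        for _ in range(30):                              # surrogate-assisted iterations
            surrogate = RBFInterpolator(X, y)            # cheap approximation of the simulator
            C = np.column_stack([rng.integers(1, 5, 500).astype(float),
                                 rng.uniform(0, 3, (500, 2))])
            best = C[np.argmin(surrogate(C))].copy()     # many cheap surrogate evaluations
            best[1:] += 1e-6 * rng.normal(size=2)        # avoid exact duplicate points
            X = np.vstack([X, best])
            y = np.append(y, simulator(best))            # one expensive call per iteration

        i = int(np.argmin(y))
        print(f"best cost {y[i]:.2f}: {int(round(X[i, 0]))} wells, rates {X[i, 1:]}")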

  16. LARGE-SCALE STRUCTURE OF THE UNIVERSE AS A COSMIC STANDARD RULER

    SciTech Connect

    Park, Changbom; Kim, Young-Rae

    2010-06-01

    We propose to use the large-scale structure (LSS) of the universe as a cosmic standard ruler. This is possible because the pattern of large-scale distribution of matter is scale-dependent and does not change in comoving space during the linear-regime evolution of structure. By examining the pattern of LSS in several redshift intervals it is possible to reconstruct the expansion history of the universe, and thus to measure the cosmological parameters governing the expansion of the universe. The features of the large-scale matter distribution that can be used as standard rulers include the topology of LSS and the overall shapes of the power spectrum and correlation function. The genus, being an intrinsic topology measure, is insensitive to systematic effects such as the nonlinear gravitational evolution, galaxy biasing, and redshift-space distortion, and thus is an ideal cosmic ruler when galaxies in redshift space are used to trace the initial matter distribution. The genus remains unchanged as far as the rank order of density is conserved, which is true for linear and weakly nonlinear gravitational evolution, monotonic galaxy biasing, and mild redshift-space distortions. The expansion history of the universe can be constrained by comparing the theoretically predicted genus corresponding to an adopted set of cosmological parameters with the observed genus measured by using the redshift-comoving distance relation of the same cosmological model.
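
    For reference, the genus curve used in such comparisons has, for a Gaussian random field, the well-known analytic form

        g(\nu) = A\, (1 - \nu^{2})\, e^{-\nu^{2}/2},

    where ν is the density threshold in units of the standard deviation and the amplitude A is fixed by the power spectrum of the smoothed field; departures from this shape diagnose non-Gaussianity.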

  17. Collocation with nonlinear programming for two-sided flight path optimization

    NASA Astrophysics Data System (ADS)

    Horie, Kazuhiro

    This research successfully develops a new numerical method for the problem of two-sided flight path optimization, that is, a method capable of finding trajectories satisfying the necessary conditions of an open-loop representation of a saddle-point trajectory. The method of direct collocation with nonlinear programming is extended to find the solution of a zero-sum two-person differential game by incorporating the analytical optimality condition for one player into the system equations. The new method is named semi-direct collocation with nonlinear programming (semi-DCNLP). We apply the new method to a variety of problems of increasing complexity: the dolichobrachistochrone, a problem of ballistic interception, the homicidal chauffeur problem, and minimum-time spacecraft interception of an optimally evasive target, and thus verify that the method is capable of identifying saddle-point trajectories. While the method is quite robust, ambitious problems require a reasonable initial guess of the discretized solution from which the optimizer may converge. A method for generating a good initial guess, requiring no a priori information about the solution, is developed using genetic algorithms. The semi-DCNLP, in combination with the genetic-algorithm-based preprocessor, is then used to solve a very complicated pursuit-evasion problem: optimal air combat for realistic fighter aircraft models in three dimensions. Characteristics of the optimal air combat maneuvers for both aircraft are identified for many different initial conditions.

  18. Coefficient of performance under optimized figure of merit in minimally nonlinear irreversible refrigerator

    NASA Astrophysics Data System (ADS)

    Izumida, Y.; Okuda, K.; Calvo Hernández, A.; Roco, J. M. M.

    2013-01-01

    We apply the model of minimally nonlinear irreversible heat engines developed by Izumida and Okuda (EPL, 97 (2012) 10004) to refrigerators. The model assumes extended Onsager relations including a new nonlinear term accounting for dissipation effects. The bounds for the optimized regime under an appropriate figure of merit and the tight-coupling condition are analyzed and successfully compared with those obtained previously for low-dissipation Carnot refrigerators in the finite-time thermodynamics framework. We also study the bounds for the non-tight-coupling case numerically. Finally, we introduce a leaky low-dissipation Carnot refrigerator and show that it serves as an example of the minimally nonlinear irreversible refrigerator by calculating its Onsager coefficients explicitly.
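
    Schematically, the extended Onsager relations of the minimally nonlinear model take the form (following Izumida and Okuda's construction, with notation simplified here; the sign and reservoir index of the dissipation term depend on the device)

        J_1 = L_{11} X_1 + L_{12} X_2, \qquad
        J_2 = L_{21} X_1 + L_{22} X_2 - \gamma\, J_1^{2},

    where the nonlinear term -\gamma J_1^{2} is the new contribution accounting for dissipation.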

  19. Survey Design for Large-Scale, Unstructured Resistivity Surveys

    NASA Astrophysics Data System (ADS)

    Labrecque, D. J.; Casale, D.

    2009-12-01

    In this paper, we discuss the issues in designing data collection strategies for large-scale, poorly structured resistivity surveys. Existing or proposed applications for these types of surveys include carbon sequestration, enhanced oil recovery monitoring, monitoring of leachate from working or abandoned mines, and mineral surveys. Electrode locations are generally constrained by land access, utilities, roads, existing wells, etc. Classical arrays such as the Wenner array or dipole-dipole arrays are not applicable if the electrodes cannot be placed in quasi-regular lines or grids. A new, far more generalized strategy is needed for building data collection schemes. Following the approach of earlier two-dimensional (2-D) survey designs, the proposed method begins by defining a base array. In 2-D design, this base array is often a standard dipole-dipole array. For unstructured three-dimensional (3-D) design, determining this base array is a multi-step process. The first step is to determine a set of base dipoles with similar characteristics. For example, the base dipoles may consist of electrode pairs trending within 30 degrees of north and between 100 and 250 m in length. These dipoles are then combined into a trial set of arrays. This trial set of arrays is reduced by applying a series of filters based on criteria such as the separation between the dipoles. Using the base array set, additional arrays are added and tested to determine the overall improvement in resolution and to determine an optimal set of arrays. Examples of the design process are shown for a proposed carbon sequestration monitoring system.
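
    A small sketch of the dipole-filtering step; the 30-degree bearing window and 100-250 m length band come from the example above, while the electrode layout and the separation filter are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(5)
        electrodes = rng.uniform(0, 2000, (40, 2))   # scattered (x, y) positions in metres

        def bearing_deg(p, q):
            dx, dy = q[0] - p[0], q[1] - p[1]
            return np.degrees(np.arctan2(dx, dy)) % 180.0   # axis bearing, 0 = north

        base_dipoles = []
        for i in range(len(electrodes)):
            for j in range(i + 1, len(electrodes)):
                p, q = electrodes[i], electrodes[j]
                length = np.hypot(*(q - p))
                trend = bearing_deg(p, q)
                # Keep dipoles trending within 30 degrees of north, 100-250 m long
                if (trend <= 30 or trend >= 150) and 100 <= length <= 250:
                    base_dipoles.append((i, j))

        # Combine base dipoles into trial four-electrode arrays, filtered by separation
        arrays = []
        for a, (i, j) in enumerate(base_dipoles):
            for (k, l) in base_dipoles[a + 1:]:
                if len({i, j, k, l}) < 4:            # arrays must use distinct electrodes
                    continue
                sep = np.hypot(*(electrodes[[i, j]].mean(0) - electrodes[[k, l]].mean(0)))
                if 100 <= sep <= 1000:               # separation filter (illustrative)
                    arrays.append((i, j, k, l))

        print(len(base_dipoles), "base dipoles ->", len(arrays), "trial arrays")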

  1. An optimal approach to active damping of nonlinear vibrations in composite plates using piezoelectric patches

    NASA Astrophysics Data System (ADS)

    Saviz, M. R.

    2015-11-01

    In this paper a nonlinear approach to studying the vibration characteristics of a laminated composite plate with a surface-bonded piezoelectric layer/patch is formulated, based on Green-Lagrange strain-displacement relations, incorporating the higher-order terms arising from the nonlinear kinematic relations into the mathematical formulation. The equations of motion are obtained through the energy method, based on Lagrange equations and using higher-order shear deformation theories with von Karman-type nonlinearities, so that transverse shear strains vanish at the top and bottom surfaces of the plate. An isoparametric finite element model is provided to model the nonlinear dynamics of the smart plate with the piezoelectric layer/patch. Different boundary conditions are investigated. Optimal locations of piezoelectric patches are found using a genetic algorithm to maximize spatial controllability/observability, considering the effect of residual modes to reduce the spillover effect. Active attenuation of vibration of the laminated composite plate is achieved through an optimal control law with an inequality constraint related to the maximum and minimum values of the allowable voltage in the piezoelectric elements. To keep the voltages of actuator pairs within an allowable limit, Pontryagin's minimum principle is implemented in a system with multiple inequality constraints on the control inputs. The results are compared with similar ones in the literature, proving the accuracy of the model, especially for structures undergoing large deformations. The convergence is studied and nonlinear frequencies are obtained for different thickness ratios. The structural coupling between the plate and the piezoelectric actuators is analyzed. Some examples with new features are presented, indicating that the piezo-patches significantly improve the damping characteristics of the plate for suppressing geometrically nonlinear transient vibrations.

  2. The large-scale landslide risk classification in catchment scale

    NASA Astrophysics Data System (ADS)

    Liu, Che-Hsin; Wu, Tingyeh; Chen, Lien-Kuang; Lin, Sheng-Chi

    2013-04-01

    Landslide disasters caused heavy casualties during Typhoon Morakot in 2009. This disaster is classified as a large-scale landslide event due to the casualty numbers. The event also showed that surveys of large-scale landslide potential are so far insufficient, and thus significant. Large-scale landslide potential analysis provides information about where attention should be focused, even though such areas are very difficult to distinguish. Accordingly, the authors investigate the methods used in different countries, such as Hong Kong, Italy, Japan and Switzerland, to clarify the assessment methodology. The objects include places with susceptibility to rock slides and dip slopes, and the major landslide areas defined from historical records. Three scale levels, from country down to slopeland, are found to be necessary: basin, catchment, and slope scales. In total, ten spots were classified with high large-scale landslide potential at the basin scale. The authors therefore focus on the catchment scale and employ a risk matrix to classify the potential in this paper. The protected objects and the large-scale landslide susceptibility ratio are the two main indexes used to classify large-scale landslide risk. The protected objects are constructions and transportation facilities. The large-scale landslide susceptibility ratio is based on data for major landslide areas and for dip slope and rock slide areas. In total, 1,040 catchments are considered and classified into three levels: high, medium, and low. The proportions of the high, medium, and low levels are 11%, 51%, and 38%, respectively. This result identifies the catchments with a high proportion of protected objects or large-scale landslide susceptibility. The conclusions can serve as base material for the slopeland authorities when considering slopeland management and further investigation.
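
    A toy sketch of the risk-matrix classification described above; the class boundaries, scores and matrix entries are invented placeholders, since the paper's thresholds are not given here.

        import numpy as np

        # Hypothetical thresholds: three bins for each of the two indexes
        def bin3(value, low, high):
            return 0 if value < low else (1 if value < high else 2)

        # Risk matrix: rows = protected-object score, cols = susceptibility score
        RISK = np.array([["low",    "low",    "medium"],
                         ["low",    "medium", "high"],
                         ["medium", "high",   "high"]])

        def classify(protected_objects, susceptibility_ratio):
            r = bin3(protected_objects, 10, 50)         # counts of facilities (illustrative)
            c = bin3(susceptibility_ratio, 0.05, 0.20)  # fraction of susceptible area
            return RISK[r, c]

        rng = np.random.default_rng(6)
        catchments = [(rng.integers(0, 120), rng.uniform(0, 0.4)) for _ in range(1040)]
        levels = [classify(p, s) for p, s in catchments]
        for lvl in ("high", "medium", "low"):
            print(lvl, f"{levels.count(lvl) / len(levels):.0%}")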

  3. Report of the Workshop on Petascale Systems Integration for LargeScale Facilities

    SciTech Connect

    Kramer, William T.C.; Walter, Howard; New, Gary; Engle, Tom; Pennington, Rob; Comes, Brad; Bland, Buddy; Tomlison, Bob; Kasdorf, Jim; Skinner, David; Regimbal, Kevin

    2007-10-01

    There are significant issues regarding large-scale system integration that are not being addressed in other forums such as current research portfolios or vendor user groups. Unfortunately, the issues in the area of large-scale system integration often fall into a netherworld: not research, not facilities, not procurement, not operations, not user services. Taken together, these issues, along with the impact of sub-optimal integration technology, mean that the time required to deploy, integrate and stabilize large-scale systems may consume up to 20 percent of the useful life of such systems. Improving the state of the art for large-scale systems integration has the potential to increase the scientific productivity of these systems. Sites have significant expertise, but there are no easy ways to leverage this expertise among them. Many issues inhibit the sharing of information, including available time and effort, as well as issues with sharing proprietary information. Vendors also benefit in the long run from the solutions to issues detected during site testing and integration. There is a great deal of enthusiasm for making large-scale system integration a full-fledged partner, along with the other major thrusts supported by funding agencies, in the definition, design, and use of petascale systems. Integration technology and issues should have a full 'seat at the table' as petascale and exascale initiatives and programs are planned. The workshop attendees identified a wide range of issues and suggested paths forward. Pursuing these with funding opportunities and innovation offers the opportunity to dramatically improve the state of large-scale system integration.

  4. Development of linear and nonlinear hand-arm vibration models using optimization and linearization techniques.

    PubMed

    Rakheja, S; Gurram, R; Gouw, G J

    1993-10-01

    Hand-arm vibration (HAV) models serve as an effective tool to assess the vibration characteristics of the hand-tool system and to evaluate the attenuation performance of vibration isolation mechanisms. This paper describes a methodology to identify the parameters of HAV models, whether linear or nonlinear, using mechanical impedance data and a nonlinear-programming-based optimization technique. Three- and four-degrees-of-freedom (DOF) linear, piecewise-linear and nonlinear HAV models are formulated and analyzed to yield impedance characteristics in the 5-1000 Hz frequency range. A local equivalent linearization algorithm, based upon the principle of energy similarity, is implemented to simulate the nonlinear HAV models. Optimization methods are employed to identify the model parameters such that the magnitude and phase errors between the computed and measured impedance characteristics are minimized over the entire frequency range. The effectiveness of the proposed method is demonstrated through derivations of models that correlate with the measured X-axis impedance characteristics of the hand-arm system proposed by ISO. The results of the study show that a linear model cannot predict the impedance characteristics over the entire frequency range, while a piecewise-linear model yields an accurate estimation. PMID:8253830

  5. Optimal nonlinear excitation of decadal variability of the North Atlantic thermohaline circulation

    NASA Astrophysics Data System (ADS)

    Zu, Ziqing; Mu, Mu; Dijkstra, Henk A.

    2013-11-01

    Nonlinear development of salinity perturbations in the Atlantic thermohaline circulation (THC) is investigated with a three-dimensional ocean circulation model, using the conditional nonlinear optimal perturbation method. The results show two types of optimal initial perturbations of sea surface salinity, one associated with freshwater and the other with salinity. Both types of perturbations excite decadal variability of the THC. Under the same amplitude of initial perturbation, the decadal variation induced by the freshwater perturbation is much stronger than that by the salinity perturbation, suggesting that the THC is more sensitive to freshwater than salinity perturbation. As the amplitude of initial perturbation increases, the decadal variations become stronger for both perturbations. For salinity perturbations, recovery time of the THC to return to steady state gradually saturates with increasing amplitude, whereas this recovery time increases remarkably for freshwater perturbations. A nonlinear (advective) feedback between density and velocity anomalies is proposed to explain these characteristics of decadal variability excitation. The results are consistent with previous ones from simple box models, and highlight the importance of nonlinear feedback in decadal THC variability.

  6. Generation and saturation of large-scale flows in flute turbulence

    SciTech Connect

    Sandberg, I.; Isliker, H.; Pavlenko, V. P.; Hizanidis, K.; Vlahos, L.

    2005-03-01

    The excitation and suppression of large-scale anisotropic modes during the temporal evolution of a magnetic-curvature-driven electrostatic flute instability are numerically investigated. The formation of streamerlike structures is attributed to the linear development of the instability while the subsequent excitation of the zonal modes is the result of the nonlinear coupling between linearly grown flute modes. When the amplitudes of the zonal modes become of the same order as that of the streamer modes, the flute instabilities get suppressed and poloidal (zonal) flows dominate. In the saturated state that follows, the dominant large-scale modes of the potential and the density are self-organized in different ways, depending on the value of the ion temperature.

  7. The Conversion of Large-Scale Turbulent Energy to Plasma Heat In Astrophysical Plasmas

    NASA Astrophysics Data System (ADS)

    Howes, Gregory

    2015-11-01

    Turbulence in space and astrophysical plasmas plays a key role in the conversion of the energy of violent events and instabilities at large scales into plasma heat. The turbulent cascade transfers this energy from the large scales at which the motions are driven down to small scales, and this essentially fluid process can be understood in terms of nonlinear wave-wave interactions. At sufficiently small scales, for which the dynamics is often weakly collisional, collisionless mechanisms damp the turbulent electromagnetic fluctuations, and this essentially kinetic process can be understood in terms of linear wave-particle interactions. In this talk, I will summarize the possible channels of the turbulent dissipation in a weakly collisional plasma, and present recent results from kinetic numerical simulations of plasma turbulence. Finally, I will discuss strategies for the definitive identification of the dominant dissipation channels using spacecraft measurements of turbulence in the solar wind.

  8. Optimization by nonhierarchical asynchronous decomposition

    NASA Technical Reports Server (NTRS)

    Shankar, Jayashree; Ribbens, Calvin J.; Haftka, Raphael T.; Watson, Layne T.

    1992-01-01

    Large scale optimization problems are tractable only if they are somehow decomposed. Hierarchical decompositions are inappropriate for some types of problems and do not parallelize well. Sobieszczanski-Sobieski has proposed a nonhierarchical decomposition strategy for nonlinear constrained optimization that is naturally parallel. Despite some successes on engineering problems, the algorithm as originally proposed fails on simple two-dimensional quadratic programs. The algorithm is carefully analyzed for quadratic programs, and a number of modifications are suggested to improve its robustness.

  9. Nonlinear RANSAC Optimization for Parameter Estimation with Applications to Phagocyte Transmigration.

    PubMed

    Kang, Mingon; Gao, Jean; Tang, Liping

    2011-01-01

    Developing rigorous mathematical equations and estimating accurate parameters within feasible computational time are two indispensable parts of building reliable system models for representing biological properties of the system and for producing reliable simulations. For a complex biological system with limited observations, one of the daunting tasks is the large number of unknown parameters in the mathematical modeling, whose values directly determine the performance of the computational modeling. To tackle this problem, we have developed a data-driven global optimization method, nonlinear RANSAC, based on the RANdom SAmple Consensus (RANSAC) method, for parameter estimation of nonlinear system models. The conventional RANSAC method is sound and simple, but it is oriented toward linear system models. We not only adopt the strengths of RANSAC, but also extend the method to nonlinear systems with outstanding performance. As a specific application example, we have targeted understanding phagocyte transmigration, which is involved in the fibrosis process around biomedical device implants. With well-defined mathematical nonlinear equations of the system, nonlinear RANSAC is performed for the parameter estimation. In order to evaluate the general performance of the method, we also applied the method to signalling pathways with ordinary differential equations as a general format. PMID:23227455
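
    A compact sketch of RANSAC extended to a nonlinear model, here an exponential-decay model fitted with scipy's curve_fit; the model, noise levels and inlier threshold are illustrative choices, not those of the paper.

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(7)

        def model(t, a, k):
            return a * np.exp(-k * t)             # nonlinear system model (illustrative)

        # Synthetic data: mostly inliers plus 20 gross outliers
        t = np.linspace(0, 5, 100)
        y = model(t, 2.0, 0.8) + 0.05 * rng.normal(size=t.size)
        out = rng.choice(t.size, 20, replace=False)
        y[out] += rng.uniform(-2, 2, 20)

        best_inliers = 0
        for _ in range(200):                      # RANSAC trials
            pick = rng.choice(t.size, 5, replace=False)   # small random subset
            try:
                p, _ = curve_fit(model, t[pick], y[pick], p0=[1.0, 1.0], maxfev=2000)
            except RuntimeError:
                continue                          # nonlinear fit failed to converge
            inliers = np.abs(y - model(t, *p)) < 0.15     # consensus set
            if inliers.sum() > best_inliers:
                best_inliers, consensus = inliers.sum(), inliers

        # Refit on the largest consensus set
        best_params, _ = curve_fit(model, t[consensus], y[consensus], p0=[1.0, 1.0])
        print("a, k =", best_params, "| inliers:", best_inliers)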

  10. Genetic algorithms: An evolution from Monte Carlo Methods for strongly non-linear geophysical optimization problems

    NASA Astrophysics Data System (ADS)

    Gallagher, Kerry; Sambridge, Malcolm; Drijkoningen, Guy

    In providing a method for solving non-linear optimization problems, Monte Carlo techniques avoid the need for linearization but, in practice, are often prohibitive because of the large number of models that must be considered. A new class of methods known as genetic algorithms has recently been devised in the field of Artificial Intelligence. We outline the basic concept of genetic algorithms and discuss three examples. We show that, in locating an optimal model, the new technique is far superior in performance to Monte Carlo techniques in all cases considered. However, Monte Carlo integration is still regarded as an effective method for the subsequent model appraisal.
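
    A minimal genetic-algorithm sketch for a strongly nonlinear misfit surface; tournament selection, blend crossover and Gaussian mutation are generic operator choices, not those of the paper, and the objective is synthetic.

        import numpy as np

        rng = np.random.default_rng(8)

        def misfit(m):
            """Strongly nonlinear synthetic objective with many local minima."""
            return np.sum(m ** 2) + 10.0 * np.sum(1.0 - np.cos(2.0 * np.pi * m))

        pop = rng.uniform(-5, 5, (50, 4))                 # population of candidate models
        for gen in range(100):
            fit = np.array([misfit(m) for m in pop])
            children = []
            for _ in range(len(pop)):
                # Tournament selection of two parents
                i, j = rng.choice(len(pop), 2), rng.choice(len(pop), 2)
                p1 = pop[i[np.argmin(fit[i])]]
                p2 = pop[j[np.argmin(fit[j])]]
                w = rng.random(4)                          # blend crossover
                child = w * p1 + (1 - w) * p2
                child += rng.normal(0, 0.1, 4) * (rng.random() < 0.3)  # mutation
                children.append(child)
            pop = np.array(children)

        best = pop[np.argmin([misfit(m) for m in pop])]
        print("best model:", best, "misfit:", misfit(best))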

  11. On Managing the Use of Surrogates in General Nonlinear Optimization and MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.

    1998-01-01

    This paper is concerned with a trust region approximation management framework (AMF) for solving the nonlinear programming problem in general and multidisciplinary optimization problems in particular. The intent of the AMF methodology is to facilitate the solution of optimization problems with high-fidelity models. While such models are designed to approximate the physical phenomena they describe to a high degree of accuracy, their use in a repetitive procedure, for example, the iterations of an optimization or search algorithm, is prohibitively expensive. An improvement in design with lower-fidelity, cheaper models, however, does not guarantee a corresponding improvement for the higher-fidelity problem. The AMF methodology proposed here is based on a class of multilevel methods for constrained optimization and is designed to manage the use of variable-fidelity approximations or models in a systematic way that assures convergence to critical points of the original high-fidelity problem.

  12. Unsaturated Hydraulic Conductivity for Evaporation in Large scale Heterogeneous Soils

    NASA Astrophysics Data System (ADS)

    Sun, D.; Zhu, J.

    2014-12-01

    In this study we aim to provide practical guidelines on how the commonly used simple averaging schemes (arithmetic, geometric, or harmonic mean) perform in simulating large-scale evaporation in a heterogeneous landscape. Previous studies on hydraulic property upscaling, focusing on steady-state flux exchanges, illustrated that an effective hydraulic property is usually more difficult to define for evaporation. This study focuses on upscaling hydraulic properties for large-scale transient evaporation dynamics using the idea of the stream tube approach. Specifically, the two main objectives are to determine: (1) whether the three simple averaging schemes (i.e., arithmetic, geometric and harmonic means) of hydraulic parameters are appropriate for representing large-scale evaporation processes, and (2) how the applicability of these simple averaging schemes depends on the time scale of the evaporation processes in heterogeneous soils. Multiple realizations of local evaporation processes are carried out using the HYDRUS-1D computational code (Simunek et al., 1998). The three averaging schemes of soil hydraulic parameters are used to simulate the cumulative flux exchange, which is then compared with the large-scale average cumulative flux. The sensitivity of the relative errors to the time frame of the evaporation processes is also discussed.
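
    For reference, the three averaging schemes compared here reduce a field of local conductivities to a single effective value, as sketched below on a synthetic lognormal K field (the choice of distribution is illustrative; the bounding properties noted in the comments are classical results, not findings of this study).

        import numpy as np
        from scipy.stats import gmean, hmean

        rng = np.random.default_rng(9)
        K = rng.lognormal(mean=-2.0, sigma=1.0, size=1000)  # local hydraulic conductivities

        K_arith = np.mean(K)   # upper bound; exact for flow parallel to layering
        K_geom = gmean(K)      # classical effective value for 2-D lognormal media
        K_harm = hmean(K)      # lower bound; exact for flow across layering
        print(f"arithmetic {K_arith:.4f} >= geometric {K_geom:.4f} >= harmonic {K_harm:.4f}")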

  13. Finding Bayesian Optimal Designs for Nonlinear Models: A Semidefinite Programming-Based Approach

    PubMed Central

    Duarte, Belmiro P. M.; Wong, Weng Kee

    2014-01-01

    This paper uses semidefinite programming (SDP) to construct Bayesian optimal designs for nonlinear regression models. The setup here extends the formulation of the optimal design problem as an SDP problem from linear to nonlinear models. Gaussian quadrature formulas (GQF) are used to compute the expectation in the Bayesian design criterion, such as D-, A- or E-optimality. As an illustrative example, we demonstrate the approach using the power-logistic model and compare with results in the literature. Additionally, we investigate how the optimal design is impacted by different discretising schemes for the design space, different amounts of uncertainty in the parameter values, different choices of GQF and different prior distributions for the vector of model parameters, including normal priors with and without correlated components. Further applications to find Bayesian D-optimal designs with two regressors for a logistic model and a two-variable generalised linear model with a gamma distributed response are discussed, and some limitations of our approach are noted. PMID:26512159
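
    The quadrature step can be made concrete: the sketch below approximates a Bayesian D-optimality criterion E_b[log det M] for a two-parameter logistic model with a normal prior on the slope, using Gauss-Hermite nodes. The model, prior, and candidate design are illustrative assumptions; the SDP formulation itself is not reproduced.

      import numpy as np
      from numpy.polynomial.hermite import hermgauss

      def logdet_info(design, weights, a, b):
          # Fisher information of the logistic model p = 1/(1+exp(-(a+b*x))).
          M = np.zeros((2, 2))
          for x, lam in zip(design, weights):
              p = 1.0 / (1.0 + np.exp(-(a + b * x)))
              f = np.array([1.0, x])
              M += lam * p * (1.0 - p) * np.outer(f, f)
          return np.log(np.linalg.det(M))

      def bayesian_d_criterion(design, weights, mu=1.0, sigma=0.5, n_nodes=10):
          # E_b[log det M] for slope b ~ N(mu, sigma^2), by Gauss-Hermite quadrature.
          nodes, w = hermgauss(n_nodes)
          b_vals = mu + np.sqrt(2.0) * sigma * nodes
          vals = [logdet_info(design, weights, a=0.0, b=b) for b in b_vals]
          return (w @ np.array(vals)) / np.sqrt(np.pi)

      print(bayesian_d_criterion(design=[-2.0, 0.0, 2.0], weights=[1/3, 1/3, 1/3]))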

  14. Analysis and design of robust decentralized controllers for nonlinear systems

    SciTech Connect

    Schoenwald, D.A.

    1993-07-01

    Decentralized control strategies for nonlinear systems are achieved via feedback linearization techniques. New results on optimization and parameter robustness of nonlinear systems are also developed. In addition, parametric uncertainty in large-scale systems is handled by sensitivity analysis and optimal control methods in a completely decentralized framework. This idea is applied to alleviate uncertainty in friction parameters for the gimbal joints on Space Station Freedom. As an example of decentralized nonlinear control, singular perturbation methods and distributed vibration damping are merged into a control strategy for a two-link flexible manipulator.

  15. Novel method to construct large-scale design space in lubrication process utilizing Bayesian estimation based on a small-scale design-of-experiment and small sets of large-scale manufacturing data.

    PubMed

    Maeda, Jin; Suzuki, Tatsuya; Takayama, Kozo

    2012-12-01

    A large-scale design space was constructed using a Bayesian estimation method with a small-scale design of experiments (DoE) and small sets of large-scale manufacturing data without enforcing a large-scale DoE. The small-scale DoE was conducted using various Froude numbers (X(1)) and blending times (X(2)) in the lubricant blending process for theophylline tablets. The response surfaces, design space, and their reliability of the compression rate of the powder mixture (Y(1)), tablet hardness (Y(2)), and dissolution rate (Y(3)) on a small scale were calculated using multivariate spline interpolation, a bootstrap resampling technique, and self-organizing map clustering. The constant Froude number was applied as a scale-up rule. Three experiments under an optimal condition and two experiments under other conditions were performed on a large scale. The response surfaces on the small scale were corrected to those on a large scale by Bayesian estimation using the large-scale results. Large-scale experiments under three additional sets of conditions showed that the corrected design space was more reliable than that on the small scale, even if there was some discrepancy in the pharmaceutical quality between the manufacturing scales. This approach is useful for setting up a design space in pharmaceutical development when a DoE cannot be performed at a commercial large manufacturing scale. PMID:22356256
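
    The underlying correction step can be pictured as a conjugate normal update in which the small-scale response surface supplies the prior and the few large-scale batches supply the likelihood. The sketch below uses hypothetical hardness numbers at a single design point, not the paper's spline, bootstrap, and clustering machinery.

      import numpy as np

      def bayes_correct(prior_mean, prior_var, obs, obs_var):
          # Posterior of the large-scale response at one design point.
          post_var = 1.0 / (1.0 / prior_var + len(obs) / obs_var)
          post_mean = post_var * (prior_mean / prior_var + np.sum(obs) / obs_var)
          return post_mean, post_var

      # Small-scale DoE predicts tablet hardness 55 N at the optimum; three
      # large-scale batches measured 58, 59 and 57 N (all numbers hypothetical).
      print(bayes_correct(prior_mean=55.0, prior_var=9.0,
                          obs=[58.0, 59.0, 57.0], obs_var=4.0))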

  16. Astronomical optical frequency comb generation in nonlinear fibres and ring resonators: optimization studies

    NASA Astrophysics Data System (ADS)

    Chavez Boggio, J. M.; Fremberg, T.; Bodenmüller, D.; Wysmolek, M.; Sanyic, H.; Fernando, H.; Neumann, J.; Kracht, D.; Haynes, R.; Roth, M. M.

    2012-09-01

    Here we discuss recent progress on astronomical optical frequency comb generation at innoFSPEC-Potsdam. Two different platforms (and approaches) are numerically and experimentally investigated, targeting medium and low resolution spectrographs at astronomical facilities in which innoFSPEC is currently involved. In the first approach, a frequency comb is generated by propagating two lasers through three nonlinear stages - the first two stages serve for the generation of low-noise ultra-short pulses, while the final stage is a low-dispersion highly-nonlinear fibre where the pulses undergo strong spectral broadening. In our approach, the wavelength of one of the lasers can be tuned, allowing the comb line spacing to be continuously varied during the calibration procedure - this tuning capability is expected to improve the calibration accuracy since the CCD detector response can be fully scanned. The input power, the dispersion, the nonlinear coefficient, and the fibre lengths in the nonlinear stages are defined and optimized by solving the Generalized Nonlinear Schrödinger Equation. Experimentally, we generate a 250 GHz line-spacing frequency comb using two narrow linewidth lasers that are adiabatically compressed first in a standard fibre and then in a double-clad Er/Yb doped fibre. The spectral broadening finally takes place in a highly nonlinear fibre, resulting in an astro-comb with 250 calibration lines (covering a bandwidth of 500 nm) with good spectral equalization. In the second approach, we aim to generate optical frequency combs in dispersion-optimized silicon nitride ring resonators. A technique for lowering and flattening the chromatic dispersion in silicon nitride waveguides with silica cladding is proposed and demonstrated. By minimizing the waveguide dispersion in the resonator, two goals are targeted: enhancing the phase matching for nonlinear interactions and producing equally spaced resonances. For this purpose, instead of one cladding layer our design
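
    The optimization of the nonlinear stages rests on solving the Generalized Nonlinear Schrödinger Equation; a minimal split-step Fourier sketch for a simplified NLSE (dispersion plus Kerr term only, no Raman or self-steepening) is shown below. All fibre parameters and the two-laser input are illustrative assumptions, not the innoFSPEC values.

      import numpy as np

      def split_step_nlse(A, dt, beta2=-20e-27, gamma=0.01, L=1000.0, n_steps=2000):
          # Propagate A(t) over length L under dA/dz = -i*(beta2/2)*A_tt + i*gamma*|A|^2*A.
          h = L / n_steps
          omega = 2 * np.pi * np.fft.fftfreq(A.size, d=dt)
          lin = np.exp(1j * (beta2 / 2) * omega ** 2 * h)    # exact dispersion per step
          for _ in range(n_steps):
              A = np.fft.ifft(lin * np.fft.fft(A))           # dispersive step (freq. domain)
              A *= np.exp(1j * gamma * np.abs(A) ** 2 * h)   # Kerr nonlinearity step
          return A

      # Two CW lasers detuned by 250 GHz give a sinusoidal beat that the fibre reshapes:
      t = np.linspace(-50e-12, 50e-12, 4096, endpoint=False)
      A0 = np.cos(2 * np.pi * 125e9 * t).astype(complex)
      A_out = split_step_nlse(A0, dt=t[1] - t[0])
      print(np.abs(A_out).max())   # peak amplitude changes as the beat note evolves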

  17. Large-Scale Coronal Heating from "Cool" Activity in the Solar Magnetic Network

    NASA Technical Reports Server (NTRS)

    Falconer, D. A.; Moore, R. L.; Porter, J. G.; Hathaway, D. H.

    1999-01-01

    In Fe XII images from SOHO/EIT, the quiet solar corona shows structure on scales ranging from sub-supergranular (i.e., bright points and coronal network) to multi-supergranular (large-scale corona). In Falconer et al. (1998, ApJ, 501, 386) we suppressed the large-scale background and found that the network-scale features are predominantly rooted in the magnetic network lanes at the boundaries of the supergranules. Taken together, the coronal network emission and bright point emission are only about 5% of the entire quiet solar coronal Fe XII emission. Here we investigate the relationship between the large-scale corona and the network as seen in three different EIT filters (He II, Fe IX-X, and Fe XII). Using the median-brightness contour, we divide the large-scale Fe XII corona into dim and bright halves, and find that the bright-half/dim-half brightness ratio is about 1.5. We also find that the bright half, relative to the dim half, has 10 times greater total bright point Fe XII emission, 3 times greater Fe XII network emission, 2 times greater Fe IX-X network emission, 1.3 times greater He II network emission, and 1.5 times more magnetic flux. Also, the cooler network (He II) radiates an order of magnitude more energy than the hotter coronal network (Fe IX-X, and Fe XII). From these results we infer that: 1) the heating of the network and the heating of the large-scale corona each increase roughly linearly with the underlying magnetic flux; 2) the production of network coronal bright points and heating of the coronal network each increase nonlinearly with the magnetic flux; 3) the heating of the large-scale corona is driven by widespread cooler network activity rather than by the exceptional network activity that produces the network coronal bright points and the coronal network; 4) the large-scale corona is heated by a nonthermal process since the driver of its heating is cooler than it is. This work was funded by the Solar Physics Branch of NASA's office of

  18. Toward Improved Support for Loosely Coupled Large Scale Simulation Workflows

    SciTech Connect

    Boehm, Swen; Elwasif, Wael R; Naughton, III, Thomas J; Vallee, Geoffroy R

    2014-01-01

    High-performance computing (HPC) workloads are increasingly leveraging loosely coupled large scale simulations. Unfortunately, most large-scale HPC platforms, including Cray/ALPS environments, are designed for the execution of long-running jobs based on coarse-grained launch capabilities (e.g., one MPI rank per core on all allocated compute nodes). This assumption limits capability-class workload campaigns that require large numbers of discrete or loosely coupled simulations, and where time-to-solution is an untenable pacing issue. This paper describes the challenges related to the support of fine-grained launch capabilities that are necessary for the execution of loosely coupled large scale simulations on Cray/ALPS platforms. More precisely, we present the details of an enhanced runtime system to support this use case, and report on initial results from early testing on systems at Oak Ridge National Laboratory.

  19. Acoustic Studies of the Large Scale Ocean Circulation

    NASA Technical Reports Server (NTRS)

    Menemenlis, Dimitris

    1999-01-01

    Detailed knowledge of ocean circulation and its transport properties is prerequisite to an understanding of the earth's climate and of important biological and chemical cycles. Results from two recent experiments, THETIS-2 in the Western Mediterranean and ATOC in the North Pacific, illustrate the use of ocean acoustic tomography for studies of the large scale circulation. The attraction of acoustic tomography is its ability to sample and average the large-scale oceanic thermal structure, synoptically, along several sections, and at regular intervals. In both studies, the acoustic data are compared to, and then combined with, general circulation models, meteorological analyses, satellite altimetry, and direct measurements from ships. Both studies provide complete regional descriptions of the time-evolving, three-dimensional, large scale circulation, albeit with large uncertainties. The studies raise serious issues about existing ocean observing capability and provide guidelines for future efforts.

  20. Coupling between convection and large-scale circulation

    NASA Astrophysics Data System (ADS)

    Becker, T.; Stevens, B. B.; Hohenegger, C.

    2014-12-01

    The ultimate drivers of convection - radiation, tropospheric humidity and surface fluxes - are altered both by the large-scale circulation and by convection itself. A quantity to which all drivers of convection contribute is moist static energy, or gross moist stability, respectively. Therefore, a variance analysis of the moist static energy budget in radiative-convective equilibrium helps in understanding the interaction of precipitating convection and the large-scale environment. In addition, this method provides insights concerning the impact of convective aggregation on this coupling. As a starting point, the interaction is analyzed with a general circulation model, but a model intercomparison study using a hierarchy of models is planned. Effective coupling parameters will be derived from cloud resolving models and these will in turn be related to assumptions used to parameterize convection in large-scale models.

  1. Large-scale current systems in the dayside Venus ionosphere

    NASA Technical Reports Server (NTRS)

    Luhmann, J. G.; Elphic, R. C.; Brace, L. H.

    1981-01-01

    The occasional observation of large-scale horizontal magnetic fields within the dayside ionosphere of Venus by the flux gate magnetometer on the Pioneer Venus orbiter suggests the presence of large-scale current systems. Using the measured altitude profiles of the magnetic field and the electron density and temperature, together with the previously reported neutral atmosphere density and composition, it is found that the local ionosphere can be described at these times by a simple steady state model which treats the unobserved quantities, such as the electric field, as parameters. When the model is appropriate, the altitude profiles of the ion and electron velocities and the currents along the satellite trajectory can be inferred. These results elucidate the configurations and sources of the ionospheric current systems which produce the observed large-scale magnetic fields, and in particular illustrate the effect of ion-neutral coupling in the determination of the current system at low altitudes.

  2. Do Large-Scale Topological Features Correlate with Flare Properties?

    NASA Astrophysics Data System (ADS)

    DeRosa, Marc L.; Barnes, Graham

    2016-05-01

    In this study, we aim to identify whether the presence or absence of particular topological features in the large-scale coronal magnetic field are correlated with whether a flare is confined or eruptive. To this end, we first determine the locations of null points, spine lines, and separatrix surfaces within the potential fields associated with the locations of several strong flares from the current and previous sunspot cycles. We then validate the topological skeletons against large-scale features in observations, such as the locations of streamers and pseudostreamers in coronagraph images. Finally, we characterize the topological environment in the vicinity of the flaring active regions and identify the trends involving their large-scale topologies and the properties of the associated flares.

  3. Magnetic Helicity and Large Scale Magnetic Fields: A Primer

    NASA Astrophysics Data System (ADS)

    Blackman, Eric G.

    2015-05-01

    Magnetic fields of laboratory, planetary, stellar, and galactic plasmas commonly exhibit significant order on large temporal or spatial scales compared to the otherwise random motions within the hosting system. Such ordered fields can be measured in the case of planets, stars, and galaxies, or inferred indirectly by the action of their dynamical influence, such as jets. Whether large scale fields are amplified in situ or a remnant from previous stages of an object's history is often debated for objects without a definitive magnetic activity cycle. Magnetic helicity, a measure of twist and linkage of magnetic field lines, is a unifying tool for understanding large scale field evolution for both mechanisms of origin. Its importance stems from its two basic properties: (1) magnetic helicity is typically better conserved than magnetic energy; and (2) the magnetic energy associated with a fixed amount of magnetic helicity is minimized when the system relaxes this helical structure to the largest scale available. Here I discuss how magnetic helicity has come to help us understand the saturation and sustenance of large-scale dynamos, the need for either local or global helicity fluxes to avoid dynamo quenching, and the associated observational consequences. I also discuss how magnetic helicity acts as a hindrance to turbulent diffusion of large scale fields, and thus a helper for fossil remnant large scale field origin models in some contexts. I briefly discuss the connection between large scale fields and accretion disk theory as well. The goal here is to provide a conceptual primer to help the reader efficiently penetrate the literature.

  4. The Evolution of Baryons in Cosmic Large Scale Structure

    NASA Astrophysics Data System (ADS)

    Snedden, Ali; Arielle Phillips, Lara; Mathews, Grant James; Coughlin, Jared; Suh, In-Saeng; Bhattacharya, Aparna

    2015-01-01

    The environments of galaxies play a critical role in their formation and evolution. We study these environments using cosmological simulations with star formation and supernova feedback included. From these simulations, we parse the large scale structure into clusters, filaments and voids using a segmentation algorithm adapted from medical imaging. We trace the star formation history, gas phase and metal evolution of the baryons in the intergalactic medium as a function of structure. We find that our algorithm reproduces the baryon fraction in the intracluster medium and that the majority of star formation occurs in cold, dense filaments. We present the consequences this large scale environment has for galactic halos and galaxy evolution.

  5. Corridors Increase Plant Species Richness at Large Scales

    SciTech Connect

    Damschen, Ellen I.; Haddad, Nick M.; Orrock,John L.; Tewksbury, Joshua J.; Levey, Douglas J.

    2006-09-01

    Habitat fragmentation is one of the largest threats to biodiversity. Landscape corridors, which are hypothesized to reduce the negative consequences of fragmentation, have become common features of ecological management plans worldwide. Despite their popularity, there is little evidence documenting the effectiveness of corridors in preserving biodiversity at large scales. Using a large-scale replicated experiment, we showed that habitat patches connected by corridors retain more native plant species than do isolated patches, that this difference increases over time, and that corridors do not promote invasion by exotic species. Our results support the use of corridors in biodiversity conservation.

  6. Large-scale ER-damper for seismic protection

    NASA Astrophysics Data System (ADS)

    McMahon, Scott; Makris, Nicos

    1997-05-01

    A large scale electrorheological (ER) damper has been designed, constructed, and tested. The damper consists of a main cylinder and a piston rod that pushes an ER-fluid through a number of stationary annular ducts. This damper is a scaled-up version of a prototype ER-damper which has been developed and extensively studied in the past. In this paper, results from comprehensive testing of the large-scale damper are presented, and the proposed theory developed for predicting the damper response is validated.

  7. Survey of decentralized control methods. [for large scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Athans, M.

    1975-01-01

    An overview is presented of the types of problems that are being considered by control theorists in the area of dynamic large scale systems with emphasis on decentralized control strategies. Approaches that deal directly with decentralized decision making for large scale systems are discussed. It is shown that future advances in decentralized system theory are intimately connected with advances in the stochastic control problem with nonclassical information pattern. The basic assumptions and mathematical tools associated with the latter are summarized, and recommendations concerning future research are presented.

  8. Clearing and Labeling Techniques for Large-Scale Biological Tissues

    PubMed Central

    Seo, Jinyoung; Choe, Minjin; Kim, Sung-Yon

    2016-01-01

    Clearing and labeling techniques for large-scale biological tissues enable simultaneous extraction of molecular and structural information with minimal disassembly of the sample, facilitating the integration of molecular, cellular and systems biology across different scales. Recent years have witnessed an explosive increase in the number of such methods and their applications, reflecting heightened interest in organ-wide clearing and labeling across many fields of biology and medicine. In this review, we provide an overview and comparison of existing clearing and labeling techniques and discuss challenges and opportunities in the investigations of large-scale biological systems. PMID:27239813

  9. Large-scale liquid scintillation detectors for solar neutrinos

    NASA Astrophysics Data System (ADS)

    Benziger, Jay B.; Calaprice, Frank P.

    2016-04-01

    Large-scale liquid scintillation detectors are capable of providing spectral yields of the low energy solar neutrinos. These detectors require > 100 tons of liquid scintillator with high optical and radiopurity. In this paper requirements for low-energy neutrino detection by liquid scintillation are specified and the procedures to achieve low backgrounds in large-scale liquid scintillation detectors for solar neutrinos are reviewed. The designs, operations and achievements of Borexino, KamLAND and SNO+ in measuring the low-energy solar neutrino fluxes are reviewed.

  10. Contribution of peculiar shear motions to large-scale structure

    NASA Technical Reports Server (NTRS)

    Mueler, Hans-Reinhard; Treumann, Rudolf A.

    1994-01-01

    Self-gravitating shear flow instability simulations in a cold dark matter-dominated expanding Einstein-de Sitter universe have been performed. When the shear flow speed exceeds a certain threshold, self-gravitating Kelvin-Helmholtz instability occurs, forming density voids and excesses along the shear flow layer which serve as seeds for large-scale structure formation. A possible mechanism for generating shear peculiar motions is velocity fluctuations induced by the density perturbations of the postinflation era. In this scenario, short scales grow earlier than large scales. A model of this kind may contribute to the cellular structure of the luminous mass distribution in the universe.

  11. Clearing and Labeling Techniques for Large-Scale Biological Tissues.

    PubMed

    Seo, Jinyoung; Choe, Minjin; Kim, Sung-Yon

    2016-06-30

    Clearing and labeling techniques for large-scale biological tissues enable simultaneous extraction of molecular and structural information with minimal disassembly of the sample, facilitating the integration of molecular, cellular and systems biology across different scales. Recent years have witnessed an explosive increase in the number of such methods and their applications, reflecting heightened interest in organ-wide clearing and labeling across many fields of biology and medicine. In this review, we provide an overview and comparison of existing clearing and labeling techniques and discuss challenges and opportunities in the investigations of large-scale biological systems. PMID:27239813

  12. Actor-critic-based optimal tracking for partially unknown nonlinear discrete-time systems.

    PubMed

    Kiumarsi, Bahare; Lewis, Frank L

    2015-01-01

    This paper presents a partially model-free adaptive optimal control solution to the deterministic nonlinear discrete-time (DT) tracking control problem in the presence of input constraints. The tracking error dynamics and reference trajectory dynamics are first combined to form an augmented system. Then, a new discounted performance function based on the augmented system is presented for the optimal nonlinear tracking problem. In contrast to the standard solution, which finds the feedforward and feedback terms of the control input separately, the minimization of the proposed discounted performance function gives both feedback and feedforward parts of the control input simultaneously. This enables us to encode the input constraints into the optimization problem using a nonquadratic performance function. The DT tracking Bellman equation and tracking Hamilton-Jacobi-Bellman (HJB) equation are derived. An actor-critic-based reinforcement learning algorithm is used to learn the solution to the tracking HJB equation online without requiring knowledge of the system drift dynamics. That is, two neural networks (NNs), namely, actor NN and critic NN, are tuned online and simultaneously to generate the optimal bounded control policy. A simulation example is given to show the effectiveness of the proposed method. PMID:25312944
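
    A model-based sketch helps make the augmented-system formulation concrete: for a toy linear system with known dynamics, value iteration on the discounted tracking Bellman equation yields a single gain acting on the augmented state [e; r], i.e., feedback and feedforward at once. The paper's actor-critic algorithm learns this same kind of solution without knowing the drift dynamics; the matrices below are illustrative assumptions.

      import numpy as np

      Asys = np.array([[0.9, 0.1], [0.0, 0.8]])        # plant: x+ = A x + B u
      Bsys = np.array([[0.0], [0.5]])
      F = np.array([[0.99, 0.0], [0.0, 0.95]])         # reference generator: r+ = F r

      # Augmented state X = [e; r], with e = x - r, so e+ = A e + B u + (A - F) r.
      A = np.block([[Asys, Asys - F], [np.zeros((2, 2)), F]])
      B = np.vstack([Bsys, np.zeros((2, 1))])
      Q = np.diag([1.0, 1.0, 0.0, 0.0])                # penalise the tracking error only
      R = np.array([[0.1]])
      gamma = 0.9                                      # discount of the performance index

      P = np.zeros_like(A)
      for _ in range(500):                             # value iteration on the Bellman eq.
          S = np.linalg.inv(R + gamma * B.T @ P @ B)
          P = Q + gamma * A.T @ P @ A - gamma ** 2 * A.T @ P @ B @ S @ B.T @ P @ A

      K = gamma * np.linalg.inv(R + gamma * B.T @ P @ B) @ B.T @ P @ A
      print(K)   # u = -K X acts on [e; r]: feedback and feedforward in one gain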

  13. The Effective Field Theory of Large Scale Structures at two loops

    SciTech Connect

    Carrasco, John Joseph M.; Foreman, Simon; Green, Daniel; Senatore, Leonardo

    2014-07-01

    Large scale structure surveys promise to be the next leading probe of cosmological information. It is therefore crucial to reliably predict their observables. The Effective Field Theory of Large Scale Structures (EFTofLSS) provides a manifestly convergent perturbation theory for the weakly non-linear regime of dark matter, where correlation functions are computed in an expansion of the wavenumber k of a mode over the wavenumber associated with the non-linear scale k_NL. Since most of the information is contained at high wavenumbers, it is necessary to compute higher order corrections to correlation functions. After the one-loop correction to the matter power spectrum, we estimate that the next leading one is the two-loop contribution, which we compute here. At this order in k/k_NL, there is only one counterterm in the EFTofLSS that must be included, though this term contributes both at tree-level and in several one-loop diagrams. We also discuss correlation functions involving the velocity and momentum fields. We find that the EFTofLSS prediction at two loops matches to percent accuracy the non-linear matter power spectrum at redshift zero up to k ∼ 0.6 h Mpc^-1, requiring just one unknown coefficient that needs to be fit to observations. Given that Standard Perturbation Theory stops converging at redshift zero at k ∼ 0.1 h Mpc^-1, our results demonstrate the possibility of accessing a factor of order 200 more dark matter quasi-linear modes than naively expected. If the remaining observational challenges to accessing these modes can be addressed with similar success, our results show that there is tremendous potential for large scale structure surveys to explore the primordial universe.

  14. Accelerated Block Preconditioned Gradient method for large scale wave functions calculations in Density Functional Theory

    SciTech Connect

    Fattebert, J.-L.

    2010-01-20

    An Accelerated Block Preconditioned Gradient (ABPG) method is proposed to solve electronic structure problems in Density Functional Theory. This iterative algorithm is designed to solve directly the non-linear Kohn-Sham equations for accurate discretization schemes involving a large number of degrees of freedom. It makes use of an acceleration scheme similar to what is known as RMM-DIIS in the electronic structure community. The method is illustrated with examples of convergence for large scale applications using a finite difference discretization and multigrid preconditioning.
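
    The flavor of the RMM-DIIS-style acceleration mentioned above can be conveyed with a generic fixed-point iteration: keep a short history of iterates and residuals, and build the next iterate from the residual-minimising affine combination of that history. The toy map below stands in for the Kohn-Sham self-consistency loop and is not the ABPG code.

      import numpy as np

      def g(x):                        # hypothetical fixed-point map (slow by itself)
          return np.cos(x)

      def diis(x0, n_iter=20, depth=5):
          x = np.atleast_1d(np.asarray(x0, dtype=float))
          xs, rs = [], []
          for _ in range(n_iter):
              r = g(x) - x                             # residual of the current iterate
              xs.append(x.copy()); rs.append(r.copy())
              xs, rs = xs[-depth:], rs[-depth:]        # keep only a short history
              m = len(rs)
              # Coefficients minimising ||sum c_i r_i|| subject to sum c_i = 1.
              B = np.zeros((m + 1, m + 1))
              B[:m, :m] = [[ri @ rj for rj in rs] for ri in rs]
              B[m, :m] = B[:m, m] = 1.0
              rhs = np.zeros(m + 1); rhs[m] = 1.0
              c = np.linalg.lstsq(B, rhs, rcond=None)[0][:m]
              x = sum(ci * (xi + ri) for ci, xi, ri in zip(c, xs, rs))
              if np.linalg.norm(g(x) - x) < 1e-12:
                  break
          return x

      print(diis(1.0))   # fixed point of cos(x) = x, about 0.739085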

  15. Bayesian large-scale structure inference: initial conditions and the cosmic web

    NASA Astrophysics Data System (ADS)

    Leclercq, Florent; Wandelt, Benjamin

    2014-05-01

    We describe an innovative statistical approach for the ab initio simultaneous analysis of the formation history and morphology of the large-scale structure of the inhomogeneous Universe. Our algorithm explores the joint posterior distribution of the many millions of parameters involved via efficient Hamiltonian Markov Chain Monte Carlo sampling. We describe its application to the Sloan Digital Sky Survey data release 7 and an additional non-linear filtering step. We illustrate the use of our findings for cosmic web analysis: identification of structures via tidal shear analysis and inference of dark matter voids.
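
    For readers unfamiliar with the sampler family, a minimal Hamiltonian Monte Carlo step (leapfrog integration plus a Metropolis accept/reject on the energy error) looks as follows on a toy Gaussian target; the authors' large-scale implementation is of course far more elaborate.

      import numpy as np

      rng = np.random.default_rng(1)

      def neg_log_post(q):      # toy posterior: standard normal in d dimensions
          return 0.5 * q @ q

      def grad(q):
          return q

      def hmc_step(q, eps=0.1, n_leap=20):
          p = rng.standard_normal(q.size)              # resample momenta
          q_new, p_new = q.copy(), p.copy()
          p_new -= 0.5 * eps * grad(q_new)             # leapfrog integration
          for _ in range(n_leap - 1):
              q_new += eps * p_new
              p_new -= eps * grad(q_new)
          q_new += eps * p_new
          p_new -= 0.5 * eps * grad(q_new)
          # Metropolis accept/reject on the Hamiltonian (energy) error.
          dH = (neg_log_post(q_new) + 0.5 * p_new @ p_new) \
               - (neg_log_post(q) + 0.5 * p @ p)
          return q_new if np.log(rng.random()) < -dH else q

      q, samples = np.zeros(10), []
      for _ in range(2000):
          q = hmc_step(q)
          samples.append(q.copy())
      print(np.std(samples))   # should be close to 1 for the unit Gaussian target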

  16. Contributions to the understanding of large-scale coherent structures in developing free turbulent shear flows

    NASA Technical Reports Server (NTRS)

    Liu, J. T. C.

    1986-01-01

    Advances in the mechanics of boundary layer flow are reported. The physical problem of large-scale coherent structures in real, developing free turbulent shear flows is addressed from the nonlinear aspects of hydrodynamic stability. The problem, whether fine-grained turbulence is present or absent, lacks a small parameter. It is formulated on the basis of conservation principles, which express the dynamics of the problem, and is directed towards extracting the most physical information; it is emphasized, however, that approximations must also be involved.

  17. Time evolution of parametric instability in large-scale gravitational-wave interferometers

    NASA Astrophysics Data System (ADS)

    Danilishin, Stefan L.; Vyatchanin, Sergey P.; Blair, David G.; Li, Ju; Zhao, Chunnong

    2014-12-01

    We present a study of three-mode parametric instability in large-scale gravitational-wave detectors. Previous work used a linearized model to study the onset of instability. This paper presents a nonlinear study of this phenomenon, which shows that the initial stage of an exponential rise of the amplitudes of a higher-order optical mode and the mechanical internal mode of the mirror is followed by a saturation phase, in which all three participating modes reach a new equilibrium state with constant oscillation amplitudes. Results suggest that stable operation of interferometers may be possible in the presence of such instabilities, thereby simplifying the task of suppression.

  18. The effect of background turbulence on the propagation of large-scale flames

    NASA Astrophysics Data System (ADS)

    Matalon, Moshe

    2008-12-01

    This paper is based on an invited presentation at the Conference on Turbulent Mixing and Beyond held in the Abdus Salam International Center for Theoretical Physics, Trieste, Italy (August 2007). It consists of a summary of recent investigations aimed at understanding the nature and consequences of the Darrieus-Landau instability that is prominent in premixed combustion. It describes rigorous asymptotic methodologies used to simplify the propagation problem of multi-dimensional and time-dependent premixed flames in order to understand the nonlinear evolution of hydrodynamically unstable flames. In particular, it addresses the effect of background turbulent noise on the structure and propagation of large-scale flames.

  19. A Regression Algorithm for Model Reduction of Large-Scale Multi-Dimensional Problems

    NASA Astrophysics Data System (ADS)

    Rasekh, Ehsan

    2011-11-01

    Model reduction is an approach for fast and cost-efficient modelling of large-scale systems governed by Ordinary Differential Equations (ODEs). Multi-dimensional model reduction has been suggested for reducing linear systems simultaneously with respect to frequency and any other parameter of interest. Multi-dimensional model reduction is also used to reduce weakly nonlinear systems based on Volterra theory. Multiple dimensions degrade the efficiency of reduction by increasing the size of the projection matrix. In this paper a new methodology is proposed to efficiently build the reduced model based on regression analysis. A numerical example confirms the validity of the proposed regression algorithm for model reduction.
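
    The projection matrix at the heart of such methods can be illustrated with a generic snapshot-based (POD/Galerkin) reduction of a linear system. This shows the object whose growth with added dimensions motivates the paper's regression approach; it is not the proposed algorithm itself, and the system below is a random stable stand-in.

      import numpy as np

      rng = np.random.default_rng(2)
      n, r = 200, 10
      A = -np.eye(n) + 0.1 * rng.standard_normal((n, n)) / np.sqrt(n)   # stable system
      b = rng.standard_normal(n)

      x, dt, snaps = np.zeros(n), 0.01, []          # snapshots of x' = A x + b
      for _ in range(500):
          x = x + dt * (A @ x + b)
          snaps.append(x.copy())

      U, _, _ = np.linalg.svd(np.array(snaps).T, full_matrices=False)
      V = U[:, :r]                                  # projection basis: dominant POD modes

      Ar, br = V.T @ A @ V, V.T @ b                 # Galerkin-reduced operators (r x r)
      print(Ar.shape, br.shape)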

  20. Optimal Energy Measurement in Nonlinear Systems: An Application of Differential Geometry

    NASA Technical Reports Server (NTRS)

    Fixsen, Dale J.; Moseley, S. H.; Gerrits, T.; Lita, A.; Nam, S. W.

    2014-01-01

    Design of TES microcalorimeters requires a tradeoff between resolution and dynamic range. Often, experimenters will require linearity for the highest energy signals, which requires additional heat capacity be added to the detector. This results in a reduction of low energy resolution in the detector. We derive and demonstrate an algorithm that allows operation far into the nonlinear regime with little loss in spectral resolution. We use a least squares optimal filter that varies with photon energy to accommodate the nonlinearity of the detector and the non-stationarity of the noise. The fitting process we use can be seen as an application of differential geometry. This recognition provides a set of well-developed tools to extend our work to more complex situations. The proper calibration of a nonlinear microcalorimeter requires a source with densely spaced narrow lines. A pulsed laser multi-photon source is used here, and is seen to be a powerful tool for allowing us to develop practical systems with significant detector nonlinearity. The combination of our analysis techniques and the multi-photon laser source create a powerful tool for increasing the performance of future TES microcalorimeters.
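
    For a single pulse record, the least-squares optimal filter reduces to a generalised least-squares amplitude estimate against an energy-dependent template under a noise covariance. The sketch below uses a toy template and white noise; in the detector described above, both the template and the covariance vary with photon energy.

      import numpy as np

      rng = np.random.default_rng(3)
      t = np.arange(512)
      s = np.exp(-t / 80.0) - np.exp(-t / 10.0)     # pulse template at this energy
      sigma = 0.05
      C = sigma ** 2 * np.eye(t.size)               # noise covariance (toy: white)

      d = 3.0 * s + sigma * rng.standard_normal(t.size)   # measured pulse record

      Cinv_s = np.linalg.solve(C, s)
      amp = (Cinv_s @ d) / (Cinv_s @ s)             # generalised least-squares amplitude
      print(amp)                                    # close to the true amplitude 3.0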

  1. Assuring Quality in Large-Scale Online Course Development

    ERIC Educational Resources Information Center

    Parscal, Tina; Riemer, Deborah

    2010-01-01

    Student demand for online education requires colleges and universities to rapidly expand the number of courses and programs offered online while maintaining high quality. This paper outlines two universities' respective processes to assure quality in large-scale online programs that integrate instructional design, eBook custom publishing, Quality…

  2. Large-scale search for dark-matter axions

    SciTech Connect

    Hagmann, C.A., LLNL; Kinion, D.; Stoeffl, W.; Van Bibber, K.; Daw, E.J.; McBride, J.; Peng, H.; Rosenberg, L.J.; Xin, H.; Laveigne, J.; Sikivie, P.; Sullivan, N.S.; Tanner, D.B.; Moltz, D.M.; Powell, J.; Clarke, J.; Nezrick, F.A.; Turner, M.S.; Golubev, N.A.; Kravchuk, L.V.

    1998-01-01

    Early results from a large-scale search for dark matter axions are presented. In this experiment, axions constituting our dark-matter halo may be resonantly converted to monochromatic microwave photons in a high-Q microwave cavity permeated by a strong magnetic field. Sensitivity at the level of one important axion model (KSVZ) has been demonstrated.

  3. DESIGN OF LARGE-SCALE AIR MONITORING NETWORKS

    EPA Science Inventory

    The potential effects of air pollution on human health have received much attention in recent years. In the U.S. and other countries, there are extensive large-scale monitoring networks designed to collect data to inform the public of exposure risks to air pollution. A major crit...

  4. Over-driven control for large-scale MR dampers

    NASA Astrophysics Data System (ADS)

    Friedman, A. J.; Dyke, S. J.; Phillips, B. M.

    2013-04-01

    As semi-active electro-mechanical control devices increase in scale for use in real-world civil engineering applications, their dynamics become increasingly complicated. Control designs that are able to take these characteristics into account will be more effective in achieving good performance. Large-scale magnetorheological (MR) dampers exhibit a significant time lag in their force-response to voltage inputs, reducing the efficacy of typical controllers designed for smaller scale devices where the lag is negligible. A new control algorithm is presented for large-scale MR devices that uses over-driving and back-driving of the commands to overcome the challenges associated with the dynamics of these large-scale MR dampers. An illustrative numerical example is considered to demonstrate the controller performance. Via simulations of the structure using several seismic ground motions, the merits of the proposed control strategy to achieve reductions in various response parameters are examined and compared against several accepted control algorithms. Experimental evidence is provided to validate the improved capabilities of the proposed controller in achieving the desired control force levels. Through real-time hybrid simulation (RTHS), the proposed controllers are also examined and experimentally evaluated in terms of their efficacy and robust performance. The results demonstrate that the proposed control strategy has superior performance over typical control algorithms when paired with a large-scale MR damper, and is robust for structural control applications.
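
    The over-driving idea can be conveyed with a first-order-lag toy model of the damper's force response to its command: pushing the command past the target in proportion to the remaining error beats the lag. The time constant, gain, and saturation limits below are hypothetical, not measured device parameters.

      import numpy as np

      tau, dt, target = 0.05, 0.001, 1.0            # lag constant [s], step [s], setpoint

      def settle_time(gain):
          v, t = 0.0, 0.0
          while abs(v - target) > 0.02:
              u = target + gain * (target - v)      # gain = 0 reproduces the plain command
              u = np.clip(u, 0.0, 2.0)              # actuator saturation
              v += dt * (u - v) / tau               # first-order force-response lag
              t += dt
          return t

      print(settle_time(0.0), settle_time(5.0))     # over-driving settles much faster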

  5. Large-scale search for dark-matter axions

    SciTech Connect

    Kinion, D; van Bibber, K

    2000-08-30

    We review the status of two ongoing large-scale searches for axions which may constitute the dark matter of our Milky Way halo. The experiments are based on the microwave cavity technique proposed by Sikivie, and mark a "second generation" of the original experiments performed by the Rochester-Brookhaven-Fermilab collaboration and the University of Florida group.

  6. Large-Scale Innovation and Change in UK Higher Education

    ERIC Educational Resources Information Center

    Brown, Stephen

    2013-01-01

    This paper reflects on challenges universities face as they respond to change. It reviews current theories and models of change management, discusses why universities are particularly difficult environments in which to achieve large scale, lasting change and reports on a recent attempt by the UK JISC to enable a range of UK universities to employ…

  7. The Large-Scale Structure of Scientific Method

    ERIC Educational Resources Information Center

    Kosso, Peter

    2009-01-01

    The standard textbook description of the nature of science describes the proposal, testing, and acceptance of a theoretical idea almost entirely in isolation from other theories. The resulting model of science is a kind of piecemeal empiricism that misses the important network structure of scientific knowledge. Only the large-scale description of…

  8. Individual Skill Differences and Large-Scale Environmental Learning

    ERIC Educational Resources Information Center

    Fields, Alexa W.; Shelton, Amy L.

    2006-01-01

    Spatial skills are known to vary widely among normal individuals. This project was designed to address whether these individual differences are differentially related to large-scale environmental learning from route (ground-level) and survey (aerial) perspectives. Participants learned two virtual environments (route and survey) with limited…

  9. Mixing Metaphors: Building Infrastructure for Large Scale School Turnaround

    ERIC Educational Resources Information Center

    Peurach, Donald J.; Neumerski, Christine M.

    2015-01-01

    The purpose of this analysis is to increase understanding of the possibilities and challenges of building educational infrastructure--the basic, foundational structures, systems, and resources--to support large-scale school turnaround. Building educational infrastructure often exceeds the capacity of schools, districts, and state education…

  10. Large-scale drift and Rossby wave turbulence

    NASA Astrophysics Data System (ADS)

    Harper, K. L.; Nazarenko, S. V.

    2016-08-01

    We study drift/Rossby wave turbulence described by the large-scale limit of the Charney–Hasegawa–Mima equation. We define the zonal and meridional regions as Z := {k : |k_y| > √3 k_x} and M := {k : |k_y| < √3 k_x} respectively, where k = (k_x, k_y) is in a plane perpendicular to the magnetic field such that k_x is along the isopycnals and k_y is along the plasma density gradient. We prove that the only types of resonant triads allowed are M ↔ M + Z and Z ↔ Z + Z. Therefore, if the spectrum of weak large-scale drift/Rossby turbulence is initially in Z it will remain in Z indefinitely. We present a generalised Fjørtoft's argument to find transfer directions for the quadratic invariants in the two-dimensional k-space. Using direct numerical simulations, we test and confirm our theoretical predictions for weak large-scale drift/Rossby turbulence, and establish qualitative differences with cases when turbulence is strong. We demonstrate that the qualitative features of the large-scale limit survive when the typical turbulent scale is only moderately greater than the Larmor/Rossby radius.

  11. Large Scale Field Campaign Contributions to Soil Moisture Remote Sensing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Large-scale field experiments have been an essential component of soil moisture remote sensing for over two decades. They have provided test beds for both the technology and science necessary to develop and refine satellite mission concepts. The high degree of spatial variability of soil moisture an...

  12. Large-scale V/STOL testing. [in wind tunnels

    NASA Technical Reports Server (NTRS)

    Koenig, D. G.; Aiken, T. N.; Aoyagi, K.; Falarski, M. D.

    1977-01-01

    Several facets of large-scale testing of V/STOL aircraft configurations are discussed with particular emphasis on test experience in the Ames 40- by 80-foot wind tunnel. Examples of powered-lift test programs are presented in order to illustrate tradeoffs confronting the planner of V/STOL test programs. It is indicated that large-scale V/STOL wind-tunnel testing can sometimes compete with small-scale testing in the effort required (overall test time) and program costs because of the possibility of conducting a number of different tests with a single large-scale model where several small-scale models would be required. The benefits of both high- and full-scale Reynolds numbers, more detailed configuration simulation, and the number and type of onboard measurements increase rapidly with scale. Planning must be more detailed at large scale in order to balance the trade-offs between the increased costs, as the number of measurements and model configuration variables increases, and the benefits of the larger amounts of information coming out of one test.

  13. Current Scientific Issues in Large Scale Atmospheric Dynamics

    NASA Technical Reports Server (NTRS)

    Miller, T. L. (Compiler)

    1986-01-01

    Topics in large scale atmospheric dynamics are discussed. Aspects of atmospheric blocking, the influence of transient baroclinic eddies on planetary-scale waves, cyclogenesis, the effects of orography on planetary scale flow, small scale frontal structure, and simulations of gravity waves in frontal zones are discussed.

  14. Large-Scale Machine Learning for Classification and Search

    ERIC Educational Resources Information Center

    Liu, Wei

    2012-01-01

    With the rapid development of the Internet, nowadays tremendous amounts of data including images and videos, up to millions or billions, can be collected for training machine learning models. Inspired by this trend, this thesis is dedicated to developing large-scale machine learning techniques for the purpose of making classification and nearest…

  15. Considerations for Managing Large-Scale Clinical Trials.

    ERIC Educational Resources Information Center

    Tuttle, Waneta C.; And Others

    1989-01-01

    Research management strategies used effectively in a large-scale clinical trial to determine the health effects of exposure to Agent Orange in Vietnam are discussed, including pre-project planning, organization according to strategy, attention to scheduling, a team approach, emphasis on guest relations, cross-training of personnel, and preparing…

  16. Ecosystem resilience despite large-scale altered hydro climatic conditions

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Climate change is predicted to increase both drought frequency and duration, and when coupled with substantial warming, will establish a new hydroclimatological paradigm for many regions. Large-scale, warm droughts have recently impacted North America, Africa, Europe, Amazonia, and Australia result...

  17. Lessons from Large-Scale Renewable Energy Integration Studies: Preprint

    SciTech Connect

    Bird, L.; Milligan, M.

    2012-06-01

    In general, large-scale integration studies in Europe and the United States find that high penetrations of renewable generation are technically feasible with operational changes and increased access to transmission. This paper describes other key findings such as the need for fast markets, large balancing areas, system flexibility, and the use of advanced forecasting.

  18. Probabilistic Cuing in Large-Scale Environmental Search

    ERIC Educational Resources Information Center

    Smith, Alastair D.; Hood, Bruce M.; Gilchrist, Iain D.

    2010-01-01

    Finding an object in our environment is an important human ability that also represents a critical component of human foraging behavior. One type of information that aids efficient large-scale search is the likelihood of the object being in one location over another. In this study we investigated the conditions under which individuals respond to…

  19. Extracting Useful Semantic Information from Large Scale Corpora of Text

    ERIC Educational Resources Information Center

    Mendoza, Ray Padilla, Jr.

    2012-01-01

    Extracting and representing semantic information from large scale corpora is at the crux of computer-assisted knowledge generation. Semantic information depends on collocation extraction methods, mathematical models used to represent distributional information, and weighting functions which transform the space. This dissertation provides a…

  20. Efficient On-Demand Operations in Large-Scale Infrastructures

    ERIC Educational Resources Information Center

    Ko, Steven Y.

    2009-01-01

    In large-scale distributed infrastructures such as clouds, Grids, peer-to-peer systems, and wide-area testbeds, users and administrators typically desire to perform "on-demand operations" that deal with the most up-to-date state of the infrastructure. However, the scale and dynamism present in the operating environment make it challenging to…

  1. Large-Scale Environmental Influences on Aquatic Animal Health

    EPA Science Inventory

    In the latter portion of the 20th century, North America experienced numerous large-scale mortality events affecting a broad diversity of aquatic animals. Short-term forensic investigations of these events have sometimes characterized a causative agent or condition, but have rare...

  2. Resilience of Florida Keys coral communities following large scale disturbances

    EPA Science Inventory

    The decline of coral reefs in the Caribbean over the last 40 years has been attributed to multiple chronic stressors and episodic large-scale disturbances. This study assessed the resilience of coral communities in two different regions of the Florida Keys reef system between 199...

  3. Large Scale Survey Data in Career Development Research

    ERIC Educational Resources Information Center

    Diemer, Matthew A.

    2008-01-01

    Large scale survey datasets have been underutilized but offer numerous advantages for career development scholars, as they contain numerous career development constructs with large and diverse samples that are followed longitudinally. Constructs such as work salience, vocational expectations, educational expectations, work satisfaction, and…

  4. Polymers in 2D Turbulence: Suppression of Large Scale Fluctuations

    NASA Astrophysics Data System (ADS)

    Amarouchene, Y.; Kellay, H.

    2002-08-01

    Small quantities of a long chain molecule or polymer affect two-dimensional turbulence in unexpected ways. Their presence inhibits the transfers of energy to large scales causing their suppression in the energy density spectrum. This also leads to the change of the spectral properties of a passive scalar which turns out to be highly sensitive to the presence of energy transfers.

  5. Creating a Large-Scale, Third Generation, Distance Education Course.

    ERIC Educational Resources Information Center

    Weller, Martin James

    2000-01-01

    Outlines the course development of an introductory large-scale distance education course offered via the World Wide Web at the Open University in the United Kingdom. Topics include developing appropriate student skills; maintaining quality control; facilitating easy updating of material; ensuring student interaction; and making materials…

  6. Cosmic strings and the large-scale structure

    NASA Technical Reports Server (NTRS)

    Stebbins, Albert

    1988-01-01

    A possible problem for cosmic string models of galaxy formation is presented. If very large voids are common and if loop fragmentation is not much more efficient than presently believed, then it may be impossible for string scenarios to produce the observed large-scale structure with Omega sub 0 = 1 and without strong environmental biasing.

  7. International Large-Scale Assessments: What Uses, What Consequences?

    ERIC Educational Resources Information Center

    Johansson, Stefan

    2016-01-01

    Background: International large-scale assessments (ILSAs) are a much-debated phenomenon in education. Increasingly, their outcomes attract considerable media attention and influence educational policies in many jurisdictions worldwide. The relevance, uses and consequences of these assessments are often the focus of research scrutiny. Whilst some…

  8. Measurement, Sampling, and Equating Errors in Large-Scale Assessments

    ERIC Educational Resources Information Center

    Wu, Margaret

    2010-01-01

    In large-scale assessments, such as state-wide testing programs, national sample-based assessments, and international comparative studies, there are many steps involved in the measurement and reporting of student achievement. There are always sources of inaccuracies in each of the steps. It is of interest to identify the source and magnitude of…

  9. A bibliographical survey of large-scale systems

    NASA Technical Reports Server (NTRS)

    Corliss, W. R.

    1970-01-01

    A limited, partly annotated bibliography was prepared on the subject of large-scale system control. Approximately 400 references are divided into thirteen application areas, such as large societal systems and large communication systems. A first-author index is provided.

  10. Large-Scale Networked Virtual Environments: Architecture and Applications

    ERIC Educational Resources Information Center

    Lamotte, Wim; Quax, Peter; Flerackers, Eddy

    2008-01-01

    Purpose: Scalability is an important research topic in the context of networked virtual environments (NVEs). This paper aims to describe the ALVIC (Architecture for Large-scale Virtual Interactive Communities) approach to NVE scalability. Design/methodology/approach: The setup and results from two case studies are shown: a 3-D learning environment…

  11. Response of Tradewind Cumuli to Large-Scale Processes.

    NASA Astrophysics Data System (ADS)

    Soong, S.-T.; Ogura, Y.

    1980-09-01

    The two-dimensional slab-symmetric numerical cloud model used by Soong and Ogura (1973) for studying the evolution of an isolated cumulus cloud is extended to investigate the statistical properties of cumulus clouds which would be generated under a given large-scale forcing composed of the horizontal advection of temperature and water vapor mixing ratio, vertical velocity, sea surface temperature and radiative cooling. Random disturbances of small amplitude are introduced into the model at low levels to provide random motion for cloud formation. The model is applied to a case of suppressed weather conditions during BOMEX for the period 22-23 June 1969 when a nearly steady state prevailed. The composited temperature and mixing ratio profiles of these two days are used as initial conditions and the time-independent large-scale forcing terms estimated from the observations are applied to the model. The result of numerical integration shows that a number of small clouds start developing after 1 h. Some of them decay quickly, but some of them develop and reach the tradewind inversion. After a few hours of simulation, the vertical profiles of the horizontally averaged temperature and moisture are found to deviate only slightly from the observed profiles, indicating that the large-scale effect and the feedback effects of clouds on temperature and mixing ratio reach an equilibrium state. The three major components of the cloud feedback effect, i.e., condensation, evaporation and vertical fluxes associated with the clouds, are determined from the model output. The vertical profiles of vertical heat and moisture fluxes in the subcloud layer in the model are found to be in general agreement with the observations. Sensitivity tests of the model are made for different magnitudes of the large-scale vertical velocity. The most striking result is that the temperature and humidity in the cloud layer below the inversion do not change significantly in spite of a relatively large

  12. Large-scale computations in analysis of structures

    SciTech Connect

    McCallen, D.B.; Goudreau, G.L.

    1993-09-01

    Computer hardware and numerical analysis algorithms have progressed to a point where many engineering organizations and universities can perform nonlinear analyses on a routine basis. Though much remains to be done in terms of advancement of nonlinear analysis techniques and characterization of nonlinear material constitutive behavior, the technology exists today to perform useful nonlinear analysis for many structural systems. In the current paper, a survey of nonlinear analysis technologies developed and employed for many years on programmatic defense work at the Lawrence Livermore National Laboratory is provided, and ongoing nonlinear numerical simulation projects relevant to the civil engineering field are described.

  13. TOPOLOGY OF A LARGE-SCALE STRUCTURE AS A TEST OF MODIFIED GRAVITY

    SciTech Connect

    Wang Xin; Chen Xuelei; Park, Changbom

    2012-03-01

    The genus of the isodensity contours is a robust measure of the topology of a large-scale structure, and it is relatively insensitive to nonlinear gravitational evolution, galaxy bias, and redshift-space distortion. We show that the growth of density fluctuations is scale dependent even in the linear regime in some modified gravity theories, which opens a new possibility of testing the theories observationally. We propose to use the genus of the isodensity contours, an intrinsic measure of the topology of the large-scale structure, as a statistic to be used in such tests. In Einstein's general theory of relativity, density fluctuations grow at the same rate on all scales in the linear regime, and the genus per comoving volume is almost conserved as structures grow homologously, so we expect that the genus-smoothing-scale relation is basically time independent. However, in some modified gravity models where structures grow with different rates on different scales, the genus-smoothing-scale relation should change over time. This can be used to test the gravity models with large-scale structure observations. We study the cases of the f(R) theory, DGP braneworld theory as well as the parameterized post-Friedmann models. We also forecast how the modified gravity models can be constrained with optical/IR or redshifted 21 cm radio surveys in the near future.

  14. Sparse LSSVM in Primal Using Cholesky Factorization for Large-Scale Problems.

    PubMed

    Zhou, Shuisheng

    2016-04-01

    For support vector machine (SVM) learning, the least squares SVM (LSSVM), derived by duality (D-LSSVM), is a widely used model, because it has an explicit solution. One obvious limitation of the model is that the solution lacks sparseness, which limits it from training large-scale problems efficiently. In this paper, we derive an equivalent LSSVM model in the primal space (P-LSSVM) by the representer theorem and prove that P-LSSVM can be solved exactly at some sparse solutions for problems with low-rank kernel matrices. Two algorithms are proposed for finding the sparse (approximate) solution of P-LSSVM by Cholesky factorization. One is based on the decomposition of the kernel matrix K as PP^T, with the best low-rank matrix P obtained approximately by pivoted Cholesky factorization. The other is based on solving P-LSSVM by approximating the Cholesky factorization of the Hessian matrix with a rank-one update scheme. For linear learning problems, theoretical analysis and experimental results support that P-LSSVM can give the sparsest solutions of all SVM learners. Experimental results on some large-scale nonlinear training problems show that our algorithms, based on P-LSSVM, can converge to acceptable test accuracies at very sparse solutions, with a sparsity level <1% and even as little as 0.01%. Hence, our algorithms are a better choice for large-scale training problems. PMID:25966482
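
    The first algorithm's building block, a low-rank factorisation K ≈ PP^T by pivoted Cholesky, can be sketched generically: greedily pick the column with the largest residual diagonal and eliminate it. This is the textbook procedure, not the authors' exact code; the kernel and rank below are illustrative.

      import numpy as np

      def pivoted_cholesky(K, rank, tol=1e-10):
          n = K.shape[0]
          P = np.zeros((n, rank))
          d = np.diag(K).astype(float).copy()       # residual diagonal of K - P P^T
          for j in range(rank):
              i = int(np.argmax(d))                 # greedy pivot: largest residual
              if d[i] < tol:
                  return P[:, :j]
              P[:, j] = (K[:, i] - P @ P[i]) / np.sqrt(d[i])
              d -= P[:, j] ** 2
          return P

      rng = np.random.default_rng(4)
      X = rng.standard_normal((300, 2))
      K = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1))   # RBF kernel matrix
      P = pivoted_cholesky(K, rank=30)
      print(np.abs(K - P @ P.T).max())              # error shrinks as the rank grows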

  15. Solution of nonlinear finite difference ocean models by optimization methods with sensitivity and observational strategy analysis

    NASA Technical Reports Server (NTRS)

    Schroeter, Jens; Wunsch, Carl

    1986-01-01

    The paper studies, with finite difference nonlinear circulation models, the uncertainties in interesting flow properties, such as western boundary current transport, potential and kinetic energy, owing to the uncertainty in the driving surface boundary condition. The procedure is based upon nonlinear optimization methods. The same calculations permit quantitative study of the importance of new information as a function of type, region of measurement and accuracy, providing a method to study various observing strategies. Uncertainty in a model parameter, the bottom friction coefficient, is studied in conjunction with uncertain measurements. The model is free to adjust the bottom friction coefficient such that an objective function is minimized while fitting a set of data to within prescribed bounds. The relative importance of the accuracy of the knowledge about the friction coefficient with respect to various kinds of observations is then quantified, and the possible range of the friction coefficients is calculated.
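
    Conceptually, the friction-coefficient study is a bound-constrained nonlinear fit. A scalar stand-in using a standard optimiser looks as follows, with the full circulation model replaced by a hypothetical exponential decay and made-up observations.

      import numpy as np
      from scipy.optimize import minimize

      t = np.array([1.0, 2.0, 3.0])
      obs = np.array([0.90, 0.82, 0.74])            # hypothetical transport observations

      def objective(p):
          r = p[0]                                  # friction coefficient to adjust
          return np.sum((np.exp(-r * t) - obs) ** 2)

      res = minimize(objective, x0=[0.5], bounds=[(0.01, 1.0)], method="L-BFGS-B")
      print(res.x)   # friction value consistent with the data, within its bounds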

  16. The optimal antenna for nonlinear spectroscopy of weakly and strongly scattering nanoobjects

    NASA Astrophysics Data System (ADS)

    Schumacher, Thorsten; Brandstetter, Matthias; Wolf, Daniela; Kratzer, Kai; Hentschel, Mario; Giessen, Harald; Lippitz, Markus

    2016-04-01

    Optical nanoantennas, i.e., arrangements of plasmonic nanostructures, promise to enhance light-matter interaction on the nanoscale. Nonlinear optical spectroscopy of single nanoobjects in particular would profit from such an antenna, as nonlinear optical effects are weak even for bulk material and become almost undetectable for single nanoobjects. We investigate the design of optical nanoantennas for transient absorption spectroscopy in two different cases: the mechanical breathing mode of a metal nanodisk and the quantum-confined carrier dynamics in a single CdSe nanowire. In the latter case, an antenna with a resonance at the desired wavelength optimally increases the light intensity at the nanoobject. In the former case, the perturbation of the antenna by the investigated nanosystem cannot be neglected, and off-resonant antennas become most efficient.

  17. Ultra-large-scale Cosmology in Next-generation Experiments with Single Tracers

    NASA Astrophysics Data System (ADS)

    Alonso, David; Bull, Philip; Ferreira, Pedro G.; Maartens, Roy; Santos, Mário G.

    2015-12-01

    Future surveys of large-scale structure will be able to measure perturbations on the scale of the cosmological horizon, and so could potentially probe a number of novel relativistic effects that are negligibly small on sub-horizon scales. These effects leave distinctive signatures in the power spectra of clustering observables and, if measurable, would open a new window on relativistic cosmology. We quantify the size and detectability of the effects for the most relevant future large-scale structure experiments: spectroscopic and photometric galaxy redshift surveys, intensity mapping surveys of neutral hydrogen, and radio continuum surveys. Our forecasts show that next-generation experiments, reaching out to redshifts z ≃ 4, will not be able to detect previously undetected general-relativistic effects by using individual tracers of the density field, although the contribution of weak lensing magnification on large scales should be clearly detectable. We also perform a rigorous joint forecast for the detection of primordial non-Gaussianity through the excess power it produces in the clustering of biased tracers on large scales, finding that uncertainties of σ(f_NL) ∼ 1-2 should be achievable. We study the level of degeneracy of these large-scale effects with several tracer-dependent nuisance parameters, quantifying the minimal priors on the latter that are needed for an optimal measurement of the former. Finally, we discuss the systematic effects that must be mitigated to achieve this level of sensitivity, and some alternative approaches that should help to improve the constraints. The computational tools developed to carry out this study, which requires the full-sky computation of the theoretical angular power spectra for O(100) redshift bins, as well as realistic models of the luminosity function, are publicly available at http://intensitymapping.physics.ox.ac.uk/codes.html.
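
    The structure of such an f_NL forecast can be reduced to a few lines. Below is a minimal single-tracer Fisher sketch under stated assumptions: the standard scale-dependent-bias formula db(k) = 3 f_NL (b - 1) δ_c Ω_m H0² / (c² k² T(k) D(z)), a crude matter power spectrum shape, T(k) ≈ 1 on the relevant scales, and a hypothetical survey volume and number density that are not taken from the paper.

        import numpy as np

        # Single-tracer Fisher forecast for f_NL via scale-dependent bias.
        c, H0 = 2.998e5, 67.0          # km/s and km/s/Mpc
        Om, dc = 0.31, 1.686           # matter density, collapse threshold
        b, D = 2.0, 0.6                # Gaussian bias, growth factor (assumed)
        V = 5.0e10                     # survey volume in Mpc^3 (hypothetical)
        nbar = 1e-4                    # tracer density in Mpc^-3 (hypothetical)

        k = np.logspace(-3.3, -1.0, 200)                        # Mpc^-1
        Pm = 2.0e4 * (k / 0.02) / (1 + (k / 0.02) ** 2) ** 1.5  # toy P_m(k)
        T = np.ones_like(k)            # transfer function ~ 1 on these scales

        def Pg(fnl):
            # Non-Gaussian bias correction: db ~ 1/k^2 on large scales.
            db = 3 * fnl * (b - 1) * dc * Om * H0**2 / (c**2 * k**2 * T * D)
            return (b + db) ** 2 * Pm

        # Gaussian Fisher information, summed over k-shells.
        Nmodes = V * 4 * np.pi * k**2 * np.gradient(k) / (2 * np.pi) ** 3
        varP = 2 * (Pg(0.0) + 1 / nbar) ** 2 / Nmodes
        dPdf = (Pg(1e-3) - Pg(-1e-3)) / 2e-3    # derivative at f_NL = 0
        print("sigma(f_NL) ~", 1 / np.sqrt(np.sum(dPdf**2 / varP)))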

  18. Cumulus moistening, the diurnal cycle, and large-scale tropical dynamics

    NASA Astrophysics Data System (ADS)

    Ruppert, James H., Jr.

    Weak temperature gradient (WTG) vertical motion w_wtg is diagnosed based on the internal diabatic heating in the model. w_wtg is then used to advect model temperature and humidity. w_wtg opposes domain-averaged temperature anomalies via adiabatic warming and cooling, thereby yielding a feedback between the model diabatic heating and the large-scale column moisture source associated with large-scale vertical motion. With a control simulation that successfully replicates a regime of shallow convection similar to nature, it is found through sensitivity tests that the diurnal cycle in tropospheric radiative heating is the dominant driver of both diurnal column moisture variations and nocturnal rainfall in this regime; the latter finding agrees with previous results by Randall et al. The diurnal cycle in SST and surface fluxes, in turn, drives the daytime convective regime, which is distinct from the nocturnal regime in being rooted in the boundary layer. A simulation in which the diurnal cycle is stretched to 48 h amplifies an important nonlinear feedback at work in the diurnal cycle, which is due to the high-amplitude diurnal cycle in column relative humidity (RH). This diurnal cycle in RH limits the amount of evaporation, and hence evaporative cooling, that takes place in the cloud layer. By throttling down this diabatic cooling, the diurnal cycle throttles down the daily-mean moisture sink driven by large-scale subsidence, such that the environment drifts toward a more moist state, all else being equal. When the diurnal cycle is not present, this nonlinear moisture source is weaker and the environment drier. This feedback rectifies diurnal moistening onto longer timescales, thereby linking the diurnal cycle to longer timescales. These findings suggest 1) that the diurnal cycle of moist convection, as observed in DYNAMO, cannot be ruled out as a column moisture source important to MJO initiation, and 2) that proper representation of the diurnal cycle is prerequisite to accurate representation of large-scale
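
    The WTG diagnosis described here can be sketched in a few lines. The version below is a relaxed-WTG closure, in which w_wtg removes horizontal-mean potential temperature anomalies over a fixed timescale and then supplies a large-scale moisture source through vertical advection; this is one common formulation, not necessarily the dissertation's, and all profiles and constants are illustrative.

        import numpy as np

        # Relaxed-WTG closure on a single column.
        z = np.linspace(0.0, 15e3, 61)                 # height (m)
        dz = z[1] - z[0]
        theta_ref = 300.0 + 4.0e-3 * z                 # reference theta (K)
        theta = theta_ref + 0.5 * np.sin(np.pi * z / 15e3)  # warm anomaly
        q = 0.018 * np.exp(-z / 2.5e3)                 # humidity (kg/kg)
        tau = 2.0 * 3600.0                             # relaxation time (s)

        # WTG balance: w_wtg * dtheta_ref/dz = (theta - theta_ref) / tau,
        # so a warm anomaly drives ascent and adiabatic cooling.
        w_wtg = (theta - theta_ref) / (tau * np.gradient(theta_ref, dz))
        # Large-scale moisture source from WTG vertical advection:
        dq_dt = -w_wtg * np.gradient(q, dz)
        print("peak w_wtg (m/s):", w_wtg.max(),
              "column moistening:", np.sum(dq_dt) * dz)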

  19. An evolutionary optimized nonlinear function to improve the linearity of transducer characteristics

    NASA Astrophysics Data System (ADS)

    Abudhahir, A.; Baskar, S.

    2008-04-01

    This paper proposes a nonlinear optimal-function-based algorithm that can replace the electronic circuitry traditionally employed to linearize the characteristics of commonly used temperature transducers such as resistance temperature detectors, thermistors and thermocouples. The function exploits a ratiometric-logarithmic operation for linearization. The optimal parameters of the function are determined using the covariance matrix adaptation evolution strategy (CMA-ES) algorithm. Transducer input-output data are obtained from the Yokogawa handy calibrator model CA 150 and subjected to the proposed algorithm to evaluate its performance. Performance measures such as full-scale error and mean square error are used to compare the proposed technique with other methods reported for transducers. The linearization algorithm was implemented using the LabVIEW 7.1 Professional Development System on a personal computer, which provides the facility to interface with the National Instruments data acquisition module NI DAQCard PCI-6221. Experimental results reveal that the proposed evolutionary optimized nonlinear-function-based software linearizer outperforms conventional hardware and software methods. The results obtained with the CMA-ES algorithm are also compared with those of a real-coded genetic algorithm; the comparison shows that CMA-ES is more consistent in determining the best solution for the proposed ratiometric-logarithmic function with reasonable computation time.
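
    The paper's exact ratiometric-logarithmic function and CMA-ES settings are not reproduced here. The sketch below assumes a log-ratio form T_hat = p0 + p1*ln(v/(p2 - v)) for a thermistor in a voltage divider, fits it to synthetic calibration data, and uses SciPy's Nelder-Mead as a simple stand-in for CMA-ES (the pycma package would be closer to the paper's method).

        import numpy as np
        from scipy.optimize import minimize

        # Synthetic thermistor-in-divider calibration data (a hypothetical
        # stand-in for the calibrator readings used in the paper).
        T = np.linspace(0.0, 100.0, 51) + 273.15        # temperature (K)
        R25, B, Rfix, Vref = 10e3, 3950.0, 10e3, 5.0
        R = R25 * np.exp(B * (1.0 / T - 1.0 / 298.15))  # NTC thermistor
        v = Vref * Rfix / (Rfix + R)                    # divider voltage

        # Assumed ratiometric-logarithmic linearizer.
        estimate = lambda p: p[0] + p[1] * np.log(v / (p[2] - v))

        def mse(p):
            if p[2] <= v.max():          # keep the log argument positive
                return 1e9
            return np.mean((estimate(p) - T) ** 2)

        # Nelder-Mead as a simple stand-in for CMA-ES.
        res = minimize(mse, x0=[300.0, 20.0, 5.5], method="Nelder-Mead",
                       options={"maxiter": 4000, "xatol": 1e-10, "fatol": 1e-12})
        err = estimate(res.x) - T
        print("params:", res.x)
        print("full-scale error (%):", 100 * np.abs(err).max() / np.ptp(T))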

  20. An optimization approach for analysing nonlinear stability with transition to turbulence in fluids as an exemplar

    NASA Astrophysics Data System (ADS)

    Kerswell, R. R.; Pringle, C. C. T.; Willis, A. P.

    2014-08-01

    This article introduces and reviews recent work using a simple optimization technique for analysing the nonlinear stability of a state in a dynamical system. The technique can be used to identify the most efficient way to disturb a system such that it transits from one stable state to another. The key idea is introduced within the framework of a finite-dimensional set of ordinary differential equations (ODEs) and then illustrated for a very simple system of two ODEs which possesses bistability. The transition-to-turbulence problem in fluid mechanics is then used to show how the technique can be formulated for a spatially extended system described by a set of partial differential equations (the well-known Navier-Stokes equations). Within that context, the optimization technique bridges the gap between (linear) optimal perturbation theory and the (nonlinear) dynamical systems approach to fluid flows. The fact that the technique has recently been shown to work in this very high-dimensional setting augurs well for its utility in other physical systems.
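
    The procedure the review describes can be mimicked on a toy bistable system: for a fixed disturbance amplitude, search over directions for a disturbance that triggers transition, then bisect on the amplitude to approximate the "minimal seed". The two-ODE model below is an illustrative stand-in, not necessarily the article's example, and the brute-force direction scan replaces the gradient-based optimization needed in high dimensions.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Generic bistable two-ODE system with stable states near (0, 0)
        # ("laminar") and (0.8, 0.4) ("turbulent").
        def rhs(t, x):
            u, v = x
            return [u * (1 - u) * (u - 0.3) - 0.2 * v, u - 2.0 * v]

        def transitions(x0, t_end=100.0):
            sol = solve_ivp(rhs, (0.0, t_end), x0)
            return sol.y[0, -1] > 0.4    # ended in the other basin?

        def any_direction_transitions(d, n_angles=72):
            # Scan disturbance directions on the circle of radius d.
            th = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
            return any(transitions([d * np.cos(a), d * np.sin(a)]) for a in th)

        # Bisect on the disturbance amplitude for the minimal seed.
        lo, hi = 0.0, 1.0               # hi is known to trigger transition
        for _ in range(10):
            mid = 0.5 * (lo + hi)
            if any_direction_transitions(mid):
                hi = mid
            else:
                lo = mid
        print("minimal seed amplitude ~", hi)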