Nonlinear large-scale optimization with WORHP
NASA Astrophysics Data System (ADS)
Nikolayzik, Tim; Büskens, Christof; Gerdts, Matthias
Nonlinear optimization has grown into a key technology in many areas of the aerospace industry, e.g. satellite control, shape optimization, aerodynamics, trajectory planning, reentry problems, and interplanetary flights. One of the most extensive areas is the optimization of trajectories for aerospace applications. These problems typically are discretized optimal control problems, which lead to large sparse nonlinear optimization problems. In the end, all these problems from different areas can be described by the same general formulation as a nonlinear optimization problem. WORHP is designed to solve nonlinear optimization problems with more than one million variables and one million constraints. WORHP uses many advanced techniques, e.g. reverse communication, to make the optimization process as efficient and as controllable by the user as possible. The solver has nine different interfaces, e.g. to MATLAB/Simulink and AMPL. Tests have shown that WORHP is a very robust and promising solver. Several examples from space applications are presented.
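Reverse communication, mentioned above, inverts the usual calling convention: the solver returns to the caller with a request (e.g. "evaluate the gradient"), so the user keeps full control over every function evaluation. A minimal sketch, with a toy gradient-descent "solver" standing in for WORHP's far more sophisticated machinery; all names here are illustrative, not WORHP's API:

```python
import numpy as np

EVAL_GRAD, DONE = "eval_grad", "done"

class RCDescent:
    """Toy reverse-communication solver: gradient descent in which the
    caller owns the loop and supplies every gradient evaluation."""
    def __init__(self, x0, step=0.1, iters=100):
        self.x = np.asarray(x0, dtype=float)
        self.step, self.left = step, iters
        self.action = EVAL_GRAD          # what the solver asks for next
    def advance(self, grad=None):
        if self.left == 0:
            self.action = DONE           # nothing left to request
            return
        self.x = self.x - self.step * grad
        self.left -= 1

solver = RCDescent([4.0, -3.0])
while solver.action != DONE:
    g = 2.0 * solver.x                   # caller evaluates grad of ||x||^2
    solver.advance(g)
# the iterates contract toward the minimizer of ||x||^2 at the origin
```

Because the caller owns the loop, it can cache evaluations, run them in parallel, or abort at any point, which is exactly what makes the pattern attractive for expensive simulations.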
Large scale nonlinear programming for the optimization of spacecraft trajectories
NASA Astrophysics Data System (ADS)
Arrieta-Camacho, Juan Jose
Future research directions are identified, involving the automatic scheduling and optimization of trajectory correction maneuvers. The sensitivity information provided by the methodology is expected to be invaluable in such research pursuits. The collocation scheme and nonlinear programming algorithm presented in this work complement other existing methodologies by providing reliable and efficient numerical methods able to handle large-scale, nonlinear dynamic models.
Large Scale Non-Linear Programming for PDE Constrained Optimization
VAN BLOEMEN WAANDERS, BART G.; BARTLETT, ROSCOE A.; LONG, KEVIN R.; BOGGS, PAUL T.; SALINGER, ANDREW G.
2002-10-01
Three years of large-scale PDE-constrained optimization research and development are summarized in this report. We have developed an optimization framework for 3 levels of SAND optimization and developed a powerful PDE prototyping tool. The optimization algorithms have been interfaced and tested on CVD problems using a chemically reacting fluid flow simulator resulting in an order of magnitude reduction in compute time over a black box method. Sandia's simulation environment is reviewed by characterizing each discipline and identifying a possible target level of optimization. Because SAND algorithms are difficult to test on actual production codes, a symbolic simulator (Sundance) was developed and interfaced with a reduced-space sequential quadratic programming framework (rSQP++) to provide a PDE prototyping environment. The power of Sundance/rSQP++ is demonstrated by applying optimization to a series of different PDE-based problems. In addition, we show the merits of SAND methods by comparing seven levels of optimization for a source-inversion problem using Sundance and rSQP++. Algorithmic results are discussed for hierarchical control methods. The design of an interior point quadratic programming solver is presented.
On large-scale nonlinear programming techniques for solving optimal control problems
Faco, J.L.D.
1994-12-31
The formulation of decision problems by Optimal Control Theory allows the consideration of their dynamic structure and parameter estimation. This paper deals with techniques for choosing search directions in the iterative solution of discrete-time optimal control problems. A unified formulation incorporates nonlinear performance criteria and dynamic equations, time delays, bounded state and control variables, a free planning horizon, and a variable initial state vector. In general, such problems are characterized by a large number of variables, especially when they arise from the discretization of continuous-time optimal control or calculus of variations problems. In a GRG context, the staircase structure of the Jacobian matrix of the dynamic equations is exploited in the choice of basic and superbasic variables and when changes of basis occur along the process. The search directions of the bound-constrained nonlinear programming problem in the reduced space of the superbasic variables are computed by large-scale NLP techniques. A modified Polak-Ribiere conjugate gradient method and a limited-storage quasi-Newton BFGS method are analyzed, and modifications to deal with the bounds on the variables are suggested, based on projected gradient devices with specific linesearches. Some practical models are presented for electric generation planning and fishery management, and the application of the code GRECO - Gradient REduit pour la Commande Optimale - is discussed.
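The projected gradient device mentioned for the bound-constrained subproblem can be sketched in a few lines: step along the negative gradient, then clip back into the box. This is an illustrative fixed-step sketch, not GRECO's actual linesearch:

```python
import numpy as np

def projected_gradient(grad, x0, lb, ub, step=0.1, iters=500):
    """Minimize a smooth function over box constraints by projected
    gradient descent: move along -grad, then project onto [lb, ub]."""
    x = np.clip(np.asarray(x0, dtype=float), lb, ub)
    for _ in range(iters):
        x = np.clip(x - step * grad(x), lb, ub)
    return x

# Illustrative bound-constrained quadratic: min ||x - c||^2 s.t. 0 <= x <= 1
c = np.array([1.5, -0.3, 0.4])
grad = lambda x: 2.0 * (x - c)
x_star = projected_gradient(grad, np.zeros(3), 0.0, 1.0)
# the solution simply clips c into the box: approximately [1.0, 0.0, 0.4]
```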
NASA Technical Reports Server (NTRS)
Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)
2002-01-01
The purpose of this research under the NASA Small Business Innovation Research program was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s external penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem in excess of 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.
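The exterior penalty idea behind such a method can be illustrated as follows: violated constraints are penalized quadratically, and the penalty weight grows between unconstrained solves. A hedged toy sketch (SciPy's BFGS stands in for the memory-frugal unconstrained solver; the problem and schedule are illustrative, not BIGDOT's):

```python
import numpy as np
from scipy.optimize import minimize

def exterior_penalty(f, cons, x0, r0=1.0, growth=10.0, outer=6):
    """Exterior penalty method: solve a sequence of unconstrained
    problems  f(x) + r * sum(max(0, g_i(x))^2)  with increasing r.
    Only violated constraints (g_i > 0) contribute to the penalty."""
    x, r = np.asarray(x0, dtype=float), r0
    for _ in range(outer):
        pen = lambda y: f(y) + r * sum(max(0.0, g(y)) ** 2 for g in cons)
        x = minimize(pen, x, method="BFGS").x   # warm start each stage
        r *= growth
    return x

# Toy problem: min x1^2 + x2^2  s.t.  x1 + x2 >= 1  (g = 1 - x1 - x2 <= 0)
f = lambda x: x[0] ** 2 + x[1] ** 2
g = lambda x: 1.0 - x[0] - x[1]
x_star = exterior_penalty(f, [g], [0.0, 0.0])
# iterates approach the constrained optimum (0.5, 0.5) from outside
```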
Large Scale Shape Optimization for Accelerator Cavities
Akcelik, Volkan; Lee, Lie-Quan; Li, Zenghai; Ng, Cho; Xiao, Li-Ling; Ko, Kwok; /SLAC
2011-12-06
We present a shape optimization method for designing accelerator cavities with large scale computations. The objective is to find the best accelerator cavity shape with the desired spectral response, such as with the specified frequencies of resonant modes, field profiles, and external Q values. The forward problem is the large scale Maxwell equation in the frequency domain. The design parameters are the CAD parameters defining the cavity shape. We develop scalable algorithms with a discrete adjoint approach and use the quasi-Newton method to solve the nonlinear optimization problem. Two realistic accelerator cavity design examples are presented.
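The pairing of adjoint gradients with a quasi-Newton optimizer can be sketched generically: an exact gradient (here via the chain rule; in the paper via a discrete adjoint solve) is handed to the optimizer, so the expensive forward model is never finite-differenced. The "forward model" below is a toy stand-in, not a Maxwell solver:

```python
import numpy as np
from scipy.optimize import minimize

target = np.array([1.3, 2.9])        # desired "spectral response"

def misfit(p):
    # toy forward model: response(p) = p^2, misfit to the target
    r = p ** 2 - target
    return 0.5 * np.dot(r, r)

def misfit_grad(p):
    # exact gradient by the chain rule (an adjoint solve in the paper)
    return (p ** 2 - target) * 2.0 * p

res = minimize(misfit, x0=np.array([1.0, 1.0]),
               jac=misfit_grad, method="L-BFGS-B")
# res.x approaches sqrt(target), about (1.140, 1.703)
```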
Carlberg, Kevin Thomas; Drohmann, Martin; Tuminaro, Raymond S.; Boggs, Paul T.; Ray, Jaideep; van Bloemen Waanders, Bart Gustaaf
2014-10-01
Model reduction for dynamical systems is a promising approach for reducing the computational cost of large-scale physics-based simulations to enable high-fidelity models to be used in many-query (e.g., Bayesian inference) and near-real-time (e.g., fast-turnaround simulation) contexts. While model reduction works well for specialized problems such as linear time-invariant systems, it is much more difficult to obtain accurate, stable, and efficient reduced-order models (ROMs) for systems with general nonlinearities. This report describes several advances that enable nonlinear ROMs to be deployed in a variety of time-critical settings. First, we present an error bound for the Gauss-Newton with Approximated Tensors (GNAT) nonlinear model reduction technique. This bound allows the state-space error for the GNAT method to be quantified when applied with the backward Euler time-integration scheme. Second, we present a methodology for preserving classical Lagrangian structure in nonlinear model reduction. This technique guarantees that important properties--such as energy conservation and symplectic time-evolution maps--are preserved when performing model reduction for models described by a Lagrangian formalism (e.g., molecular dynamics, structural dynamics). Third, we present a novel technique for decreasing the temporal complexity--defined as the number of Newton-like iterations performed over the course of the simulation--by exploiting time-domain data. Fourth, we describe a novel method for refining projection-based reduced-order models a posteriori using a goal-oriented framework similar to mesh-adaptive h-refinement in finite elements. The technique allows the ROM to generate arbitrarily accurate solutions, thereby providing the ROM with a 'failsafe' mechanism in the event of insufficient training data. Finally, we present the reduced-order model error surrogate (ROMES) method for statistically quantifying reduced-order-model errors.
Distributed Coordinated Control of Large-Scale Nonlinear Networks
Kundu, Soumya; Anghel, Marian
2015-11-08
We provide a distributed coordinated approach to the stability analysis and control design of large-scale nonlinear dynamical systems by using a vector Lyapunov functions approach. In this formulation the large-scale system is decomposed into a network of interacting subsystems, and the stability of the system is analyzed through a comparison system. However, finding such a comparison system is not trivial. In this work, we propose a sum-of-squares based, completely decentralized approach for computing the comparison systems for networks of nonlinear systems. Moreover, based on the comparison systems, we introduce a distributed optimal control strategy in which the individual subsystems (agents) coordinate with their immediate neighbors to design local control policies that can exponentially stabilize the full system under initial disturbances. We illustrate the control algorithm on a network of interacting Van der Pol systems.
Robust large-scale parallel nonlinear solvers for simulations.
Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson
2005-11-01
This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any existing linear solver, which makes it simple to write
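SciPy ships a dense-Jacobian variant of Broyden's "good" method, which illustrates the key property exploited above: only residual evaluations are needed, the Jacobian being replaced by a secant approximation built up from successive residuals. A small sketch on a 2x2 nonlinear system (the system and tolerances are illustrative):

```python
import numpy as np
from scipy.optimize import broyden1

def residual(x):
    # small nonlinear system:  x0^2 + x1 = 2,  x0 + x1^2 = 2
    return np.array([x[0] ** 2 + x[1] - 2.0,
                     x[0] + x[1] ** 2 - 2.0])

# No Jacobian routine is supplied anywhere: Broyden's method maintains
# its own secant approximation and updates it after every step.
sol = broyden1(residual, np.array([1.2, 1.2]), f_tol=1e-9)
# (1, 1) solves both equations exactly
```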
Implicit solvers for large-scale nonlinear problems
Keyes, D E; Reynolds, D; Woodward, C S
2006-07-13
Computational scientists are grappling with increasingly complex, multi-rate applications that couple such physical phenomena as fluid dynamics, electromagnetics, radiation transport, chemical and nuclear reactions, and wave and material propagation in inhomogeneous media. Parallel computers with large storage capacities are paving the way for high-resolution simulations of coupled problems; however, hardware improvements alone will not prove enough to enable simulations based on brute-force algorithmic approaches. To accurately capture nonlinear couplings between dynamically relevant phenomena, often while stepping over rapid adjustments to quasi-equilibria, simulation scientists are increasingly turning to implicit formulations that require a discrete nonlinear system to be solved for each time step or steady state solution. Recent advances in iterative methods have made fully implicit formulations a viable option for solution of these large-scale problems. In this paper, we overview one of the most effective iterative methods, Newton-Krylov, for nonlinear systems and point to software packages with its implementation. We illustrate the method with an example from magnetically confined plasma fusion and briefly survey other areas in which implicit methods have bestowed important advantages, such as allowing high-order temporal integration and providing a pathway to sensitivity analyses and optimization. Lastly, we overview algorithm extensions under development motivated by current SciDAC applications.
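SciPy's `newton_krylov` implements the Jacobian-free Newton-Krylov idea described here: each Newton step solves its linear system with a Krylov method, using finite-difference matrix-vector products so the Jacobian is never formed. A small sketch on a discretized 1-D reaction-diffusion residual (the grid and nonlinearity are illustrative):

```python
import numpy as np
from scipy.optimize import newton_krylov

def F(u):
    # discrete residual of u'' = u^3 on a uniform grid, u = 1 on the
    # boundary:  u_{i-1} - 2 u_i + u_{i+1} - h^2 u_i^3 = 0
    n = len(u)
    h = 1.0 / (n + 1)
    up = np.concatenate(([1.0], u, [1.0]))   # Dirichlet boundary values
    res = np.empty(n)
    for i in range(n):
        res[i] = up[i] - 2.0 * up[i + 1] + up[i + 2] - h ** 2 * up[i + 1] ** 3
    return res

# Only F is supplied; Jacobian-vector products are approximated by
# finite differences inside the Krylov (GMRES-type) inner solver.
u = newton_krylov(F, np.ones(50), f_tol=1e-9)
# the converged residual meets the requested tolerance
```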
The workshop on iterative methods for large scale nonlinear problems
Walker, H.F.; Pernice, M.
1995-12-01
The aim of the workshop was to bring together researchers working on large scale applications with numerical specialists of various kinds. Applications that were addressed included reactive flows (combustion and other chemically reacting flows, tokamak modeling), porous media flows, cardiac modeling, chemical vapor deposition, image restoration, macromolecular modeling, and population dynamics. Numerical areas included Newton iterative (truncated Newton) methods, Krylov subspace methods, domain decomposition and other preconditioning methods, large scale optimization and optimal control, and parallel implementations and software. This report offers a brief summary of workshop activities and information about the participants. Interested readers are encouraged to look into the online proceedings available at http://www.usi.utah.edu/logan.proceedings. There, the material offered here is augmented with hypertext abstracts that include links to locations such as speakers' home pages, PostScript copies of talks and papers, cross-references to related talks, and other information about topics addressed at the workshop.
Global smoothing and continuation for large-scale molecular optimization
More, J.J.; Wu, Zhijun
1995-10-01
We discuss the formulation of optimization problems that arise in the study of distance geometry, ionic systems, and molecular clusters. We show that continuation techniques based on global smoothing are applicable to these molecular optimization problems, and we outline the issues that must be resolved in the solution of large-scale molecular optimization problems.
A multilevel optimization of large-scale dynamic systems
NASA Technical Reports Server (NTRS)
Siljak, D. D.; Sundareshan, M. K.
1976-01-01
A multilevel feedback control scheme is proposed for optimization of large-scale systems composed of a number of (not necessarily weakly coupled) subsystems. Local controllers are used to optimize each subsystem, ignoring the interconnections. Then, a global controller may be applied to minimize the effect of interconnections and improve the performance of the overall system. At the cost of suboptimal performance, this optimization strategy ensures invariance of suboptimality and stability of the systems under structural perturbations whereby subsystems are disconnected and again connected during operation.
Large-Scale Optimization for Bayesian Inference in Complex Systems
Willcox, Karen; Marzouk, Youssef
2013-11-12
The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT--Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas--Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their
Geospatial Optimization of Siting Large-Scale Solar Projects
Macknick, J.; Quinby, T.; Caulfield, E.; Gerritsen, M.; Diffendorfer, J.; Haines, S.
2014-03-01
Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.
The GRG approach for large-scale optimization
Drud, A.
1994-12-31
The Generalized Reduced Gradient (GRG) algorithm for general Nonlinear Programming (NLP) has been used successfully for over 25 years. The ideas of the original GRG algorithm have been modified and have absorbed developments in unconstrained optimization, linear programming, sparse matrix techniques, etc. The talk will review the essential aspects of the GRG approach and will discuss current development trends, especially related to very large models. Examples will be based on the CONOPT implementation.
Optimal Wind Energy Integration in Large-Scale Electric Grids
NASA Astrophysics Data System (ADS)
Albaijat, Mohammad H.
The major concern in electric grid operation is operating in the most economical and reliable fashion to ensure affordability and continuity of electricity supply. This dissertation investigates challenges that affect electric grid reliability and economic operations: 1. congestion of transmission lines, 2. transmission line expansion, 3. large-scale wind energy integration, and 4. optimal placement of Phasor Measurement Units (PMUs) for highest electric grid observability. Performing congestion analysis aids in evaluating the required increase of transmission line capacity in electric grids. However, it is necessary to evaluate methods for expanding transmission line capacity to ensure optimal electric grid operation. The expansion of transmission line capacity must therefore enable grid operators to provide low-cost electricity while maintaining reliable operation of the electric grid. Because congestion affects the reliability of delivering power and increases its cost, congestion analysis in electric grid networks is an important subject. Consequently, next-generation electric grids require novel methodologies for studying and managing congestion. We suggest a novel method of long-term congestion management in large-scale electric grids. Owing to the complexity and size of transmission line systems and the competitive nature of current grid operation, it is important for electric grid operators to determine how much transmission line capacity to add. The traditional questions requiring answers are "where" to add, "how much transmission line capacity" to add, and "at which voltage level". Because of electric grid deregulation, transmission line expansion is more complicated, as it is now open to investors whose main interest is to generate revenue by building new transmission lines. Adding new transmission capacity will help the system to relieve the transmission system congestion, create
Cloud-based large-scale air traffic flow optimization
NASA Astrophysics Data System (ADS)
Cao, Yi
The ever-increasing traffic demand makes the efficient use of airspace an imperative mission, and this paper presents an effort in response to this call. Firstly, a new aggregate model, called the Link Transmission Model (LTM), is proposed, which models the nationwide traffic as a network of flight routes identified by origin-destination pairs. The traversal time of a flight route is taken to be the mode of the distribution of historical flight records, estimated by Kernel Density Estimation. As this simplification abstracts away physical trajectory details, the complexity of modeling is drastically decreased, resulting in efficient traffic forecasting. The predictive capability of the LTM is validated against recorded traffic data. Secondly, a nationwide traffic flow optimization problem with airport and en route capacity constraints is formulated based on the LTM. The optimization problem aims at alleviating traffic congestion with minimal global delays. This problem is intractable due to millions of variables. A dual decomposition method is applied to decompose the large-scale problem so that the subproblems become solvable. However, the whole problem is still computationally expensive to solve, since each subproblem is a smaller integer programming problem that pursues integer solutions. Solving an integer programming problem is known to be far more time-consuming than solving its linear relaxation. In addition, sequential execution on a standalone computer leads to a linear runtime increase as the problem size grows. To address the computational efficiency problem, a parallel computing framework is designed which accommodates concurrent executions via multithreaded programming. The multithreaded version is compared with its monolithic version to show decreased runtime. Finally, an open-source cloud computing framework, Hadoop MapReduce, is employed for better scalability and reliability. This framework is an "off-the-shelf" parallel computing model
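The dual decomposition step can be sketched on a two-variable toy: relax the coupling capacity constraint with a multiplier, solve the now-independent subproblems in closed form, and drive the multiplier by projected subgradient ascent (the delay-cost model below is illustrative only, not the paper's formulation):

```python
import numpy as np

# Toy coupled problem:  min  x1^2 + 2*x2^2   s.t.  x1 + x2 >= 3,  x >= 0.
# Relaxing the coupling constraint with multiplier lam makes the
# problem separable into per-variable subproblems.
a, d = np.array([1.0, 2.0]), 3.0

def subproblem(lam):
    # min_x a_i x^2 - lam*x over x >= 0  has closed form  x = lam/(2 a_i)
    return np.maximum(0.0, lam / (2.0 * a))

lam, step = 0.0, 0.5
for _ in range(200):
    x = subproblem(lam)                          # solve subproblems
    lam = max(0.0, lam + step * (d - x.sum()))   # subgradient ascent
# optimum: lam = 4, x = (2, 1); the cheaper variable absorbs more load
```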
Nonlinear Generation of shear flows and large scale magnetic fields by small scale turbulence in the ionosphere
NASA Astrophysics Data System (ADS)
Aburjania, G.
2009-04-01
EGU2009-233. Contact: George Aburjania, g.aburjania@gmail.com, aburj@mymail.ge
Wang, Jian; Wang, Xiaolong; Jiang, Aipeng; Jiangzhou, Shu; Li, Ping
2014-01-01
A large-scale parallel-unit seawater reverse osmosis desalination plant contains many reverse osmosis (RO) units. If the operating conditions change, these RO units will no longer work at the optimal design points computed before the plant was built. The operational optimization problem (OOP) of the plant is to find a schedule of operation that minimizes the total running cost when such changes occur. In this paper, the OOP is modelled as a mixed-integer nonlinear programming problem. A two-stage differential evolution algorithm is proposed to solve this OOP. Experimental results show that the proposed method is satisfactory in solution quality. PMID:24701180
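SciPy's `differential_evolution` illustrates why a derivative-free evolutionary method suits such scheduling problems: the objective below is discontinuous because one variable is rounded to a unit count, which would defeat gradient-based solvers. The plant model is a toy stand-in, not the paper's RO model:

```python
import numpy as np
from scipy.optimize import differential_evolution

def cost(x):
    n_on = round(x[0])              # number of RO units on (integer-like)
    load = x[1]                     # per-unit load fraction
    produced = n_on * 100.0 * load              # permeate output, say
    energy = n_on * (20.0 + 80.0 * load ** 2)   # fixed + load-dependent cost
    shortfall = max(0.0, 250.0 - produced)      # unmet demand
    return energy + 10.0 * shortfall

res = differential_evolution(cost, bounds=[(1, 5), (0.5, 1.0)], seed=1)
# in this toy model the best schedules run more units at lower load
```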
Adaptive Fault-Tolerant Control of Uncertain Nonlinear Large-Scale Systems With Unknown Dead Zone.
Chen, Mou; Tao, Gang
2016-08-01
In this paper, an adaptive neural fault-tolerant control scheme is proposed and analyzed for a class of uncertain nonlinear large-scale systems with unknown dead zone and external disturbances. To tackle the unknown nonlinear interaction functions in the large-scale system, a radial basis function neural network (RBFNN) is employed to approximate them. To further handle the unknown approximation errors and the effects of the unknown dead zone and external disturbances, integrated as compounded disturbances, corresponding disturbance observers are developed to estimate them. Based on the outputs of the RBFNN and the disturbance observers, the adaptive neural fault-tolerant control scheme is designed for uncertain nonlinear large-scale systems using a decentralized backstepping technique. The closed-loop stability of the adaptive control system is rigorously proved via Lyapunov analysis, and satisfactory tracking performance is achieved under the integrated effects of unknown dead zone, actuator fault, and unknown external disturbances. Simulation results of a mass-spring-damper system are given to illustrate the effectiveness of the proposed adaptive neural fault-tolerant control scheme for uncertain nonlinear large-scale systems.
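The approximation role of the RBFNN can be illustrated offline: an unknown nonlinearity is represented as a weighted sum of Gaussian bumps. In the paper the weights adapt online inside the control loop; here, for a self-contained sketch, they are fit by linear least squares on sampled data:

```python
import numpy as np

def rbf_features(x, centers, width=0.5):
    # Gaussian basis: phi_j(x) = exp(-(x - c_j)^2 / (2 width^2))
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))

centers = np.linspace(-2.0, 2.0, 15)
x_train = np.linspace(-2.0, 2.0, 200)
y_train = np.tanh(2.0 * x_train) + 0.3 * np.sin(3.0 * x_train)  # "unknown" nonlinearity

Phi = rbf_features(x_train, centers)
w, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)  # batch fit of weights
y_hat = Phi @ w
# the RBF expansion tracks the nonlinearity closely on [-2, 2]
```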
Iterative methods for large scale nonlinear and linear systems. Final report, 1994--1996
Walker, H.F.
1997-09-01
The major goal of this research has been to develop improved numerical methods for the solution of large-scale systems of linear and nonlinear equations, such as occur almost ubiquitously in the computational modeling of physical phenomena. The numerical methods of central interest have been Krylov subspace methods for linear systems, which have enjoyed great success in many large-scale applications, and Newton-Krylov methods for nonlinear problems, which use Krylov subspace methods to solve approximately the linear systems that characterize Newton steps. Krylov subspace methods have undergone a remarkable development over the last decade or so and are now very widely used for the iterative solution of large-scale linear systems, particularly those that arise in the discretization of partial differential equations (PDEs) that occur in computational modeling. Newton-Krylov methods have enjoyed parallel success and are currently used in many nonlinear applications of great scientific and industrial importance. In addition to their effectiveness on important problems, Newton-Krylov methods also offer a nonlinear framework within which to transfer to the nonlinear setting any advances in Krylov subspace methods or preconditioning techniques, or new algorithms that exploit advanced machine architectures. This research has resulted in a number of improved Krylov and Newton-Krylov algorithms together with applications of these to important linear and nonlinear problems.
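The appeal of Krylov subspace methods noted here is that they touch the matrix only through matrix-vector products. A minimal sketch with conjugate gradients (itself a Krylov method) on a sparse 1-D Laplacian, which could equally be a matrix-free PDE operator:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

# Sparse SPD system: the 1-D Laplacian stencil [-1, 2, -1].
n = 200
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# CG builds its Krylov subspace from products A @ v only; the matrix
# is never factored or modified.
x, info = cg(A, b, maxiter=2000)
# info == 0 signals convergence to the default relative tolerance
```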
Solving Large Scale Nonlinear Eigenvalue Problem in Next-Generation Accelerator Design
Liao, Ben-Shan; Bai, Zhaojun; Lee, Lie-Quan; Ko, Kwok; /SLAC
2006-09-28
A number of numerical methods, including inverse iteration, the method of successive linear problems, and the nonlinear Arnoldi algorithm, are studied in this paper to solve a large-scale nonlinear eigenvalue problem arising from the finite element analysis of resonant frequencies and external Q_e values of a waveguide-loaded cavity in the next-generation accelerator design. The authors present a nonlinear Rayleigh-Ritz iterative projection algorithm, NRRIT for short, and demonstrate that it is the most promising approach for a model-scale cavity design. The NRRIT algorithm is an extension of the nonlinear Arnoldi algorithm due to Voss. Computational challenges of solving such a nonlinear eigenvalue problem for a full-scale cavity design are outlined.
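Of the candidate methods listed, inverse iteration is the simplest to sketch: repeatedly solving (A - sigma*I) y = x amplifies the eigenvector whose eigenvalue lies closest to the shift. The linear sketch below shows only this core mechanism; the paper's problem is additionally nonlinear in the eigenvalue:

```python
import numpy as np

def inverse_iteration(A, sigma, iters=50):
    """Approximate the eigenpair of A with eigenvalue nearest the shift
    sigma by repeatedly solving (A - sigma*I) y = x and normalizing."""
    n = A.shape[0]
    x = np.ones(n) / np.sqrt(n)
    M = A - sigma * np.eye(n)
    for _ in range(iters):
        y = np.linalg.solve(M, x)
        x = y / np.linalg.norm(y)
    lam = x @ A @ x                  # Rayleigh quotient estimate
    return lam, x

A = np.diag([1.0, 3.0, 7.0]) + 0.1   # small symmetric test matrix
lam, v = inverse_iteration(A, sigma=3.2)
# lam approximates the eigenvalue of A nearest the shift 3.2
```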
Small parametric model for nonlinear dynamics of large scale cyclogenesis with wind speed variations
NASA Astrophysics Data System (ADS)
Erokhin, Nikolay; Shkevov, Rumen; Zolnikova, Nadezhda; Mikhailovskaya, Ludmila
2016-07-01
A numerical investigation of a self-consistent small parametric model (SPM) for regional large-scale cyclogenesis (RLSC) is performed, using coupled nonlinear equations for the mean wind speed and the ocean surface temperature in a tropical cyclone (TC). These equations can describe different scenarios of the temporal dynamics of a powerful atmospheric vortex during its full life cycle. The numerical calculations have shown that a suitable choice of the SPM's input parameters makes it possible to describe the seasonal behavior of regional large-scale cyclogenesis dynamics for a given number of TCs during the active season. It is shown that the SPM can also describe wind speed variations inside the TC. Thus, using the nonlinear small parametric model, it is possible to study the features of the RLSC's temporal dynamics during the active season in a given region and to analyze the relationship between regional cyclogenesis parameters and external factors such as space weather, including the solar activity level and cosmic ray variations.
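The structure of such a small parametric model, two coupled nonlinear ordinary differential equations for wind speed and surface temperature, can be illustrated with a standard integrator. The equations below are an invented stand-in (the paper's actual model is not reproduced here): wind V intensifies over warm water and self-limits quadratically, while the storm cools the surface against a seasonal heating term s.

```python
def rk4_step(f, y, t, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Hypothetical SPM-like coupled pair (NOT the paper's equations):
#   V' = a*V*(T - T0) - b*V**2   wind intensifies over warm water, self-limits
#   T' = s - c*V                 storm-induced surface cooling vs. heating s
a, b, c, s, T0 = 0.1, 0.01, 0.02, 0.5, 26.0

def spm(t, y):
    V, T = y
    return [a * V * (T - T0) - b * V ** 2, s - c * V]

y, t, h = [1.0, 28.0], 0.0, 0.05
for _ in range(4000):            # integrate to t = 200: spirals to (25, 28.5)
    y = rk4_step(spm, y, t, h)
    t += h
```

With these parameters the system has a single stable equilibrium at V* = s/c = 25 and T* = T0 + bV*/a = 28.5, which a full life cycle trajectory spirals into; varying the input parameters changes the transient, much as the SPM's parameters control the modeled storm scenario.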
Non-linear description of massive neutrinos in the framework of large-scale structure formation
NASA Astrophysics Data System (ADS)
Dupuy, Hélène
2016-10-01
There is now no doubt that neutrinos are massive particles fully involved in the non-linear growth of the large-scale structure of the universe. A problem is that they are particularly difficult to include in cosmological models because the equations describing their behavior in the non-linear regime are cumbersome and difficult to handle. In this manuscript I present a new method for dealing with massive neutrinos in a very simple way, based on basic conservation laws. This method remains valid in the non-linear regime. The key idea is to describe neutrinos as a collection of single-flow fluids instead of seeing them as a single hot multi-flow fluid. In this framework, the time evolution of neutrinos is encoded in fluid equations describing macroscopic fields, just as is done for cold dark matter. Although valid up to shell-crossing only, this approach is a further step towards a fully non-linear treatment of the dynamical evolution of neutrinos in the framework of large-scale structure growth.
Tensor-Krylov methods for solving large-scale systems of nonlinear equations.
Bader, Brett William
2004-08-01
This paper develops and investigates iterative tensor methods for solving large-scale systems of nonlinear equations. Direct tensor methods for nonlinear equations have performed especially well on small, dense problems where the Jacobian matrix at the solution is singular or ill-conditioned, which may occur when approaching turning points, for example. This research extends direct tensor methods to large-scale problems by developing three tensor-Krylov methods that base each iteration upon a linear model augmented with a limited second-order term, which provides information lacking in a (nearly) singular Jacobian. The advantage of the new tensor-Krylov methods over existing large-scale tensor methods is their ability to solve the local tensor model to a specified accuracy, which produces a more accurate tensor step. The performance of these methods in comparison to Newton-GMRES and tensor-GMRES is explored on three Navier-Stokes fluid flow problems. The numerical results provide evidence that tensor-Krylov methods are generally more robust and more efficient than Newton-GMRES on some important and difficult problems. In addition, the results show that the new tensor-Krylov methods and tensor-GMRES each perform better in certain situations.
NASA Astrophysics Data System (ADS)
Li, Judith Yue; Kokkinaki, Amalia; Ghorbanidehno, Hojat; Darve, Eric F.; Kitanidis, Peter K.
2015-12-01
Reservoir monitoring aims to provide snapshots of reservoir conditions and their uncertainties to assist operation management and risk analysis. These snapshots may contain millions of state variables, e.g., pressures and saturations, which can be estimated by assimilating data in real time using the Kalman filter (KF). However, the KF has a computational cost that scales quadratically with the number of unknowns, m, due to the cost of computing and storing the covariance and Jacobian matrices, along with their products. The compressed state Kalman filter (CSKF) adapts the KF for solving large-scale monitoring problems. The CSKF uses N preselected orthogonal bases to compute an accurate rank-N approximation of the covariance that is close to the optimal spectral approximation given by SVD. The CSKF has a computational cost that scales linearly in m and uses an efficient matrix-free approach that propagates uncertainties using N + 1 forward model evaluations, where N≪m. Here we present a generalized CSKF algorithm for nonlinear state estimation problems such as CO2 monitoring. For simultaneous estimation of multiple types of state variables, the algorithm allows selecting bases that represent the variability of each state type. Through synthetic numerical experiments of CO2 monitoring, we show that the CSKF can reproduce the Kalman gain accurately even for large compression ratios (m/N). For a given computational cost, the CSKF uses a robust and flexible compression scheme that gives more reliable uncertainty estimates than the ensemble Kalman filter, which may display loss of ensemble variability leading to suboptimal uncertainty estimates.
NASA Technical Reports Server (NTRS)
Dahl, Milo D.; Hixon, Ray; Mankbadi, Reda R.
2003-01-01
An approximate technique is presented for the prediction of the large-scale turbulent structure sound source in a supersonic jet. A linearized Euler equations code is used to solve for the flow disturbances within and near a jet with a given mean flow. Assuming a normal mode composition for the wave-like disturbances, the linear radial profiles are used in an integration of the Navier-Stokes equations. This results in a set of ordinary differential equations representing the weakly nonlinear self-interactions of the modes along with their interaction with the mean flow. Solutions are then used to correct the amplitude of the disturbances that represent the source of large-scale turbulent structure sound in the jet.
A study of the parallel algorithm for large-scale DC simulation of nonlinear systems
NASA Astrophysics Data System (ADS)
Cortés Udave, Diego Ernesto; Ogrodzki, Jan; Gutiérrez de Anda, Miguel Angel
Newton-Raphson DC analysis of large-scale nonlinear circuits may be an extremely time-consuming process even if sparse matrix techniques and bypassing of nonlinear model calculation are used. A decrease in the time required for this task may be achieved on multi-core, multithreaded computers if the calculation of the mathematical models for the nonlinear elements, as well as the stamp management of the sparse matrix entries, is handled by concurrent processes. This numerical complexity can be further reduced via circuit decomposition and the parallel solution of blocks, taking the BBD matrix structure as a departure point. This block-parallel approach may yield a considerable speedup, though it is strongly dependent on the system topology and, of course, on the processor type. This contribution presents an easily parallelizable decomposition-based algorithm for DC simulation and provides a detailed study of its effectiveness.
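The "bypassing of nonlinear model calculation" mentioned above can be sketched as a cache: if a device's controlling voltage has barely moved since the last Newton-Raphson iteration, its previously computed current and conductance stamps are reused. The diode model and tolerance below are illustrative choices, not from the paper.

```python
import math

class BypassedDevice:
    """Caches a nonlinear device model evaluation and bypasses recomputation
    when the controlling voltage has changed by less than `tol` since the
    last call (illustrative sketch of the bypassing technique)."""
    def __init__(self, model, tol=1e-6):
        self.model = model          # maps voltage -> (current, conductance)
        self.tol = tol
        self.last_v = None
        self.last_out = None
        self.evaluations = 0        # counts actual (non-bypassed) model calls

    def evaluate(self, v):
        if self.last_v is not None and abs(v - self.last_v) < self.tol:
            return self.last_out    # bypass: reuse cached stamps
        self.evaluations += 1
        self.last_v = v
        self.last_out = self.model(v)
        return self.last_out

def diode(v):
    """Shockley diode: current and small-signal conductance (Is=1e-14, Vt=0.025)."""
    i = 1e-14 * (math.exp(v / 0.025) - 1.0)
    return i, (i + 1e-14) / 0.025

d = BypassedDevice(diode)
for v in [0.60, 0.65, 0.65, 0.6500000001, 0.70]:
    d.evaluate(v)
```

In a real simulator the same check is applied per device per Newton iteration, so near convergence, when most node voltages are almost stationary, the bulk of model evaluations is skipped.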
CMB lensing bispectrum from nonlinear growth of the large scale structure
NASA Astrophysics Data System (ADS)
Namikawa, Toshiya
2016-06-01
We discuss the detectability of the nonlinear growth of the large-scale structure in cosmic microwave background (CMB) lensing. The lensing signals involved in the CMB fluctuations have been measured by multiple CMB experiments, such as the Atacama Cosmology Telescope (ACT), Planck, POLARBEAR, and the South Pole Telescope (SPT). The reconstructed CMB lensing signals are useful to constrain cosmology via their angular power spectrum, while the detectability and cosmological applications of their bispectrum induced by the nonlinear evolution are not well studied. Extending the analytic estimate of the galaxy lensing bispectrum presented by Takada and Jain (2004) to the CMB case, we show that even near-term CMB experiments such as Advanced ACT, Simons Array and SPT3G could detect the CMB lensing bispectrum induced by the nonlinear growth of the large-scale structure. In the case of CMB Stage-IV, we find that the lensing bispectrum is detectable at ≳50σ statistical significance. This precisely measured lensing bispectrum has rich cosmological information, and could be used to constrain cosmology, e.g., the sum of the neutrino masses and the dark-energy properties.
Das, B; Meirovitch, H; Navon, I M
2003-07-30
Energy minimization plays an important role in structure determination and analysis of proteins, peptides, and other organic molecules; therefore, development of efficient minimization algorithms is important. Recently, Morales and Nocedal developed hybrid methods for large-scale unconstrained optimization that interlace iterations of the limited-memory BFGS method (L-BFGS) and the Hessian-free Newton method (Computat Opt Appl 2002, 21, 143-154). We test the performance of this approach as compared to those of the L-BFGS algorithm of Liu and Nocedal and the truncated Newton (TN) with automatic preconditioner of Nash, as applied to the protein bovine pancreatic trypsin inhibitor (BPTI) and a loop of the protein ribonuclease A. These systems are described by the all-atom AMBER force field with a dielectric constant epsilon = 1 and a distance-dependent dielectric function epsilon = 2r, where r is the distance between two atoms. It is shown that for the optimal parameters the hybrid approach is typically two times more efficient in terms of CPU time and function/gradient calculations than the two other methods. The advantage of the hybrid approach increases as the electrostatic interactions become stronger, that is, in going from epsilon = 2r to epsilon = 1, which leads to a more rugged and probably more nonlinear potential energy surface. However, no general rule that defines the optimal parameters has been found and their determination requires a relatively large number of trial-and-error calculations for each problem.
Towards a self-consistent halo model for the nonlinear large-scale structure
NASA Astrophysics Data System (ADS)
Schmidt, Fabian
2016-03-01
The halo model is a theoretically and empirically well-motivated framework for predicting the statistics of the nonlinear matter distribution in the Universe. However, current incarnations of the halo model suffer from two major deficiencies: (i) they do not enforce the stress-energy conservation of matter; (ii) they are not guaranteed to recover exact perturbation theory results on large scales. Here, we provide a formulation of the halo model (EHM) that remedies both drawbacks in a consistent way, while attempting to maintain the predictivity of the approach. In the formulation presented here, mass and momentum conservation are guaranteed on large scales, and results of the perturbation theory and the effective field theory can, in principle, be matched to any desired order on large scales. We find that a key ingredient in the halo model power spectrum is the halo stochasticity covariance, which has been studied to a much lesser extent than other ingredients such as mass function, bias, and profiles of halos. As written here, this approach still does not describe the transition regime between perturbation theory and halo scales realistically, which is left as an open problem. We also show explicitly that, when implemented consistently, halo model predictions do not depend on any properties of low-mass halos that are smaller than the scales of interest.
Classification of large-scale stellar spectra based on the non-linearly assembling learning machine
NASA Astrophysics Data System (ADS)
Liu, Zhongbao; Song, Lipeng; Zhao, Wenjuan
2016-02-01
An important problem with traditional classification methods is that they cannot deal with large-scale classification because of their very high time complexity. To solve this problem, inspired by the idea of collaborative management, the non-linearly assembling learning machine (NALM) is proposed and applied to large-scale stellar spectral classification. In NALM, the large-scale dataset is first divided into several subsets; traditional classifiers such as the support vector machine (SVM) then run on each subset; finally, the classification results on the subsets are assembled and the overall classification decision is obtained. In comparative experiments, we investigate the performance of NALM in stellar spectral subclass classification compared with SVM. We apply SVM and NALM, respectively, to classify four subclasses of K-type spectra, three subclasses of F-type spectra and three subclasses of G-type spectra from the Sloan Digital Sky Survey (SDSS). The comparative experiment results show that NALM performs much better than SVM in terms of both classification accuracy and computation time.
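The split-train-assemble pattern described above can be sketched directly. To keep the example self-contained, a trivial nearest-centroid classifier stands in for the SVM base learner, and the per-subset predictions are assembled by majority vote; the toy one-dimensional "spectra" are synthetic.

```python
def train_centroids(subset):
    """Per-class mean of the feature vectors in one data subset
    (a stand-in base learner for the SVM used in the paper)."""
    sums, counts = {}, {}
    for features, label in subset:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def predict(centroids, features):
    """Label of the nearest class centroid (squared Euclidean distance)."""
    return min(centroids,
               key=lambda lab: sum((a - b) ** 2
                                   for a, b in zip(centroids[lab], features)))

def nalm_predict(models, features):
    """Assemble per-subset predictions by majority vote."""
    votes = [predict(m, features) for m in models]
    return max(set(votes), key=votes.count)

# Toy 1-D 'spectra': class A near 0.0, class B near 1.0 (synthetic data)
data = ([([0.1 * i], 'A') for i in range(5)] +
        [([1.0 + 0.1 * i], 'B') for i in range(5)])
k = 2
subsets = [data[i::k] for i in range(k)]      # divide the dataset into k subsets
models = [train_centroids(s) for s in subsets]
label = nalm_predict(models, [0.15])
```

Because each subset is trained independently, the k training runs can proceed in parallel, which is where the time-complexity advantage over a single monolithic classifier comes from.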
Nonlinear random response of large-scale sparse finite element plate bending problems
NASA Astrophysics Data System (ADS)
Chokshi, Swati
Acoustic fatigue is one of the major design considerations for skin panels exposed to high levels of random pressure at subsonic/supersonic/hypersonic speeds. The nonlinear large-deflection random response of single-bay panels in aerospace structures subjected to random excitations at various sound pressure levels (SPLs) is investigated. The nonlinear plate analyses are limited to determining the root-mean-square displacement under uniformly distributed random pressure loads. Efficient computational technologies such as sparse storage schemes and parallel computation are proposed and incorporated to solve large-scale, nonlinear large-deflection random vibration problems for both types of loading cases: (1) synchronized in time and (2) unsynchronized and statistically uncorrelated in time. For the first time, large-scale plate bending problems subjected to unsynchronized loads are solved using parallel computing capabilities to account for the computational burden of simulating the unsynchronized random pressure fluctuations. The main focus of the research work is placed upon computational issues involved in the nonlinear modal methodologies. A time-domain nonlinear FEM method, incorporating Monte Carlo simulation and sparse computational technologies, including efficient sparse subspace eigensolutions, is presented and applied to accurately determine the random response with a refined, large finite element mesh for the first time. Sparse equation solvers and sparse matrix operations embedded inside the subspace eigensolution algorithms are also exploited. The approach uses the von Karman nonlinear strain-displacement relations and the classical plate theory. In the proposed methodologies, the solution for a small number (say, less than 100) of the lowest linear, sparse eigenpairs needs to be computed only once, in order to transform nonlinear large displacements from the conventional structural degrees of freedom (dof) into the modal
DC simulator of large-scale nonlinear systems for parallel processors
NASA Astrophysics Data System (ADS)
Cortés Udave, Diego Ernesto; Ogrodzki, Jan; Gutiérrez de Anda, Miguel Angel
In this paper it is shown how the idea of BBD decomposition of large-scale nonlinear systems can be implemented in a parallel DC circuit simulation algorithm. Usually, BBD nonlinear circuit decomposition is used together with the multi-level Newton-Raphson iterative process. We propose a simulation consisting of circuit decomposition and process parallelization on a single level only. This block-parallel approach may yield a considerable reduction in simulation time, though it is strongly dependent on the system topology and, of course, on the processor type. The paper presents the architecture of the decomposition-based algorithm, explains details of its implementation, including two steps of the one-level bypassing techniques, and discusses the construction of dedicated benchmarks for this simulation software.
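The linear-algebra core of the BBD (bordered-block-diagonal) approach is a Schur-complement elimination: each diagonal block is solved independently (the parallelizable part), and a small border system couples them. The sketch below uses scalar blocks and invented coefficients to keep the structure visible; real circuit blocks are sparse submatrices.

```python
def solve_bbd(blocks, corner, f_blocks, f_c):
    """Solve a bordered-block-diagonal (BBD) system with scalar blocks,
        a_i * x_i + b_i * x_c = f_i        (one equation per block)
        sum_i c_i * x_i + d * x_c = f_c    (border / interconnection row),
    by Schur-complement elimination. Each loop pass touches only one
    block, so the block solves could run in separate processes
    (illustrative sketch; in a circuit simulator a_i are sparse blocks)."""
    schur, rhs = corner, f_c
    for (a, b, c), f in zip(blocks, f_blocks):
        schur -= c * b / a          # S = d - sum_i c_i * a_i^{-1} * b_i
        rhs -= c * f / a
    x_c = rhs / schur               # border unknown first
    xs = [(f - b * x_c) / a for (a, b, c), f in zip(blocks, f_blocks)]
    return xs, x_c

# Hypothetical 3-block example; coefficients (a_i, b_i, c_i) chosen arbitrarily
blocks = [(2.0, 1.0, 1.0), (4.0, 2.0, 0.5), (5.0, -1.0, 2.0)]
f_blocks, corner, f_c = [3.0, 6.0, 1.0], 10.0, 4.0
xs, x_c = solve_bbd(blocks, corner, f_blocks, f_c)
```

Inside a Newton-Raphson loop this solve is repeated every iteration, which is why the single-level parallelization described above pays off: the per-block factorizations dominate the cost and are independent.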
Nonlinear Seismic Correlation Analysis of the JNES/NUPEC Large-Scale Piping System Tests.
Nie,J.; DeGrassi, G.; Hofmayer, C.; Ali, S.
2008-06-01
The Japan Nuclear Energy Safety Organization/Nuclear Power Engineering Corporation (JNES/NUPEC) large-scale piping test program has provided valuable new test data on high-level seismic elasto-plastic behavior and failure modes for typical nuclear power plant piping systems. The component and piping system tests demonstrated the strain ratcheting behavior that is expected to occur when a pressurized pipe is subjected to cyclic seismic loading. Under a collaboration agreement between the US and Japan on seismic issues, the US Nuclear Regulatory Commission (NRC)/Brookhaven National Laboratory (BNL) performed a correlation analysis of the large-scale piping system tests using detailed state-of-the-art nonlinear finite element models. Techniques are introduced to develop material models that can closely match the test data. The shaking table motions are examined. The analytical results are assessed in terms of the overall system responses and the strain ratcheting behavior at an elbow. The paper concludes with insights about the accuracy of the analytical methods for use in performance assessments of highly nonlinear piping systems under large seismic motions.
Toward Optimal and Scalable Dimension Reduction Methods for large-scale Bayesian Inversions
NASA Astrophysics Data System (ADS)
Bousserez, N.; Henze, D. K.
2015-12-01
Many inverse problems in geophysics are solved within the Bayesian framework, in which a prior probability density function of a quantity of interest is optimally updated using newly available observations. The maximum of the posterior probability density function is estimated using a model of the physics that relates the variables to be optimized to the observations. However, in many practical situations the number of observations is much smaller than the number of variables estimated, which leads to an ill-posed problem. In practice, this means that the data are informative only in a subspace of the initial space. It is of both theoretical and practical interest to characterize this "data-informed" subspace, since it allows a simple interpretation of the inverse solution and its uncertainty, and can also dramatically reduce the computational cost of the optimization by reducing the size of the problem. In this presentation the formalism of dimension reduction in Bayesian methods will be introduced, and different optimality criteria will be discussed (e.g., minimum error variances, maximum degrees of freedom for signal). For each criterion, an optimal design for the reduced Bayesian problem will be proposed and compared with other suboptimal approaches. A significant advantage of our method is its high scalability owing to an efficient parallel implementation, making it very attractive for large-scale inverse problems. Numerical results from an Observing System Simulation Experiment (OSSE) consisting of a high-spatial-resolution (0.5°x0.7°) source inversion of methane over North America, using observations from the Greenhouse gases Observing SATellite (GOSAT) instrument and the GEOS-Chem chemistry-transport model, will illustrate the computational efficiency of our approach. Although only linear models are considered in this study, possible extensions to the non-linear case will also be discussed.
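The "data-informed subspace" idea can be seen in the smallest possible linear-Gaussian example: with an identity prior covariance and a single scalar observation, the posterior differs from the prior only along the observed direction h, and all orthogonal directions retain their prior uncertainty. This toy update is illustrative of the dimension-reduction principle, not the presentation's actual algorithm.

```python
def reduced_bayes_update(x0, h, y, sigma_obs):
    """Linear-Gaussian Bayesian update with identity prior covariance and a
    single scalar observation y = h . x + noise. The posterior mean moves,
    and the variance shrinks, only along the data-informed direction h."""
    hh = sum(hi * hi for hi in h)
    gain = [hi / (hh + sigma_obs ** 2) for hi in h]   # K = P h / (h.P.h + s^2), P = I
    innov = y - sum(hi * xi for hi, xi in zip(h, x0))
    post_mean = [xi + ki * innov for xi, ki in zip(x0, gain)]
    var_along_h = 1.0 - hh / (hh + sigma_obs ** 2)    # directions orthogonal to h keep variance 1
    return post_mean, var_along_h

x0 = [0.0, 0.0, 0.0]
h = [1.0, 0.0, 0.0]        # the observation informs only the first component
post, var_h = reduced_bayes_update(x0, h, 2.0, 1.0)
```

In the large-scale setting, the same phenomenon motivates working in the few directions the data actually constrain: components two and three above are untouched by the update, so computing anything for them is wasted effort.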
NASA Astrophysics Data System (ADS)
Wang, J.; Cai, X.
2007-12-01
A water resources system can be defined as a large-scale spatial system, within which a distributed ecological system interacts with the stream network and the groundwater system. In water resources management, the causative factors, and hence the solutions to be developed, have a significant spatial dimension. This motivates a modeling analysis of water resources management within a spatial analytical framework, where data is usually geo-referenced and in the form of a map. One of the important functions of geographic information systems (GIS) is to identify spatial patterns of environmental variables. The role of spatial patterns in water resources management has been well established in the literature, particularly regarding how to design better spatial patterns for satisfying the designated objectives of water resources management. Evolutionary algorithms (EA) have been demonstrated to be successful in solving complex optimization models for water resources management, due to their flexibility to incorporate complex simulation models in the optimal search procedure. The idea of combining GIS and EA motivates the development and application of spatial evolutionary algorithms (SEA). SEA assimilates spatial information into EA, and even changes the representation and operators of EA. In an EA used for water resources management, the mathematical optimization model should be modified to account for spatial patterns; however, spatial patterns are usually implicit, and it is difficult to impose appropriate patterns on spatial data. It is also difficult to express complex spatial patterns by explicit constraints included in the EA. GIS can help identify the spatial linkages and correlations based on the spatial knowledge of the problem. These linkages are incorporated in the fitness function for the preference of a compatible vegetation distribution. Unlike a regular GA for spatial models, the SEA employs a special hierarchical hyper-population and spatial genetic operators.
Nonlinear detection of large-scale transitions in Plio-Pleistocene African climate
NASA Astrophysics Data System (ADS)
Donges, J. F.; Donner, R. V.; Trauth, M. H.; Marwan, N.; Schellnhuber, H. J.; Kurths, J.
2011-12-01
Potential paleoclimatic driving mechanisms acting on human development present an open problem of cross-disciplinary scientific interest. The analysis of paleoclimate archives encoding the environmental variability in East Africa during the last 5 Ma (million years) has triggered an ongoing debate about possible candidate processes and evolutionary mechanisms. In this work, we apply a novel nonlinear statistical technique, recurrence network analysis, to three distinct marine records of terrigenous dust flux. Our method enables us to identify three epochs with transitions between qualitatively different types of environmental variability in North and East Africa during the (i) Mid-Pliocene (3.35-3.15 Ma BP (before present)), (ii) Early Pleistocene (2.25-1.6 Ma BP), and (iii) Mid-Pleistocene (1.1-0.7 Ma BP). A deeper examination of these transition periods reveals potential climatic drivers, including (i) large-scale changes in ocean currents due to a spatial shift of the Indonesian throughflow in combination with an intensification of Northern Hemisphere glaciation, (ii) a global reorganization of the atmospheric Walker circulation induced in the tropical Pacific and Indian Ocean, and (iii) shifts in the dominating temporal variability pattern of glacial activity during the Mid-Pleistocene, respectively. A statistical reexamination of the available fossil record demonstrates a remarkable coincidence between the detected transition periods and major steps in hominin evolution. This suggests that the observed shifts between more regular and more erratic environmental variability have acted as a trigger for rapid change in the development of humankind in Africa.
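The recurrence network technique above links two states of a time series when they are closer than a threshold ε in phase space, then characterizes the dynamics through graph measures such as transitivity. The sketch below uses a one-dimensional phase space for brevity (the published method uses time-delay embedded state vectors), and the sine series is a generic stand-in for a paleoclimate record.

```python
import math

def recurrence_network(series, eps):
    """Adjacency matrix of an epsilon-recurrence network: states i and j
    are linked when |x_i - x_j| < eps (1-D phase space for simplicity;
    the published method uses time-delay embedded state vectors)."""
    n = len(series)
    return [[1 if i != j and abs(series[i] - series[j]) < eps else 0
             for j in range(n)] for i in range(n)]

def transitivity(A):
    """Network transitivity: closed ordered triplets / connected ordered triplets."""
    n = len(A)
    closed = sum(A[i][j] * A[j][k] * A[k][i]
                 for i in range(n) for j in range(n) for k in range(n))
    triples = sum(d * (d - 1) for d in (sum(row) for row in A))
    return closed / triples if triples else 0.0

# Transitivity of a periodic signal's recurrence network (value illustrative only)
signal = [math.sin(0.3 * t) for t in range(150)]
T = transitivity(recurrence_network(signal, 0.2))
```

Sliding this computation along a record and watching the transitivity change over time is, in spirit, how shifts between more regular and more erratic variability are detected in the dust-flux records above.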
Friedman, A.
1996-12-01
The summer program in Large Scale Optimization concentrated largely on process engineering, aerospace engineering, inverse problems and optimal design, and molecular structure and protein folding. The program brought together application people, optimizers, and mathematicians with interest in learning about these topics. Three proceedings volumes are being prepared. The year in Materials Sciences deals with disordered media and percolation, phase transformations, composite materials, microstructure; topological and geometric methods as well as statistical mechanics approach to polymers (included were Monte Carlo simulation for polymers); miscellaneous other topics such as nonlinear optical material, particulate flow, and thin film. All these activities saw strong interaction among material scientists, mathematicians, physicists, and engineers. About 8 proceedings volumes are being prepared.
Optimization of large-scale heterogeneous system-of-systems models.
Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane; Lee, Herbert K. H.; Hart, William Eugene; Gray, Genetha Anne; Woodruff, David L.
2012-01-01
Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.
Structure of CIMS in large-scale continuous manufacturing industry and its optimization strategy
NASA Astrophysics Data System (ADS)
Yao, Jianchu; Wang, Gaofeng; Wang, Boxing; Zhou, Ji; Yu, Jun
1995-08-01
This paper focuses on the large-scale petroleum refinery manufacturing industry and analyzes the characteristics and functional requirements of CIMS in continuous process industries. It then compares the continuous manufacturing industry with the discrete manufacturing industry with respect to the CIMS conceptual model, and presents the functional model frame and key technologies of CIPS. The paper also proposes an optimization model and solution strategy for CIMS in the continuous industry.
Luo, Ping; Yin, Peiyuan; Zhang, Weijian; Zhou, Lina; Lu, Xin; Lin, Xiaohui; Xu, Guowang
2016-03-11
Liquid chromatography-mass spectrometry (LC-MS) is now a mainstream technique for large-scale metabolic phenotyping to obtain a better understanding of genomic functions. However, repeatability is still an essential issue for LC-MS based methods, and convincing strategies for long-time analysis are urgently required. Our previously reported pseudotargeted method, which combines nontargeted and targeted analyses, has proved to be a practical approach with high-quality and information-rich data. In this study, we developed a comprehensive strategy based on pseudotargeted analysis by integrating blank washes, pooled quality control (QC) samples, and post-calibration for large-scale metabolomics studies. The performance of the strategy was optimized in both the pre- and post-acquisition stages, including the selection of QC samples, the insertion frequency of QC samples, and the post-calibration methods. These results imply that the pseudotargeted method is rather stable and suitable for large-scale metabolic profiling studies. As a proof of concept, the proposed strategy was applied to a combination of 3 independent batches within a time span of 5 weeks, and generated about 54% of the features with coefficients of variation (CV) below 15%. Moreover, the stability and maximal capacity of a single analytical batch could be extended to at least 282 injections (about 110 h) while still providing excellent stability: the CV of 63% of the metabolic features was less than 15%. Taken together, the improved repeatability of our strategy provides a reliable protocol for large-scale metabolomics studies.
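The role of inserted QC samples and post-calibration can be illustrated with a toy drift model: periodic QC injections trace the instrument's slow response drift, and dividing each injection by the interpolated QC response removes it, shrinking the per-feature CV. The linear-interpolation correction below is a hypothetical scheme for illustration, not the paper's exact calibration method.

```python
def cv_percent(values):
    """Coefficient of variation (%) of one feature across injections."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / (len(values) - 1)
    return 100.0 * var ** 0.5 / mean

def qc_calibrate(intensities, qc_positions):
    """Post-calibration sketch: divide each injection's intensity by a
    linearly interpolated QC response, removing slow instrument drift
    (single feature; hypothetical scheme, not the paper's exact method)."""
    corrected = []
    for i, v in enumerate(intensities):
        # bracketing QC injections for injection i
        prev = max(p for p in qc_positions if p <= i)
        nxt = min(p for p in qc_positions if p >= i)
        if prev == nxt:
            ref = intensities[prev]
        else:
            w = (i - prev) / (nxt - prev)
            ref = (1 - w) * intensities[prev] + w * intensities[nxt]
        corrected.append(v / ref)
    return corrected

# Synthetic batch: constant true signal with a 2%-per-injection linear drift;
# a QC sample is inserted at every 3rd injection position
drift = [100.0 * (1.0 + 0.02 * i) for i in range(10)]
qc_pos = [0, 3, 6, 9]
corrected = qc_calibrate(drift, qc_pos)
```

On this synthetic batch the raw CV is several percent while the corrected CV is essentially zero, which is the mechanism behind the CV improvements reported above; real data adds noise on top of the drift, so the correction helps rather than eliminates.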
Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems
Ghattas, Omar
2013-10-15
The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.
Using Agent Base Models to Optimize Large Scale Network for Large System Inventories
NASA Technical Reports Server (NTRS)
Shameldin, Ramez Ahmed; Bowling, Shannon R.
2010-01-01
The aim of this paper is to use agent-based models (ABM) to optimize large-scale network handling capabilities for large system inventories and to implement strategies to reduce capital expenses. The models used in this paper employ either computational algorithms or procedure implementations developed in Matlab to simulate agent-based models, using clusters as a high-performance computing platform to run the programs in parallel. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.
Large scale test simulations using the Virtual Environment for Test Optimization (VETO)
Klenke, S.E.; Heffelfinger, S.R.; Bell, H.J.; Shierling, C.L.
1997-10-01
The Virtual Environment for Test Optimization (VETO) is a set of simulation tools under development at Sandia to enable test engineers to do computer simulations of tests. The tool set utilizes analysis codes and test information to optimize design parameters and to provide an accurate model of the test environment, which aids in the maximization of test performance, training, and safety. Previous VETO effort has included the development of two structural dynamics simulation modules that provide design and optimization tools for modal and vibration testing. These modules have allowed test engineers to model and simulate complex laboratory testing, to evaluate dynamic response behavior, and to investigate system testability. Further development of the VETO tool set will address the accurate modeling of large-scale field test environments at Sandia. These field test environments provide weapon system certification capabilities and have different simulation requirements than those of laboratory testing.
Integration of Large-Scale Optimization and Game Theory for Sustainable Water Quality Management
NASA Astrophysics Data System (ADS)
Tsao, J.; Li, J.; Chou, C.; Tung, C.
2009-12-01
Sustainable water quality management requires total mass control of pollutant discharge, based on the principles of not exceeding the assimilative capacity of a river and of equity among generations. The stream assimilative capacity is the carrying capacity of a river for the maximum waste load without violating the water quality standard, and the spirit of total mass control is to optimize the waste load allocation among subregions. Toward the goal of sustainable watershed development, this study uses large-scale optimization theory to optimize profit, finds the marginal values of loadings as a reference for a fair price, and then determines the best way to reach equilibrium through water quality trading for the whole watershed. Game theory, in turn, plays an important role in maximizing both individual and overall profits. This study proves that a water quality trading market is viable in some situations and leads to a better outcome for all participants.
A modular approach to large-scale design optimization of aerospace systems
NASA Astrophysics Data System (ADS)
Hwang, John T.
Gradient-based optimization and the adjoint method form a synergistic combination that enables the efficient solution of large-scale optimization problems. Though the gradient-based approach struggles with non-smooth or multi-modal problems, the capability to efficiently optimize up to tens of thousands of design variables provides a valuable design tool for exploring complex tradeoffs and finding unintuitive designs. However, the widespread adoption of gradient-based optimization is limited by the implementation challenges for computing derivatives efficiently and accurately, particularly in multidisciplinary and shape design problems. This thesis addresses these difficulties in two ways. First, to deal with the heterogeneity and integration challenges of multidisciplinary problems, this thesis presents a computational modeling framework that solves multidisciplinary systems and computes their derivatives in a semi-automated fashion. This framework is built upon a new mathematical formulation developed in this thesis that expresses any computational model as a system of algebraic equations and unifies all methods for computing derivatives using a single equation. The framework is applied to two engineering problems: the optimization of a nanosatellite with 7 disciplines and over 25,000 design variables; and simultaneous allocation and mission optimization for commercial aircraft involving 330 design variables, 12 of which are integer variables handled using the branch-and-bound method. In both cases, the framework makes large-scale optimization possible by reducing the implementation effort and code complexity. The second half of this thesis presents a differentiable parametrization of aircraft geometries and structures for high-fidelity shape optimization. Existing geometry parametrizations are not differentiable, or they are limited in the types of shape changes they allow. This is addressed by a novel parametrization that smoothly interpolates aircraft
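The adjoint recipe that such a framework automates can be sketched on a toy problem: one design variable x, a linear state equation A u = b(x), and an objective f = uᵀu. All problem data below is invented for illustration; the point is that a single extra linear solve with Aᵀ yields the exact derivative, independent of the number of design variables.

```python
import numpy as np

# Toy state equation R(u, x) = A u - b(x) = 0 with objective f = u^T u.
# A and b are illustrative stand-ins for a discipline's governing equations.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

def b(x):
    return np.array([np.sin(x), x ** 2])

def db_dx(x):
    return np.array([np.cos(x), 2.0 * x])

def f(x):
    u = np.linalg.solve(A, b(x))          # forward (state) solve
    return float(u @ u)

def df_dx_adjoint(x):
    u = np.linalg.solve(A, b(x))          # forward solve
    psi = np.linalg.solve(A.T, 2.0 * u)   # adjoint solve: A^T psi = df/du
    return float(psi @ db_dx(x))          # chain rule: df/dx = psi^T db/dx

# Sanity check against a central finite difference
x0, h = 0.7, 1e-6
fd = (f(x0 + h) - f(x0 - h)) / (2.0 * h)
assert abs(df_dx_adjoint(x0) - fd) < 1e-5
```

With many design variables the adjoint step is still one extra linear solve, which is what makes cases with tens of thousands of design variables, like the nanosatellite problem above, tractable.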
Fast Bound Methods for Large Scale Simulation with Application for Engineering Optimization
NASA Technical Reports Server (NTRS)
Patera, Anthony T.; Peraire, Jaime; Zang, Thomas A. (Technical Monitor)
2002-01-01
In this work, we have focused on fast bound methods for large-scale simulation with application to engineering optimization. The emphasis is on the development of techniques that provide both very fast turnaround and a certificate of fidelity; these attributes ensure that the results are indeed relevant to - and trustworthy within - the engineering context. The bound methodology which underlies this work has many different instantiations: finite element approximation; iterative solution techniques; and reduced-basis (parameter) approximation. In this grant we have, in fact, treated all three, but most of our effort has been concentrated on the first and third. We describe these below briefly - but with a pointer to an Appendix which describes, in some detail, the current "state of the art."
Asymptotically Optimal Transmission Policies for Large-Scale Low-Power Wireless Sensor Networks
I. Ch. Paschalidis; W. Lai; D. Starobinski
2007-02-01
We consider wireless sensor networks with multiple gateways and multiple classes of traffic carrying data generated by different sensory inputs. The objective is to devise joint routing, power control and transmission scheduling policies in order to gather data in the most efficient manner while respecting the needs of different sensing tasks (fairness). We formulate the problem as maximizing the utility of transmissions subject to explicit fairness constraints and propose an efficient decomposition algorithm drawing upon large-scale decomposition ideas in mathematical programming. We show that our algorithm terminates in a finite number of iterations and produces a policy that is asymptotically optimal at low transmission power levels. Furthermore, we establish that the utility maximization problem we consider can, in principle, be solved in polynomial time. Numerical results show that our policy is near-optimal, even at high power levels, and far superior to the best known heuristics at low power levels. We also demonstrate how to adapt our algorithm to accommodate energy constraints and node failures. The approach we introduce can efficiently determine near-optimal transmission policies for dramatically larger problem instances than an alternative enumeration approach.
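The decomposition idea can be illustrated with a minimal network-utility example (logarithmic utilities and a single shared capacity C; the model and constants below are invented, not the authors'): each source solves a one-variable subproblem given a dual price on the shared resource, and the price is updated by a subgradient step on the coupling constraint.

```python
import numpy as np

n, C = 5, 10.0        # number of sources and shared capacity (illustrative)
lam = 1.0             # dual price on the constraint sum_i x_i <= C
for _ in range(5000):
    # Per-source subproblem: argmax_x log(x) - lam * x  =>  x = 1 / lam
    x = np.full(n, 1.0 / lam)
    # Dual subgradient step on the coupling constraint
    lam = max(1e-6, lam + 0.01 * (x.sum() - C))

# KKT conditions give 1/x_i = lam and sum_i x_i = C, i.e. x_i = C / n
```

Each subproblem is solved independently, which is the property large-scale decomposition methods exploit to scale to many sources.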
HGO-based decentralised indirect adaptive fuzzy control for a class of large-scale nonlinear systems
NASA Astrophysics Data System (ADS)
Huang, Yi-Shao; Chen, Xiaoxin; Zhou, Shao-Wu; Yu, Ling-Li; Wang, Zheng-Wu
2012-06-01
In this article, a novel high-gain observer (HGO)-based decentralised indirect adaptive fuzzy controller is developed for a class of uncertain affine large-scale nonlinear systems. Through the combination of fuzzy logic systems and an HGO, the state variables are not required to be measurable. The proposed feedback and adaptation mechanisms guarantee that each subsystem is able to adaptively compensate for interconnections and disturbances with unknown bounds. It is ascertained using a singular perturbation method that all the signals of the closed-loop large-scale system remain uniformly ultimately bounded and the tracking errors converge to tunable neighbourhoods of the origin. Simulation results of interconnected double inverted pendulums substantiate the effectiveness of the proposed controller.
Optimization and Scalability of a Large-Scale Earthquake Simulation Application
NASA Astrophysics Data System (ADS)
Cui, Y.; Olsen, K. B.; Hu, Y.; Day, S.; Dalguer, L. A.; Minster, B.; Moore, R.; Zhu, J.; Maechling, P.; Jordan, T.
2006-12-01
In 2004, the Southern California Earthquake Center (SCEC) initiated a major large-scale earthquake simulation, called TeraShake. TeraShake propagated seismic waves across a domain of 600 km by 300 km by 80 km at 200 meter resolution with 1.8 billion grid points, one of the largest and most detailed earthquake simulations of the southern San Andreas fault. The TeraShake 1 code is based on a 4th-order finite-difference Anelastic Wave propagation Model (AWM), developed by K. Olsen, using a kinematic source description. The enhanced TeraShake 2 then added a new physics-based dynamic component, extending the capability to very-large-scale earthquake simulations. A high 100 m resolution was used to generate a physically realistic earthquake source description for the San Andreas fault. The executions of very-large-scale TeraShake 2 simulations with the high-resolution dynamic source used up to 1024 processors on the TeraGrid, adding more than 60 TB of simulation output to the 168 TB SCEC digital library, managed by the SDSC Storage Resource Broker (SRB) at SDSC. The execution of these large simulations requires high levels of expertise and resource coordination. We examine the lessons learned in enabling the execution of the TeraShake application. In particular, we look at the challenges posed by single-processor optimization of the application performance, optimization of the I/O handling, optimization of the run initialization, and the execution of the data-intensive simulations. The TeraShake code was optimized to improve scalability to 2048 processors, with a parallel efficiency of 84%. Our latest TeraShake simulation sustains 1 Teraflop/s performance, completing a simulation in less than 9 hours on the SDSC Datastar. This is more than 10 times faster than previous TeraShake simulations. Some of the TeraShake production simulations were carried out using grid computing resources, including execution on NCSA TeraGrid resources and run-time archiving of outputs onto SDSC
A Stochastic Approach to Multiobjective Optimization of Large-Scale Water Reservoir Networks
NASA Astrophysics Data System (ADS)
Bottacin-Busolin, A.; Worman, A. L.
2013-12-01
A main challenge for the planning and management of water resources is the development of multiobjective strategies for operation of large-scale water reservoir networks. The optimal sequence of water releases from multiple reservoirs depends on the stochastic variability of correlated hydrologic inflows and on various processes that affect water demand and energy prices. Although several methods have been suggested, large-scale optimization problems arising in water resources management are still plagued by the high dimensional state space and by the stochastic nature of the hydrologic inflows. In this work, the optimization of reservoir operation is approached using approximate dynamic programming (ADP) with policy iteration and function approximators. The method is based on an off-line learning process in which operating policies are evaluated for a number of stochastic inflow scenarios, and the resulting value functions are used to design new, improved policies until convergence is attained. A case study is presented of a multi-reservoir system in the Dalälven River, Sweden, which includes 13 interconnected reservoirs and 36 power stations. Depending on the late spring and summer peak discharges, the lowlands adjacent to Dalälven can often be flooded during the summer period, and the presence of stagnating floodwater during the hottest months of the year is the cause of a large proliferation of mosquitos, which is a major problem for the people living in the surroundings. Chemical pesticides are currently being used as a preventive countermeasure, which do not provide an effective solution to the problem and have adverse environmental impacts. In this study, ADP was used to analyze the feasibility of alternative operating policies for reducing the flood risk at a reasonable economic cost for the hydropower companies. To this end, mid-term operating policies were derived by combining flood risk reduction with hydropower production objectives. The performance
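The exact (tabular) policy iteration that ADP approximates can be written compactly. The toy MDP below is randomly generated and merely stands in for the reservoir system, whose state space is far too large for this exact treatment; that is precisely why the study resorts to function approximators.

```python
import numpy as np

rng = np.random.default_rng(1)
nS, nA, gamma = 4, 2, 0.9                    # toy sizes, illustrative only
P = rng.random((nA, nS, nS))
P /= P.sum(axis=2, keepdims=True)            # P[a, s, s'] transition probabilities
R = rng.random((nA, nS))                     # R[a, s] expected reward

policy = np.zeros(nS, dtype=int)
for _ in range(100):
    # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly
    P_pi = P[policy, np.arange(nS)]
    r_pi = R[policy, np.arange(nS)]
    v = np.linalg.solve(np.eye(nS) - gamma * P_pi, r_pi)
    # Policy improvement: act greedily with respect to Q(a, s)
    q = R + gamma * (P @ v)
    new_policy = q.argmax(axis=0)
    if np.array_equal(new_policy, policy):   # stable policy => optimal
        break
    policy = new_policy
```

ADP replaces the exact evaluation step with value functions fitted from simulated inflow scenarios, but the evaluate-then-improve loop is the same.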
SfM with MRFs: discrete-continuous optimization for large-scale structure from motion.
Crandall, David J; Owens, Andrew; Snavely, Noah; Huttenlocher, Daniel P
2013-12-01
Recent work in structure from motion (SfM) has built 3D models from large collections of images downloaded from the Internet. Many approaches to this problem use incremental algorithms that solve progressively larger bundle adjustment problems. These incremental techniques scale poorly as the image collection grows, and can suffer from drift or local minima. We present an alternative framework for SfM based on finding a coarse initial solution using hybrid discrete-continuous optimization and then improving that solution using bundle adjustment. The initial optimization step uses a discrete Markov random field (MRF) formulation, coupled with a continuous Levenberg-Marquardt refinement. The formulation naturally incorporates various sources of information about both the cameras and points, including noisy geotags and vanishing point (VP) estimates. We test our method on several large-scale photo collections, including one with measured camera positions, and show that it produces models that are similar to or better than those produced by incremental bundle adjustment, but more robustly and in a fraction of the time. PMID:24136425
Optimization and large scale computation of an entropy-based moment closure
Hauck, Cory D.; Hill, Judith C.; Garrett, C. Kristopher
2015-09-10
We present computational advances and results in the implementation of an entropy-based moment closure, M_{N}, in the context of linear kinetic equations, with an emphasis on heterogeneous and large-scale computing platforms. Entropy-based closures are known in several cases to yield more accurate results than closures based on standard spectral approximations, such as P_{N}, but the computational cost is generally much higher and often prohibitive. Several optimizations are introduced to improve the performance of entropy-based algorithms over previous implementations. These optimizations include the use of GPU acceleration and the exploitation of the mathematical properties of spherical harmonics, which are used as test functions in the moment formulation. To test the emerging high-performance computing paradigm of communication bound simulations, we present timing results at the largest computational scales currently available. Lastly, these results show, in particular, load balancing issues in scaling the M_{N} algorithm that do not appear for the P_{N} algorithm. We also observe that in weak scaling tests, the ratio in time to solution of M_{N} to P_{N} decreases.
Hu, Eric Y.; Bouteiller, Jean-Marie C.; Song, Dong; Baudry, Michel; Berger, Theodore W.
2015-01-01
Chemical synapses comprise a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model capable of capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compare its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating complex nonlinear dynamics that were represented in the original mechanistic model and provide a method to replicate complex and diverse synaptic transmission within neuron network simulations. PMID:26441622
Characterizing the nonlinear growth of large-scale structure in the Universe
Coles; Chiang
2000-07-27
The local Universe displays a rich hierarchical pattern of galaxy clusters and superclusters. The early Universe, however, was almost smooth, with only slight 'ripples' as seen in the cosmic microwave background radiation. Models of the evolution of cosmic structure link these observations through the effect of gravity, because the small initially overdense fluctuations are predicted to attract additional mass as the Universe expands. During the early stages of this expansion, the ripples evolve independently, like linear waves on the surface of deep water. As the structures grow in mass, they interact with each other in nonlinear ways, more like waves breaking in shallow water. We have recently shown how cosmic structure can be characterized by phase correlations associated with these nonlinear interactions, but it was not clear how to use that information to obtain quantitative insights into the growth of structures. Here we report a method of revealing phase information, and show quantitatively how this relates to the formation of filaments, sheets and clusters of galaxies by nonlinear collapse. We develop a statistical method based on information entropy to separate linear from nonlinear effects, and thereby are able to disentangle those aspects of galaxy clustering that arise from initial conditions (the ripples) from the subsequent dynamical evolution.
Optimizing Cluster Heads for Energy Efficiency in Large-Scale Heterogeneous Wireless Sensor Networks
Gu, Yi; Wu, Qishi; Rao, Nageswara S. V.
2010-01-01
Many complex sensor network applications require deploying a large number of inexpensive and small sensors in a vast geographical region to achieve quality through quantity. Hierarchical clustering is generally considered as an efficient and scalable way to facilitate the management and operation of such large-scale networks and minimize the total energy consumption for prolonged lifetime. Judicious selection of cluster heads for data integration and communication is critical to the success of applications based on hierarchical sensor networks organized as layered clusters. We investigate the problem of selecting sensor nodes in a predeployed sensor network to be the cluster heads to minimize the total energy needed for data gathering. We rigorously derive an analytical formula to optimize the number of cluster heads in sensor networks under uniform node distribution, and propose a Distance-based Crowdedness Clustering algorithm to determine the cluster heads in sensor networks under general node distribution. The results from an extensive set of experiments on a large number of simulated sensor networks illustrate the performance superiority of the proposed solution over the clustering schemes based on the k-means algorithm.
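A bare-bones version of the k-means baseline mentioned above can be sketched as follows, with the head of each cluster taken as the node nearest its centroid. The node positions, counts, and squared-distance energy proxy are all invented for illustration and are not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
nodes = rng.random((200, 2))    # hypothetical sensor positions in a unit square
k = 8                           # number of cluster heads (illustrative)

# Lloyd's k-means on node positions
centroids = nodes[rng.choice(len(nodes), size=k, replace=False)]
for _ in range(100):
    dist = np.linalg.norm(nodes[:, None, :] - centroids[None, :, :], axis=2)
    labels = dist.argmin(axis=1)          # assign each node to nearest centroid
    new = np.array([nodes[labels == j].mean(axis=0) if np.any(labels == j)
                    else centroids[j] for j in range(k)])
    if np.allclose(new, centroids):       # converged
        break
    centroids = new

# Cluster head = node nearest each centroid; energy proxy = squared distances
heads = [int(np.linalg.norm(nodes - c, axis=1).argmin()) for c in centroids]
energy = sum(float(np.sum((nodes[labels == j] - nodes[heads[j]]) ** 2))
             for j in range(k))
```

The analytical formula in the abstract optimizes k itself; a sketch like this only assigns heads for a given k.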
The optimization of large-scale density gradient isolation of human islets.
Robertson, G S; Chadwick, D R; Contractor, H; James, R F; London, N J
1993-01-01
The use of the COBE 2991 cell processor (COBE Laboratories, Colorado) for large-scale islet purification using discontinuous density gradients has been widely adopted. It minimizes many of the problems such as wall effects, normally encountered during centrifugation, and avoids the vortexing at interfaces that occurs during acceleration and deceleration by allowing the gradient to be formed and the islet-containing interface to be collected while continuing to spin. We have produced cross-sectional profiles of the 2991 bag during spinning which allow the area of interfaces in such step gradients to be calculated. This allows the volumes of the gradient media layers loaded on the machine to be adjusted in order to maximize the area of the gradient interfaces. However, even using the maximal areas possible (144.5 cm2), clogging of tissue at such interfaces limits the volume of digest which can be separated on one gradient to 15 ml. We have shown that a linear continuous density gradient can be produced within the 2991 bag, that allows as much as 40 ml of digest to be successfully purified. Such a system combines the intrinsic advantages of the 2991 with those of continuous density gradients and provides the optimal method for density-dependent islet purification. PMID:8219265
Library for Nonlinear Optimization
2001-10-09
OPT++ is a C++ object-oriented library for nonlinear optimization. It incorporates an improved implementation of an existing capability and two new algorithmic capabilities based on existing journal articles and freely available software.
A large-scale nonlinear eigensolver for the analysis of dispersive nanostructures
NASA Astrophysics Data System (ADS)
Guo, Hua; Arbenz, Peter; Oswald, Benedikt
2013-08-01
We introduce the electromagnetic eigenmodal solver code FemaxxNano for the numerical analysis of nanometer-structured optical systems, a scientific field generally known as nano-optics. FemaxxNano solves the electric field vector wave equation and calculates the electromagnetic eigenmodes of nearly arbitrary 3-dimensional resonators, embedded in free space, vacuum or a background medium. Here, the study of the interaction between nanometer-sized metallic structures and light is at the heart of the physical problem. Since metals in the optical region of the electromagnetic spectrum are highly dispersive and, thus, dissipative, dielectric media, we eventually obtain a nonlinear eigenvalue problem. We discretize the electromagnetic eigenvalue problem with the finite element method (FEM) in 3-dimensional space and on unstructured tetrahedral grids. We introduce a fully iterative scheme to solve the nonlinear problem for complex coefficient matrices that depend on wavelength. We investigate the properties of the algorithm in detail and demonstrate its performance by analyzing a nanometer-sized optical dimer structure, a specific type of optical antenna, on distributed-memory parallel computers.
The Modified HZ Conjugate Gradient Algorithm for Large-Scale Nonsmooth Optimization
Yuan, Gonglin; Sheng, Zhou; Liu, Wenjie
2016-01-01
In this paper, the Hager and Zhang (HZ) conjugate gradient (CG) method and the modified HZ (MHZ) CG method are presented for large-scale nonsmooth convex minimization. Under some mild conditions, convergence results for the proposed methods are established. Numerical results show that the presented methods are more efficient for large-scale nonsmooth problems; several problems are tested, with dimensions of up to 100,000 variables. PMID:27780245
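For reference, the Hager-Zhang direction update looks as follows on a toy strictly convex quadratic. This is the plain smooth HZ formula with an exact line search, not the authors' nonsmooth MHZ variant, and the problem data is made up.

```python
import numpy as np

Q = np.diag(np.arange(1.0, 51.0))   # illustrative SPD Hessian, condition number 50
b = np.ones(50)

def grad(x):                        # gradient of f(x) = 0.5 x^T Q x - b^T x
    return Q @ x - b

x = np.zeros(50)
g = grad(x)
d = -g
for _ in range(200):
    alpha = -(g @ d) / (d @ Q @ d)  # exact line search for a quadratic
    x = x + alpha * d
    g_new = grad(x)
    if np.linalg.norm(g_new) < 1e-10:
        g = g_new
        break
    y = g_new - g
    dy = d @ y
    # Hager-Zhang update:
    # beta = (y - 2 d ||y||^2 / (d^T y))^T g_new / (d^T y)
    beta = ((y - 2.0 * d * (y @ y) / dy) @ g_new) / dy
    d = -g_new + beta * d
    g = g_new

# x now solves Q x = b to high accuracy
```

Nonsmooth variants such as MHZ replace the raw (sub)gradients with smoothed quantities, e.g. from a Moreau-Yosida regularization, but the direction update has the same shape.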
Bayesian non-linear large-scale structure inference of the Sloan Digital Sky Survey Data Release 7
NASA Astrophysics Data System (ADS)
Jasche, Jens; Kitaura, Francisco S.; Li, Cheng; Enßlin, Torsten A.
2010-11-01
In this work, we present the first non-linear, non-Gaussian full Bayesian large-scale structure analysis of the cosmic density field conducted so far. The density inference is based on the Sloan Digital Sky Survey (SDSS) Data Release 7, which covers the northern galactic cap. We employ a novel Bayesian sampling algorithm, which enables us to explore the extremely high dimensional non-Gaussian, non-linear lognormal Poissonian posterior of the three-dimensional density field conditional on the data. These techniques are efficiently implemented in the Hamiltonian Density Estimation and Sampling (HADES) computer algorithm and permit the precise recovery of poorly sampled objects and non-linear density fields. The non-linear density inference is performed on a 750-Mpc cube with roughly 3-Mpc grid resolution, while accounting for systematic effects, introduced by survey geometry and selection function of the SDSS, and the correct treatment of a Poissonian shot noise contribution. Our high-resolution results represent remarkably well the cosmic web structure of the cosmic density field. Filaments, voids and clusters are clearly visible. Further, we also conduct a dynamical web classification and estimate the web-type posterior distribution conditional on the SDSS data.
Nonlinear dynamical systems and control for large-scale, hybrid, and network systems
NASA Astrophysics Data System (ADS)
Hui, Qing
In this dissertation, we present several main research thrusts involving thermodynamic stabilization via energy dissipating hybrid controllers and nonlinear control of network systems. Specifically, a novel class of fixed-order, energy-based hybrid controllers is presented as a means for achieving enhanced energy dissipation in Euler-Lagrange, lossless, and dissipative dynamical systems. These dynamic controllers combine a logical switching architecture with continuous dynamics to guarantee that the system plant energy is strictly decreasing across switching. In addition, we construct hybrid dynamic controllers that guarantee the closed-loop system is consistent with basic thermodynamic principles. In particular, the existence of an entropy function for the closed-loop system is established that satisfies a hybrid Clausius-type inequality. Special cases of energy-based hybrid controllers involving state-dependent switching are described, and the framework is applied to aerospace system models. The overall framework demonstrates that energy-based hybrid resetting controllers provide an extremely efficient mechanism for dissipating energy in nonlinear dynamical systems. Next, we present finite-time coordination controllers for multiagent network systems. Recent technological advances in communications and computation have spurred a broad interest in autonomous, adaptable vehicle formations. Distributed decision-making for coordination of networks of dynamic agents addresses a broad area of applications including cooperative control of unmanned air vehicles, microsatellite clusters, mobile robotics, and congestion control in communication networks. In this dissertation we focus on finite-time consensus protocols for networks of dynamic agents with undirected information flow. The proposed controller architectures are predicated on the recently developed notion of system thermodynamics resulting in thermodynamically consistent continuous controller architectures
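A common finite-time consensus protocol for undirected networks of the kind discussed above drives each agent by a fractional power of its neighbors' state differences. The sketch below (invented ring topology and Euler time stepping, not the dissertation's thermodynamic construction) shows the agents reaching the average of their initial states, which the protocol preserves.

```python
import numpy as np

n, alpha, dt = 6, 0.5, 0.002
A = np.zeros((n, n))                     # adjacency of an undirected 6-agent ring
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

x = np.arange(n, dtype=float)            # initial states 0..5
mean0 = x.mean()                         # average is invariant under the protocol
for _ in range(20000):
    diff = x[None, :] - x[:, None]       # diff[i, j] = x_j - x_i
    # x_i' = sum_j a_ij * sign(x_j - x_i) * |x_j - x_i|**alpha, alpha in (0, 1)
    x = x + dt * (A * np.sign(diff) * np.abs(diff) ** alpha).sum(axis=1)

# All states converge (numerically) to mean0
```

The exponent alpha < 1 is what yields finite, rather than merely asymptotic, settling time in the continuous-time analysis.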
Characteristic-based non-linear simulation of large-scale standing-wave thermoacoustic engine.
Abd El-Rahman, Ahmed I; Abdel-Rahman, Ehab
2014-08-01
A few linear theories [Swift, J. Acoust. Soc. Am. 84(4), 1145-1180 (1988); Swift, J. Acoust. Soc. Am. 92(3), 1551-1563 (1992); Olson and Swift, J. Acoust. Soc. Am. 95(3), 1405-1412 (1994)] and numerical models, based on low-Mach number analysis [Worlikar and Knio, J. Comput. Phys. 127(2), 424-451 (1996); Worlikar et al., J. Comput. Phys. 144(2), 199-324 (1996); Hireche et al., Canadian Acoust. 36(3), 164-165 (2008)], describe the flow dynamics of standing-wave thermoacoustic engines, but almost no simulation results are available that enable the prediction of the behavior of practical engines experiencing significant temperature gradient between the stack ends and thus producing large-amplitude oscillations. Here, a one-dimensional non-linear numerical simulation based on the method of characteristics to solve the unsteady compressible Euler equations is reported. Formulation of the governing equations, implementation of the numerical method, and application of the appropriate boundary conditions are presented. The calculation uses explicit time integration along with deduced relationships, expressing the friction coefficient and the Stanton number for oscillating flow inside circular ducts. Helium, a mixture of Helium and Argon, and Neon are used for system operation at mean pressures of 13.8, 9.9, and 7.0 bars, respectively. The self-induced pressure oscillations are accurately captured in the time domain, and then transferred into the frequency domain, distinguishing the pressure signals into fundamental and harmonic responses. The results obtained are compared with reported experimental works [Swift, J. Acoust. Soc. Am. 92(3), 1551-1563 (1992); Olson and Swift, J. Acoust. Soc. Am. 95(3), 1405-1412 (1994)] and the linear theory, showing better agreement with the measured values, particularly in the non-linear regime of the dynamic pressure response. PMID:25096100
Design optimization studies for large-scale contoured beam deployable satellite antennas
NASA Astrophysics Data System (ADS)
Tanaka, Hiroaki
2006-05-01
Satellite communications systems over the past two decades have become more sophisticated and evolved new applications that require much higher flux densities. These new requirements to provide high data rate services to very small user terminals have in turn led to the need for large-aperture space antenna systems with higher gain. Conventional parabolic reflectors constructed of metal have become, over time, too massive to support these new missions in a cost-effective manner and have also posed problems of fitting within the constrained volume of launch vehicles. Designers of new space antenna systems have thus begun to explore new design options. These design options for advanced space communications networks include such alternatives as inflatable antennas using polyimide materials, antennas constructed of piezo-electric materials, phased array antenna systems (especially in the EHF bands) and deployable antenna systems constructed of wire mesh or cabling systems. This article updates studies being conducted in Japan of such deployable space antenna systems [H. Tanaka, M.C. Natori, Shape control of space antennas consisting of cable networks, Acta Astronautica 55 (2004) 519-527]. In particular, this study shows how the design of such large-scale deployable antenna systems can be optimized based on various factors, including the frequency bands to be employed with such innovative reflector designs. Specifically, this study investigates how contoured-beam space antennas can be effectively constructed out of so-called cable networks or mesh-like reflectors. The design can be accomplished via "plane wave synthesis" and by the "force density method", iterating the design to achieve the optimum solution. We have concluded that the best design is achieved by plane wave synthesis. Further, we demonstrate that the nodes on the reflector are best determined by a pseudo-inverse calculation of the matrix that can be interpolated so as to achieve the minimum
Parallel Optimization of Polynomials for Large-scale Problems in Stability and Control
NASA Astrophysics Data System (ADS)
Kamyar, Reza
In this thesis, we focus on some of the NP-hard problems in control theory. Thanks to converse Lyapunov theory, these problems can often be modeled as optimization over polynomials. To avoid the problem of intractability, we establish a trade-off between accuracy and complexity. In particular, we develop a sequence of tractable optimization problems --- in the form of Linear Programs (LPs) and/or Semi-Definite Programs (SDPs) --- whose solutions converge to the exact solution of the NP-hard problem. However, the computational and memory complexity of these LPs and SDPs grow exponentially with the progress of the sequence, meaning that improving the accuracy of the solutions requires solving SDPs with tens of thousands of decision variables and constraints. Setting up and solving such problems is a significant challenge. The existing optimization algorithms and software are designed only for desktop computers or small cluster computers --- machines which do not have sufficient memory for solving such large SDPs. Moreover, the speed-up of these algorithms does not scale beyond dozens of processors. This is why we seek parallel algorithms for setting up and solving large SDPs on large clusters and/or supercomputers. We propose parallel algorithms for stability analysis of two classes of systems: 1) linear systems with a large number of uncertain parameters; 2) nonlinear systems defined by polynomial vector fields. First, we develop a distributed parallel algorithm which applies Polya's and/or Handelman's theorems to some variants of parameter-dependent Lyapunov inequalities with parameters defined over the standard simplex. The result is a sequence of SDPs which possess a block-diagonal structure. We then develop a parallel SDP solver which exploits this structure in order to map the computation, memory and communication to a distributed parallel environment. Numerical tests on a supercomputer demonstrate the ability of the algorithm to
NASA Astrophysics Data System (ADS)
Nikitenkova, S.; Singh, N.; Stepanyants, Y.
2015-12-01
In this paper, we revisit the problem of modulation stability of quasi-monochromatic wave-trains propagating in media with double dispersion occurring both at small and large wavenumbers. We start with the shallow-water equations derived by Shrira [Izv., Acad. Sci., USSR, Atmos. Ocean. Phys. (Engl. Transl.) 17, 55-59 (1981)], which describe both surface and internal long waves in a rotating fluid. The small-scale (Boussinesq-type) dispersion is assumed to be weak, whereas the large-scale (Coriolis-type) dispersion is considered without any restriction. For waves propagating in one direction only, the considered set of equations reduces to the Gardner-Ostrovsky equation, which is applicable only within a finite range of wavenumbers. We derive the nonlinear Schrödinger equation (NLSE) which describes the evolution of narrow-band wave-trains and show that, within a more general bi-directional equation, the wave-trains, similar to those derived from the Ostrovsky equation, are also modulationally stable at relatively small wavenumbers k < kc and unstable at k > kc, where kc is some critical wavenumber. The NLSE derived here has a wider range of applicability: it is valid for arbitrarily small wavenumbers. We present an analysis of the coefficients of the NLSE for different signs of the coefficients of the governing equation and compare them with those derived from the Ostrovsky equation. The analysis shows that for weakly dispersive waves, in the range of parameters where the Gardner-Ostrovsky equation is valid, the cubic nonlinearity does not contribute to the nonlinear coefficient of the NLSE; therefore, the NLSE can be correctly derived from the Ostrovsky equation.
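For reference, the NLSE discussed in this abstract is of the generic form (this is the standard textbook form, not the paper's specific coefficients, which depend on the carrier wavenumber and on the signs of the governing equation's coefficients):

```latex
i\,\frac{\partial A}{\partial t} + \beta\,\frac{\partial^2 A}{\partial x^2} + \gamma\,|A|^2 A = 0,
```

where A is the complex envelope of the wave-train. By the Lighthill criterion, the plane-wave solution is modulationally unstable when \(\beta\gamma > 0\) and stable when \(\beta\gamma < 0\); the critical wavenumber \(k_c\) mentioned above corresponds to a sign change of the product \(\beta\gamma\) as a function of the carrier wavenumber.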
Chen, Hanbo; Liu, Tao; Zhao, Yu; Zhang, Tuo; Li, Yujie; Li, Meng; Zhang, Hongmiao; Kuang, Hui; Guo, Lei; Tsien, Joe Z; Liu, Tianming
2015-07-15
Tractography based on diffusion tensor imaging (DTI) data has been used as a tool by a large number of recent studies to investigate the structural connectome. Despite its great success in offering unique 3D neuroanatomy information, DTI is an indirect observation with limited resolution and accuracy, and its reliability is still unclear. Thus, it is essential to answer this fundamental question: how reliable is DTI tractography in constructing a large-scale connectome? To answer this question, we employed neuron tracing data of 1772 experiments on the mouse brain released by the Allen Mouse Brain Connectivity Atlas (AMCA) as the ground truth to assess the performance of DTI tractography in inferring white matter fiber pathways and inter-regional connections. For the first time in the neuroimaging field, the performance of whole-brain DTI tractography in constructing a large-scale connectome has been evaluated by comparison with tracing data. Our results suggested that only with optimized tractography parameters and an appropriate scale of brain parcellation scheme can DTI produce relatively reliable fiber pathways and a large-scale connectome. Meanwhile, a considerable number of errors were also identified in the optimized DTI tractography results, which we believe could potentially be alleviated by efforts to develop better DTI tractography approaches. In this scenario, our framework could serve as a reliable and quantitative test bed to identify errors in tractography results, which will facilitate the development of such novel tractography algorithms and the selection of optimal parameters. PMID:25953631
NASA Astrophysics Data System (ADS)
Shi, Huaitao; Liu, Jianchang; Wu, Yuhou; Zhang, Ke; Zhang, Lixiu; Xue, Peng
2016-04-01
Timely and accurate fault diagnosis is significant for improving the dependability of industrial processes. In this study, fault diagnosis of nonlinear and large-scale processes by variable-weighted kernel Fisher discriminant analysis (KFDA) based on improved biogeography-based optimisation (IBBO) is proposed, referred to as IBBO-KFDA, where IBBO is used to determine the parameters of variable-weighted KFDA, and variable-weighted KFDA is used to solve the multi-classification overlapping problem. The main contributions of this work, which further improve the performance of KFDA for fault diagnosis, are four-fold. First, a nonlinear fault diagnosis approach with variable-weighted KFDA is developed for maximising separation between overlapping fault samples. Second, kernel parameters and feature selection of variable-weighted KFDA are simultaneously optimised using IBBO. Third, a single fitness function that combines the erroneous diagnosis rate with feature cost is created and serves as the target function in the optimisation problem, and a novel mixed kernel function is introduced to improve the classification capability in the feature space and the diagnosis accuracy of IBBO-KFDA. Fourth, an IBBO approach is developed to obtain better solution quality and faster convergence speed. The proposed IBBO-KFDA method is first used on Tennessee Eastman process benchmark data sets to validate its feasibility and efficiency, and is then applied to diagnose faults of an automation gauge control system. Simulation results demonstrate that IBBO-KFDA can obtain better kernel parameters and feature vectors with a lower computing cost, higher diagnosis accuracy and better real-time capacity.
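The "mixed kernel" idea above can be illustrated with a toy sketch (the function names, weights and parameter values here are hypothetical, not the paper's formulation): a convex combination of an RBF kernel and a polynomial kernel remains a valid positive semi-definite kernel, and the mixing weight is exactly the kind of parameter an optimiser such as IBBO could tune.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    # Gaussian (RBF) kernel matrix: strong local discrimination
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def poly_kernel(X, Y, degree=2, c=1.0):
    # Polynomial kernel matrix: better global extrapolation
    return (X @ Y.T + c) ** degree

def mixed_kernel(X, Y, w=0.7, gamma=0.5, degree=2):
    # Convex combination of a local (RBF) and a global (polynomial) kernel;
    # the mixing weight w is a tunable hyperparameter, so an optimiser
    # like biogeography-based optimisation could search over (w, gamma, degree).
    return w * rbf_kernel(X, Y, gamma) + (1 - w) * poly_kernel(X, Y, degree)

X = np.random.default_rng(0).normal(size=(5, 3))
K = mixed_kernel(X, X)   # symmetric, positive semi-definite Gram matrix
```

Because both summands are valid kernels and the weights are non-negative, the mixture is guaranteed to be a valid kernel, which is what makes this construction safe to drop into KFDA.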
Strategic optimization of large-scale vertical closed-loop shallow geothermal systems
NASA Astrophysics Data System (ADS)
Hecht-Méndez, J.; de Paly, M.; Beck, M.; Blum, P.; Bayer, P.
2012-04-01
Vertical closed-loop geothermal systems or ground source heat pump (GSHP) systems with multiple vertical borehole heat exchangers (BHEs) are attractive technologies that provide heating and cooling to large facilities such as hotels, schools, big office buildings or district heating systems. The worldwide number of installed systems is steadily increasing. By running arrays of multiple BHEs, the energy demand of a given facility is fulfilled by exchanging heat with the ground. For practical and technical reasons, square arrays of BHEs are commonly used, and the total energy extraction from the subsurface is accomplished by an equal operation of each BHE. Moreover, standard design practices disregard the presence of groundwater flow. We present a simulation-optimization approach that is able to regulate the individual operation of multiple BHEs, depending on the given hydro-geothermal conditions. The developed approach optimizes the overall performance of the geothermal system while mitigating the environmental impact. As an example, a synthetic case with a geothermal system using 25 BHEs for supplying a seasonal heating energy demand is defined. The optimization approach is evaluated for finding optimal energy extractions for 15 scenarios with different specific constant groundwater flow velocities. Ground temperature development is simulated using the optimal energy extractions and contrasted against standard application. It is demonstrated that optimized systems always even out the ground temperature distribution and generate smaller subsurface temperature changes than non-optimized ones. Mean underground temperature changes within the studied BHE field are between 13% and 24% smaller when the optimized system is used. By applying the optimized energy extraction patterns, the temperature of the heat carrier fluid in the BHE, which controls the overall performance of the system, can also be raised by more than 1 °C.
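The load-allocation idea behind such a simulation-optimization scheme can be sketched in miniature, assuming a hypothetical linear thermal-response matrix (the paper's coupled transport model is far richer): distribute a fixed total energy extraction among the BHEs so that the peak subsurface temperature change is minimised, which is a linear program.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n = 5                      # number of borehole heat exchangers (toy example)
Q_total = 10.0             # required total energy extraction (arbitrary units)
q_max = 4.0                # per-BHE capacity

# Hypothetical thermal response matrix: R[i, j] = temperature change at
# BHE i caused by a unit extraction at BHE j (self-response dominates).
R = 0.1 * rng.random((n, n)) + 0.5 * np.eye(n)

# Variables: x = [q_1 ... q_n, t]; minimize the peak temperature change t.
c = np.r_[np.zeros(n), 1.0]
A_ub = np.c_[R, -np.ones((n, 1))]              # R q - t <= 0 (elementwise)
b_ub = np.zeros(n)
A_eq = np.r_[np.ones(n), 0.0].reshape(1, -1)   # sum(q) = Q_total
b_eq = [Q_total]
bounds = [(0, q_max)] * n + [(0, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
q_opt, t_peak = res.x[:n], res.x[n]            # per-BHE extraction pattern
```

Unequal extraction rates emerge naturally: BHEs sitting in favourable positions (e.g. upstream of groundwater flow in the paper's scenarios) carry more load, which is the qualitative behaviour the study reports.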
NASA Astrophysics Data System (ADS)
Bechtold, M.; Tiemeyer, B.; Laggner, A.; Leppelt, T.; Frahm, E.; Belting, S.
2014-04-01
Fluxes of the three main greenhouse gases (GHG) CO2, CH4 and N2O from peat and other organic soils are strongly controlled by water table depth. Information about the spatial distribution of water level is thus a crucial input parameter when upscaling GHG emissions to large scales. Here, we investigate the potential of statistical modeling for the regionalization of water levels in organic soils when data covers only a small fraction of the peatlands of the final map. Our study area is Germany. Phreatic water level data from 53 peatlands in Germany were compiled in a new dataset comprising 1094 dip wells and 7155 years of data. For each dip well, numerous possible predictor variables were determined using nationally available data sources, which included information about land cover, ditch network, protected areas, topography, peatland characteristics and climatic boundary conditions. We applied boosted regression trees to identify dependencies between predictor variables and dip well specific long-term annual mean water level (WL) as well as a transformed form of it (WLt). The latter was obtained by assuming a hypothetical GHG transfer function and is linearly related to GHG emissions. Our results demonstrate that model calibration on WLt is superior. It increases the explained variance of the water level in the sensitive range for GHG emissions and avoids model bias in subsequent GHG upscaling. The final model explained 45% of WLt variance and was built on nine predictor variables that are based on information about land cover, peatland characteristics, drainage network, topography and climatic boundary conditions. Their individual effects on WLt and the observed parameter interactions provide insights into natural and anthropogenic boundary conditions that control water levels in organic soils. Our study also demonstrates that a large fraction of the observed WLt variance cannot be explained by nationally available predictor variables and that predictors with
Optimized Large-scale CMB Likelihood and Quadratic Maximum Likelihood Power Spectrum Estimation
NASA Astrophysics Data System (ADS)
Gjerløw, E.; Colombo, L. P. L.; Eriksen, H. K.; Górski, K. M.; Gruppuso, A.; Jewell, J. B.; Plaszczynski, S.; Wehus, I. K.
2015-11-01
We revisit the problem of exact cosmic microwave background (CMB) likelihood and power spectrum estimation with the goal of minimizing computational costs through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al., and here we develop it into a fully functioning computational framework for large-scale polarization analysis, adopting WMAP as a working example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors, and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loeve and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked WMAP sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum error increase of any single multipole of 3.8% at ℓ ≤ 32 and a maximum shift in the mean values of a joint distribution of an amplitude-tilt model of 0.006σ. This compression reduces the computational cost of a single likelihood evaluation by a factor of 5, from 38 to 7.5 CPU seconds, and it also results in a more robust likelihood by implicitly regularizing nearly degenerate modes. Finally, we use the same compression framework to formulate a numerically stable and computationally efficient variation of the Quadratic Maximum Likelihood implementation, which requires less than 3 GB of memory and 2 CPU minutes per iteration for ℓ ≤ 32, rendering low-ℓ QML CMB power spectrum analysis fully tractable on a standard laptop.
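The signal-to-noise eigenvector compression described above can be sketched as a generalized eigenproblem (the covariances, dimensions and data here are toy placeholders, not WMAP's): solve S v = λ N v, keep the highest-S/N modes, and project the data onto them.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
npix = 50                 # toy "map" size

# Toy signal and noise covariances (both symmetric positive definite).
A = rng.normal(size=(npix, npix))
S = A @ A.T / npix + 1e-3 * np.eye(npix)   # signal covariance
N = np.diag(1.0 + rng.random(npix))        # diagonal noise covariance

# Signal-to-noise eigenvector basis: solve S v = lambda N v.
lam, V = eigh(S, N)                        # generalized symmetric eigenproblem
order = np.argsort(lam)[::-1]              # sort by descending S/N
k = 10                                     # keep the k highest-S/N modes
B = V[:, order[:k]]                        # compression operator (npix x k)

d = rng.multivariate_normal(np.zeros(npix), S + N)   # simulated data vector
d_c = B.T @ d                              # compressed data, length k
```

Likelihood evaluations then operate on the length-k vector instead of the full pixel vector; discarding low-S/N modes is also what implicitly regularizes the nearly degenerate directions mentioned in the abstract.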
Large-Scale Multi-Objective Optimization for the Management of Seawater Intrusion, Santa Barbara, CA
NASA Astrophysics Data System (ADS)
Stanko, Z. P.; Nishikawa, T.; Paulinski, S. R.
2015-12-01
The City of Santa Barbara, located in coastal southern California, is concerned that excessive groundwater pumping will lead to chloride (Cl) contamination of its groundwater system from seawater intrusion (SWI). In addition, the city wishes to estimate the effect of continued pumping on the groundwater basin under a variety of initial and climatic conditions. A SEAWAT-based groundwater-flow and solute-transport model of the Santa Barbara groundwater basin was optimized to produce optimal pumping schedules assuming 5 different scenarios. Borg, a multi-objective genetic algorithm, was coupled with the SEAWAT model to identify optimal management strategies. The optimization problems were formulated as multi-objective so that the tradeoffs between maximizing pumping, minimizing SWI, and minimizing drawdowns can be examined by the city. Decisions can then be made on a pumping schedule in light of current preferences and climatic conditions. Borg was used to produce Pareto optimal results for all 5 scenarios, which vary in their initial conditions (high water levels, low water levels, or current basin state), simulated climate (normal or drought conditions), and problem formulation (objective equations and decision-variable aggregation). Results show mostly well-defined Pareto surfaces with a few singularities. Furthermore, the results identify the precise pumping schedule per well that was suitable given the desired restriction on drawdown and Cl concentrations. A system of decision-making is then possible based on various observations of the basin's hydrologic states and climatic trends without having to run any further optimizations. In addition, an assessment of selected Pareto-optimal solutions was analyzed with sensitivity information using the simulation model alone. A wide range of possible groundwater pumping scenarios is available and depends heavily on the future climate scenarios and the Pareto-optimal solution selected while managing the pumping wells.
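Multi-objective results such as these are reported as Pareto surfaces; a minimal non-dominated filter (illustrative only, not the Borg algorithm itself) shows what "Pareto optimal" means operationally, using a toy two-objective trade-off in place of the pumping/chloride/drawdown objectives.

```python
import numpy as np

def pareto_front(F):
    """Return a boolean mask of non-dominated rows of F (all objectives minimized)."""
    n = F.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        if not mask[i]:
            continue
        # Point j dominates i if it is <= in every objective and < in at least one.
        dominates = (F <= F[i]).all(axis=1) & (F < F[i]).any(axis=1)
        if dominates.any():
            mask[i] = False
    return mask

# Toy trade-off: objective 1 = -pumping (maximize pumping), objective 2 = chloride mass
rng = np.random.default_rng(3)
F = rng.random((200, 2))
front = F[pareto_front(F)]     # the non-dominated solutions
```

A decision-maker then picks one point from `front` according to current preferences and climatic conditions, which is exactly the decision process the abstract describes.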
NASA Astrophysics Data System (ADS)
Bechtold, M.; Tiemeyer, B.; Laggner, A.; Leppelt, T.; Frahm, E.; Belting, S.
2014-09-01
Fluxes of the three main greenhouse gases (GHG) CO2, CH4 and N2O from peat and other soils with high organic carbon contents are strongly controlled by water table depth. Information about the spatial distribution of water level is thus a crucial input parameter when upscaling GHG emissions to large scales. Here, we investigate the potential of statistical modeling for the regionalization of water levels in organic soils when data covers only a small fraction of the peatlands of the final map. Our study area is Germany. Phreatic water level data from 53 peatlands in Germany were compiled in a new data set comprising 1094 dip wells and 7155 years of data. For each dip well, numerous possible predictor variables were determined using nationally available data sources, which included information about land cover, ditch network, protected areas, topography, peatland characteristics and climatic boundary conditions. We applied boosted regression trees to identify dependencies between predictor variables and dip-well-specific long-term annual mean water level (WL) as well as a transformed form (WLt). The latter was obtained by assuming a hypothetical GHG transfer function and is linearly related to GHG emissions. Our results demonstrate that model calibration on WLt is superior. It increases the explained variance of the water level in the sensitive range for GHG emissions and avoids model bias in subsequent GHG upscaling. The final model explained 45% of WLt variance and was built on nine predictor variables that are based on information about land cover, peatland characteristics, drainage network, topography and climatic boundary conditions. Their individual effects on WLt and the observed parameter interactions provide insight into natural and anthropogenic boundary conditions that control water levels in organic soils. Our study also demonstrates that a large fraction of the observed WLt variance cannot be explained by nationally available predictor variables and
Taming of the Slew: Optimization of the Large Scale X-Ray Surveys with Observing Strategy
NASA Technical Reports Server (NTRS)
Ptak, Andrew
2010-01-01
We will discuss simulations intended to address the relative efficiency of observing large areas with a slew observing strategy as opposed to pointing at fields individually. We will emphasize observing with the Wide Field X-ray Telescope (WFXT) but will also discuss optimization of observing strategy with the IXO Wide-Field Imager (WFI) and eRosita. The slew survey simulation is being implemented by translating the pointing direction along an arbitrary direction, which addresses the impact of smoothing the telescope response during a given slew. The simulation software is also being designed to allow the visibility of the sky to be incorporated, in which case long-term observing plans could be developed to optimize the total sky coverage at a given depth and spatial resolution.
Large Scale Multi-area Static/Dynamic Economic Dispatch using Nature Inspired Optimization
NASA Astrophysics Data System (ADS)
Pandit, Manjaree; Jain, Kalpana; Dubey, Hari Mohan; Singh, Rameshwar
2016-07-01
Economic dispatch (ED) ensures that the generation allocation to the power units is carried out such that the total fuel cost is minimized and all the operating equality/inequality constraints are satisfied. Classical ED does not take transmission constraints into consideration, but in the present restructured power systems the tie-line limits play a very important role in deciding operational policies. ED is a dynamic problem which is performed on-line in the central load dispatch centre with changing load scenarios. The dynamic multi-area ED (MAED) problem is more complex due to the additional tie-line, ramp-rate and area-wise power balance constraints. Nature inspired (NI) heuristic optimization methods are gaining popularity over traditional methods for complex problems. This work presents modified particle swarm optimization (PSO) based techniques in which parameter automation is effectively used to improve search efficiency by avoiding stagnation at a sub-optimal result. This work validates the performance of the PSO variants against the traditional solver GAMS for single-area as well as multi-area economic dispatch on three test cases of a large 140-unit standard test system having complex constraints.
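A bare-bones PSO with linearly decreasing inertia (one common form of "parameter automation") applied to a toy three-unit economic dispatch illustrates the approach; the quadratic cost coefficients, limits and penalty weight below are hypothetical, and real MAED adds tie-line and ramp-rate constraints.

```python
import numpy as np

rng = np.random.default_rng(4)

# Quadratic fuel-cost coefficients for 3 hypothetical units: a + b*P + c*P^2
a = np.array([100.0, 120.0, 80.0])
b = np.array([2.0, 1.5, 1.8])
c = np.array([0.01, 0.02, 0.015])
P_min, P_max, demand = 10.0, 100.0, 150.0

def cost(P):
    # Fuel cost plus a heavy quadratic penalty on power-balance violation
    return (a + b * P + c * P**2).sum(axis=-1) + 1e4 * (P.sum(axis=-1) - demand)**2

n_particles, n_iter = 30, 200
X = rng.uniform(P_min, P_max, size=(n_particles, 3))   # candidate dispatches
V = np.zeros_like(X)
pbest, pbest_f = X.copy(), cost(X)
gbest = pbest[pbest_f.argmin()].copy()

for it in range(n_iter):
    w = 0.9 - 0.5 * it / n_iter        # linearly decreasing inertia weight
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V = w * V + 2.0 * r1 * (pbest - X) + 2.0 * r2 * (gbest - X)
    X = np.clip(X + V, P_min, P_max)   # respect generation limits
    f = cost(X)
    better = f < pbest_f
    pbest[better], pbest_f[better] = X[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()
```

Shrinking the inertia weight shifts the swarm from exploration to exploitation over the run, which is the stagnation-avoidance idea the abstract refers to.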
Gradient-Based Aerodynamic Shape Optimization Using ADI Method for Large-Scale Problems
NASA Technical Reports Server (NTRS)
Pandya, Mohagna J.; Baysal, Oktay
1997-01-01
A gradient-based shape optimization methodology, intended for practical three-dimensional aerodynamic applications, has been developed. It is based on quasi-analytical sensitivities. The flow analysis is rendered by a fully implicit, finite volume formulation of the Euler equations. The aerodynamic sensitivity equation is solved using the alternating-direction-implicit (ADI) algorithm for memory efficiency. A flexible wing geometry model, based on surface parameterization and planform schedules, is utilized. The present methodology and its components have been tested via several comparisons. Initially, the flow analysis for a wing is compared with those obtained using an unfactored, preconditioned conjugate gradient approach (PCG), and an extensively validated CFD code. Then, the sensitivities computed with the present method have been compared with those obtained using the finite-difference and the PCG approaches. Effects of grid refinement and convergence tolerance on the analysis and shape optimization have been explored. Finally, the new procedure has been demonstrated in the design of a cranked arrow wing at Mach 2.4. Despite the expected increase in computational time, the results indicate that shape optimization problems, which require large numbers of grid points, can be resolved with a gradient-based approach.
Chang, Justin; Karra, Satish; Nakshatrala, Kalyana B.
2016-07-26
It is well-known that the standard Galerkin formulation, which is often the formulation of choice under the finite element method for solving self-adjoint diffusion equations, does not meet maximum principles and the non-negative constraint for anisotropic diffusion equations. Recently, optimization-based methodologies that satisfy maximum principles and the non-negative constraint for steady-state and transient diffusion-type equations have been proposed. To date, these methodologies have been tested only on small-scale academic problems. The purpose of this paper is to systematically study the performance of the non-negative methodology in the context of high performance computing (HPC). PETSc and TAO libraries are used, respectively, for the parallel environment and optimization solvers. For large-scale problems, it is important for computational scientists to understand the computational performance of current algorithms available in these scientific libraries. The numerical experiments are conducted on state-of-the-art HPC systems, and a single-core performance model is used to better characterize the efficiency of the solvers. Furthermore, our studies indicate that the proposed non-negative computational framework for diffusion-type equations exhibits excellent strong scaling for real-world large-scale problems.
NASA Astrophysics Data System (ADS)
Chaves-González, José M.; Vega-Rodríguez, Miguel A.; Gómez-Pulido, Juan A.; Sánchez-Pérez, Juan M.
2011-08-01
This article analyses the use of a novel parallel evolutionary strategy to solve complex optimization problems. The work developed here has been focused on a relevant real-world problem from the telecommunication domain to verify the effectiveness of the approach. The problem, known as frequency assignment problem (FAP), basically consists of assigning a very small number of frequencies to a very large set of transceivers used in a cellular phone network. Real data FAP instances are very difficult to solve due to the NP-hard nature of the problem, therefore using an efficient parallel approach which makes the most of different evolutionary strategies can be considered as a good way to obtain high-quality solutions in short periods of time. Specifically, a parallel hyper-heuristic based on several meta-heuristics has been developed. After a complete experimental evaluation, results prove that the proposed approach obtains very high-quality solutions for the FAP and beats any other result published.
Optimization of culture media for large-scale lutein production by heterotrophic Chlorella vulgaris.
Jeon, Jin Young; Kwon, Ji-Sue; Kang, Soon Tae; Kim, Bo-Ra; Jung, Yuchul; Han, Jae Gap; Park, Joon Hyun; Hwang, Jae Kwan
2014-01-01
Lutein is a carotenoid with a purported role in protecting eyes from oxidative stress, particularly the high-energy photons of blue light. Statistical optimization of the growth medium was performed to support higher production of lutein by heterotrophically cultivated Chlorella vulgaris. The effect of medium composition on lutein production by C. vulgaris was examined using fractional factorial design (FFD) and central composite design (CCD). The results indicated that the presence of magnesium sulfate, EDTA-2Na, and trace metal solution significantly affected lutein production. The optimum concentrations for lutein production were found to be 0.34 g/L, 0.06 g/L, and 0.4 mL/L for MgSO4·7H2O, EDTA-2Na, and trace metal solution, respectively. These values were validated using a 5-L jar fermenter. Lutein concentration was increased by almost 80% (139.64 ± 12.88 mg/L to 252.75 ± 12.92 mg/L) after 4 days. Moreover, the lutein concentration was not reduced as the cultivation was scaled up to 25,000 L (260.55 ± 3.23 mg/L) and 240,000 L (263.13 ± 2.72 mg/L). These observations suggest C. vulgaris as a potential lutein source.
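The CCD workflow above can be sketched in miniature: fit a second-order response surface to a two-factor central composite design and solve for the stationary point. The design is the standard two-factor CCD; the "yield" surface and responses below are synthetic, not the paper's data.

```python
import numpy as np

# Coded factor levels for a two-factor central composite design (CCD):
# four factorial points, four axial points (alpha = sqrt(2)), one center point.
alpha = np.sqrt(2)
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-alpha, 0], [alpha, 0], [0, -alpha], [0, alpha], [0, 0]])

# Hypothetical "lutein yield" from a concave quadratic surface plus noise
def true_surface(x1, x2):
    return 250 - 30 * (x1 - 0.3)**2 - 20 * (x2 + 0.2)**2

rng = np.random.default_rng(5)
y = true_surface(X[:, 0], X[:, 1]) + rng.normal(0, 0.5, len(X))

# Fit the full second-order model y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
D = np.c_[np.ones(len(X)), X[:, 0], X[:, 1], X[:, 0]**2, X[:, 1]**2, X[:, 0] * X[:, 1]]
beta, *_ = np.linalg.lstsq(D, y, rcond=None)

# Stationary point: solve grad = 0, i.e. H x = -g with H the Hessian of the fit
H = np.array([[2 * beta[3], beta[5]], [beta[5], 2 * beta[4]]])
x_opt = np.linalg.solve(H, -beta[1:3])   # predicted optimum in coded units
```

Decoding `x_opt` back to physical units (g/L of each medium component) and validating at that setting is the step the study performs in the 5-L fermenter.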
Large scale optimization of beam weights under dose-volume restrictions.
Langer, M; Brown, R; Urie, M; Leong, J; Stracher, M; Shapiro, J
1990-04-01
The problem of choosing weights for beams in a multifield plan which maximizes tumor dose under conditions that recognize the volume dependence of organ tolerance to radiation is considered, and its solution described. Structures are modelled as collections of discrete points, and the weighting problem described as a combinatorial linear program (LP). The combinatorial LP is solved as a mixed 0/1 integer program with appropriate restrictions on normal tissue dose. The method is illustrated through the assignment of weights to a set of 10 beams incident on a pelvic target. Dose-volume restrictions are placed on surrounding bowel, bladder, and rectum, and a limit placed on tumor dose inhomogeneity. Different tolerance restrictions are examined, so that the sensitivity of the target dose to changes in the normal tissue constraints may be explored. It is shown that the distributions obtained satisfy the posed constraints. The technique permits formal solution of the optimization problem, in a time short enough to meet the needs of treatment planners. PMID:2323977
Weighted modularity optimization for crisp and fuzzy community detection in large-scale networks
NASA Astrophysics Data System (ADS)
Cao, Jie; Bu, Zhan; Gao, Guangliang; Tao, Haicheng
2016-11-01
Community detection is a classic and very difficult task in the field of complex network analysis, principally for its applications in domains such as social or biological network analysis. One of the most widely used techniques for community detection in networks is the maximization of the quality function known as modularity. However, existing work has proved that modularity maximization algorithms for community detection may fail to resolve communities of small size. Here we present a new community detection method, which is able to find crisp and fuzzy communities in undirected and unweighted networks by maximizing weighted modularity. The algorithm derives new edge weights using cosine similarity in order to circumvent the resolution limit problem. Then a new local moving heuristic based on weighted modularity optimization is proposed to cluster the updated network. Finally, the set of potentially attractive clusters for each node is computed, to further uncover the crisp and fuzzy partition of the network. We give demonstrative applications of the algorithm to a set of synthetic benchmark networks and six real-world networks and find that it outperforms the current state-of-the-art proposals (even those aimed at finding overlapping communities) in terms of quality and scalability.
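Two of the method's ingredients, cosine-similarity edge reweighting and weighted modularity, can be sketched on a toy two-clique network (this sketch does not reproduce the paper's local-moving heuristic, and the self-loop convention in the similarity step is an assumption):

```python
import numpy as np

def cosine_edge_weights(A):
    """Reweight each edge by the cosine similarity of its endpoints' neighborhoods."""
    # Include self-loops so adjacent nodes with no common neighbor keep weight > 0.
    M = A + np.eye(len(A))
    norms = np.linalg.norm(M, axis=1)
    sim = (M @ M.T) / np.outer(norms, norms)
    return A * sim          # keep only existing edges, now weighted

def modularity(W, labels):
    """Newman's weighted modularity Q for a hard (crisp) partition."""
    m2 = W.sum()                          # equals 2m for an undirected matrix
    k = W.sum(axis=1)                     # weighted degrees
    same = labels[:, None] == labels[None, :]
    return ((W - np.outer(k, k) / m2) * same).sum() / m2

# Two clear 4-node cliques joined by a single bridge edge
A = np.zeros((8, 8))
for block in (range(4), range(4, 8)):
    for i in block:
        for j in block:
            if i != j:
                A[i, j] = 1.0
A[3, 4] = A[4, 3] = 1.0

W = cosine_edge_weights(A)                # bridge edge gets downweighted
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
```

Because the endpoints of the bridge share few neighbors, its weight drops relative to intra-clique edges, which is the mechanism that helps weighted modularity resolve small communities.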
NASA Astrophysics Data System (ADS)
Manfredi, Sabato
2016-06-01
Large-scale dynamic systems are becoming highly pervasive, with applications ranging from systems biology, environment monitoring and sensor networks to power systems. They are characterised by high dimensionality, complexity, and uncertainty in the node dynamics/interactions, and they require increasingly computationally demanding methods for analysis and control design as the network size and node system/interaction complexity grow. It is therefore a challenging problem to find a scalable computational method for distributed control design of large-scale networks. In this paper, we investigate the robust distributed stabilisation problem of large-scale nonlinear multi-agent systems (briefly, MASs) composed of non-identical (heterogeneous) linear dynamical systems coupled by uncertain nonlinear time-varying interconnections. By employing Lyapunov stability theory and the linear matrix inequality (LMI) technique, new conditions are given for the distributed control design of large-scale MASs that can be easily solved by the MATLAB toolbox. The stabilisability of each node dynamic is a sufficient assumption to design a globally stabilising distributed control. The proposed approach improves some of the existing LMI-based results on MASs by both overcoming their computational limits and extending the applicative scenario to large-scale nonlinear heterogeneous MASs. Additionally, the proposed LMI conditions are further reduced in terms of computational requirement in the case of weakly heterogeneous MASs, which is a common scenario in real applications where the network nodes and links are affected by parameter uncertainties. One of the main advantages of the proposed approach is that it allows a move from a centralised towards a distributed computing architecture, so that the expensive computation workload spent to solve LMIs may be shared among processors located at the networked nodes, thus increasing the scalability of the approach than the network
Hydro-economic Modeling: Reducing the Gap between Large Scale Simulation and Optimization Models
NASA Astrophysics Data System (ADS)
Forni, L.; Medellin-Azuara, J.; Purkey, D.; Joyce, B. A.; Sieber, J.; Howitt, R.
2012-12-01
The integration of hydrological and socio-economic components into hydro-economic models has become essential for water resources policy and planning analysis. In this study we integrate the economic value of water in irrigated agricultural production using SWAP (a Statewide Agricultural Production model for California) and WEAP (Water Evaluation and Planning system), a climate-driven hydrological model. The integration of the models is performed using a step-function approximation of the water demand curves from SWAP, and by relating the demand tranches to the priority scheme in WEAP. To do so, a modified version of SWAP, called SWEAP, was developed that has the planning-area delimitations of WEAP, a maximum-entropy model to estimate evenly sized steps (tranches) of the derived water demand functions, and the translation of water tranches into cropland. In addition, a modified version of WEAP called ECONWEAP was created, with minor structural changes for the incorporation of land decisions from SWEAP, and a series of iterations run via an external VBA script. This paper shows the validity of this integration by comparing revenues from WEAP versus ECONWEAP, as well as an assessment of the tranche approximation. Results show a significant increase in the resulting agricultural revenues for our case study in California's Central Valley using ECONWEAP while maintaining the same hydrology and regional water flows. These results highlight the gains from allocating water based on its economic value compared to priority-based water allocation systems. Furthermore, this work shows the potential of integrating optimization and simulation-based hydrologic models like ECONWEAP.
NASA Astrophysics Data System (ADS)
Gu, Zhiping
This paper extends the Riccati transfer matrix method to the transient and stability analysis of large-scale rotor-bearing systems with strong nonlinear elements, and proposes a mode summation-transfer matrix method in which the field transfer matrix of a distributed-mass uniform shaft segment is obtained with the aid of mode summation and the Newmark-beta formulation, and the Riccati transfer matrix method is adopted to stabilize the boundary value problem of the nonlinear system. In this investigation, the real nonlinearity of the strong nonlinear elements is considered, not linearized, and the advantages of the Riccati transfer matrix are retained. This method is therefore especially applicable to analyzing the transient response and stability of large-scale rotor-bearing systems with strong nonlinear elements. One example, a single-spool rotating system with strong nonlinear elements, is given. The results show that this method is superior to that of Gu and Chen (1990) in accuracy, stability, and economy.
Kang, Chao; Wen, Ting-Chi; Kang, Ji-Chuan; Meng, Ze-Bing; Li, Guang-Rong; Hyde, Kevin D
2014-01-01
Cordycepin is one of the most important bioactive compounds produced by species of Cordyceps sensu lato, but it is hard to produce in large amounts in industrial production. In this work, single-factor design, Plackett-Burman design, and central composite design were employed to establish the key factors and identify the optimal culture conditions for improving cordycepin production. Using these culture conditions, a maximum cordycepin production of 2008.48 mg/L was achieved for a 700 mL working volume in 1000 mL glass jars, and the total content of cordycepin reached 1405.94 mg/bottle. This method provides an effective way of increasing cordycepin production at large scale. The strategies used in this study could find wide application in other fermentation processes. PMID:25054182
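The response-surface step that follows a central composite design can be sketched as fitting a quadratic model to measured yields and locating its stationary point. The data below are synthetic (a hypothetical single factor with an assumed true optimum), not the study's measurements.

```python
import numpy as np

# Toy response-surface step (synthetic data, not the paper's): fit a quadratic
# model y = b0 + b1*x + b2*x^2 to noisy "yield" measurements over one
# hypothetical factor and locate the stationary point, as one would after a
# central composite design.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 11)                   # design points
true_yield = lambda x: 2000.0 - 30.0 * (x - 6.0) ** 2
y = true_yield(x) + rng.normal(0.0, 5.0, x.size)  # noisy responses

b2, b1, b0 = np.polyfit(x, y, 2)                 # quadratic response surface
x_opt = -b1 / (2.0 * b2)                         # stationary point (max, b2 < 0)
```

In practice the same fit is done over several factors at once, and the stationary point is checked to be a maximum before being adopted as the optimal condition.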
NASA Astrophysics Data System (ADS)
Pedinotti, V.; Boone, A.; Ricci, S.; Biancamaria, S.; Mognard, N.
2014-11-01
During the last few decades, satellite measurements have been widely used to study the continental water cycle, especially in regions where in situ measurements are not readily available. The future Surface Water and Ocean Topography (SWOT) satellite mission will deliver maps of water surface elevation (WSE) with an unprecedented resolution and provide observations of rivers wider than 100 m and water surface areas greater than approximately 250 m × 250 m over continental surfaces between 78° S and 78° N. This study aims to investigate the potential of SWOT data for parameter optimization in large-scale river routing models. The method consists of applying a data assimilation approach, the extended Kalman filter (EKF) algorithm, to correct the Manning roughness coefficients of the ISBA (Interactions between Soil, Biosphere, and Atmosphere)-TRIP (Total Runoff Integrating Pathways) continental hydrologic system. Parameters such as the Manning coefficient, used within such models to describe water basin characteristics, are generally derived from geomorphological relationships, which leads to significant errors at reach and large scales. The current study focuses on the Niger Basin, a transboundary river. Since SWOT observations are not yet available, and to assess the proposed assimilation method, the study is carried out within the framework of an observing system simulation experiment (OSSE). It is assumed that modeling errors are due only to uncertainties in the Manning coefficient. The true Manning coefficients are then assumed to be known and are used to generate synthetic SWOT observations over the period 2002-2003. The impact of the assimilation system on the Niger Basin hydrological cycle is then quantified. The optimization of the Manning coefficient with the EKF algorithm over an 18-month period led to a significant improvement in river water levels. The relative bias of the water level is globally improved (a 30
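The parameter-estimation idea can be illustrated with a minimal scalar EKF. This is a sketch under strong simplifying assumptions (a wide rectangular channel with Manning's equation as the observation operator, fixed discharge, synthetic observations), not ISBA-TRIP code: the filter repeatedly linearises the depth-vs-roughness relation and corrects the Manning coefficient from observed water depths.

```python
import numpy as np

# Minimal scalar EKF sketch (illustrative, not ISBA-TRIP): estimate a Manning
# coefficient n from synthetic water-depth observations.

def h_model(n, q=100.0, width=50.0, slope=1e-4):
    """Water depth from Manning's equation for a wide rectangular channel:
    h = (n q / (w sqrt(s)))^(3/5)."""
    return (n * q / (width * np.sqrt(slope))) ** 0.6

n_true = 0.035                       # "truth" used to generate observations
n_est, p_est = 0.060, 1e-3           # first guess and its error variance
obs_var = 1e-4                       # assumed observation-error variance

rng = np.random.default_rng(1)
for _ in range(20):                  # assimilation cycles
    h_obs = h_model(n_true) + rng.normal(0.0, 0.005)
    # linearize the observation operator around the current estimate
    dn = 1e-6
    H = (h_model(n_est + dn) - h_model(n_est)) / dn
    K = p_est * H / (H * p_est * H + obs_var)       # Kalman gain
    n_est = n_est + K * (h_obs - h_model(n_est))    # state update
    p_est = (1.0 - K * H) * p_est                   # variance update
```

The real system estimates spatially distributed coefficients from WSE maps, but the linearise-gain-update cycle per assimilation window has the same structure.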
NASA Astrophysics Data System (ADS)
Weitnauer, C.; Beck, C.; Jacobeit, J.
2013-12-01
In recent decades the critical increase in emissions of air pollutants such as nitrogen dioxide, sulfur oxides, and particulate matter, especially in urban areas, has become a problem for the environment as well as for human health. Several studies confirm that episodes of high concentrations of particulate matter with an aerodynamic diameter < 10 μm (PM10) pose a risk to the respiratory tract and of cardiovascular diseases. Furthermore, it is known that local meteorological and large-scale atmospheric conditions are important factors influencing local PM10 concentrations. With the climate changing rapidly, these connections need to be better understood in order to provide estimates of climate-change-related consequences for air quality management purposes. To quantify the link between large-scale atmospheric conditions and local PM10 concentrations, circulation- and weather-type classifications are used in a number of studies employing different statistical approaches. Thus far, only a few systematic attempts have been made to modify existing, or to develop new, weather- and circulation-type classifications in order to improve their ability to resolve local PM10 concentrations. In this contribution, existing weather- and circulation-type classifications, performed on daily 2.5° x 2.5° gridded parameters of the NCEP/NCAR reanalysis data set, are optimized with regard to their discriminative power for local PM10 concentrations at 49 Bavarian measurement sites for the period 1980 to 2011. Most of the PM10 stations are situated in urban areas, covering urban background, traffic-related, and industry-related pollution regimes. The range of regimes is extended by a few rural background stations. To characterize the correspondence between the PM10 measurements of the different stations by spatial patterns, a regionalization by an s-mode principal component analysis is carried out on the high-pass-filtered data. The optimization of the circulation- and weather types is implemented using two representative
NASA Astrophysics Data System (ADS)
Tyralis, Hristos; Karakatsanis, Georgios; Tzouka, Katerina; Mamassis, Nikos
2015-04-01
The Greek electricity system is examined for the period 2002-2014. The demand load data are analysed at various time scales (hourly, daily, seasonal, and annual) and related to the mean daily temperature and the gross domestic product (GDP) of Greece for the same period. The prediction of energy demand, a product of the Greek Independent Power Transmission Operator, is also compared with the demand load. Interesting results are derived about the change in the electricity demand pattern after the year 2010; this change is related to the decrease in GDP during the period 2010-2014. The results of the analysis will be used in the development of an energy forecasting system which will be part of a framework for the optimal planning of a large-scale hybrid renewable energy system in which hydropower plays the dominant role. Acknowledgement: This research was funded by the Greek General Secretariat for Research and Technology through the research project Combined REnewable Systems for Sustainable ENergy DevelOpment (CRESSENDO; grant number 5145)
NASA Astrophysics Data System (ADS)
Corbin, Charles D.
Demand management is an important component of the emerging Smart Grid, and a potential solution to the supply-demand imbalance occurring increasingly as intermittent renewable electricity is added to the generation mix. Model predictive control (MPC) has shown great promise for controlling HVAC demand in commercial buildings, making it an ideal solution to this problem. MPC is believed to hold similar promise for residential applications, yet very few examples exist in the literature despite a growing interest in residential demand management. This work explores the potential for residential buildings to shape electric demand at the distribution feeder level in order to reduce peak demand, reduce system ramping, and increase load factor using detailed sub-hourly simulations of thousands of buildings coupled to distribution power flow software. More generally, this work develops a methodology for the directed optimization of residential HVAC operation using a distributed but directed MPC scheme that can be applied to today's programmable thermostat technologies to address the increasing variability in electric supply and demand. Case studies incorporating varying levels of renewable energy generation demonstrate the approach and highlight important considerations for large-scale residential model predictive control.
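The receding-horizon idea behind such HVAC control can be sketched with a toy problem. Everything here is an illustrative assumption (a first-order thermal model, arbitrary parameters, and a quadratic penalty on control effort standing in for peak shaping), not the dissertation's controller: the optimizer picks a heating schedule that tracks a setpoint while discouraging demand spikes.

```python
import numpy as np
from scipy.optimize import minimize

# Toy MPC-style heating sketch; model and parameters are illustrative
# assumptions, not from the dissertation.
dt, tau, eta = 1.0, 10.0, 2.0        # hours, thermal time constant, heater gain
t_out, t_set, horizon = 0.0, 20.0, 12

def simulate(u, t0):
    """First-order thermal model: indoor temperature trajectory for inputs u."""
    temps = [t0]
    for uk in u:
        t = temps[-1]
        temps.append(t + dt * (-(t - t_out) / tau + eta * uk))
    return np.array(temps[1:])

def cost(u, t0, peak_weight=0.5):
    temps = simulate(u, t0)
    # comfort tracking plus a quadratic penalty that discourages demand spikes
    return np.sum((temps - t_set) ** 2) + peak_weight * np.sum(u ** 2)

u0 = np.full(horizon, 5.0)
res = minimize(cost, u0, args=(15.0,), bounds=[(0.0, 15.0)] * horizon)
u_opt = res.x                         # optimized heating schedule
final_temp = simulate(u_opt, 15.0)[-1]
```

In a full MPC loop, only the first element of `u_opt` would be applied before re-solving at the next step with updated measurements; the feeder-level scheme coordinates many such building-level problems.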
Dhaher, Y Y
2001-02-01
The purpose of this study is to present a general mathematical framework to compute a set of feedback matrices which stabilize an unstable nonlinear anthropomorphic musculoskeletal dynamic model. This method is activity specific and involves four fundamental stages. First, from muscle activation data (input) and motion degrees-of-freedom (output) a dynamic experimental model is obtained using system identification schemes. Second, a nonlinear musculoskeletal dynamic model which contains the same number of muscles and degrees-of-freedom and best represents the activity being considered is proposed. Third, the nonlinear musculoskeletal model (anthropomorphic model) is replaced by a family of linear systems, parameterized by the same set of input/output data (nominal points) used in the identification of the experimental model. Finally, a set of stabilizing output feedback matrices, parameterized again by the same set of nominal points, is computed such that when combined with the anthropomorphic model, the combined system resembles the structural form of the experimental model. The method is illustrated in regard to the human squat activity. PMID:11264866
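The "stabilize one member of the linearized family" step can be sketched with pole placement. The two-state inverted-pendulum-like model and the desired pole locations below are illustrative assumptions, not the paper's musculoskeletal dynamics.

```python
import numpy as np
from scipy.signal import place_poles

# Illustrative linearized model (not the paper's): an inverted-pendulum-like
# system, unstable in open loop, stabilized by full-state feedback.
A = np.array([[0.0, 1.0],
              [9.81, 0.0]])          # open-loop poles at +/- sqrt(9.81)
B = np.array([[0.0],
              [1.0]])

placed = place_poles(A, B, [-2.0, -3.0])   # choose stable closed-loop poles
K = placed.gain_matrix                     # feedback matrix, u = -K x
cl_poles = np.linalg.eigvals(A - B @ K)
```

In the paper's framework, one such feedback matrix is computed per nominal operating point, parameterized by the same input/output data used for identification, and the closed loop is matched against the experimentally identified model.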
NASA Astrophysics Data System (ADS)
Martin, Elly; Treeby, Bradley E.
2015-10-01
To increase the effectiveness of high intensity focused ultrasound (HIFU) treatments, prediction of ultrasound propagation in biological tissues is essential, particularly where bones are present in the field. This requires complex full-wave computational models which account for nonlinearity, absorption, and heterogeneity. These models must be properly validated but there is a lack of analytical solutions which apply in these conditions. Experimental validation of the models is therefore essential. However, accurate measurement of HIFU fields is not trivial. Our aim is to establish rigorous methods for obtaining reference data sets with which to validate tissue realistic simulations of ultrasound propagation. Here, we present preliminary measurements which form an initial validation of simulations performed using the k-Wave MATLAB toolbox. Acoustic pressure was measured on a plane in the field of a focused ultrasound transducer in free field conditions to be used as a Dirichlet boundary condition for simulations. Rectangular and wedge shaped olive oil scatterers were placed in the field and further pressure measurements were made in the far field for comparison with simulations. Good qualitative agreement was observed between the measured and simulated nonlinear pressure fields.
NASA Astrophysics Data System (ADS)
Fan, Shu-Kai S.; Chang, Ju-Ming
2010-05-01
This article presents a novel parallel multi-swarm optimization (PMSO) algorithm with the aim of enhancing the search ability of standard single-swarm PSOs for global optimization of very large-scale multimodal functions. Different from the existing multi-swarm structures, the multiple swarms work in parallel, and the search space is partitioned evenly and dynamically assigned in a weighted manner via the roulette wheel selection (RWS) mechanism. This parallel, distributed framework of the PMSO algorithm is developed based on a master-slave paradigm, which is implemented on a cluster of PCs using message passing interface (MPI) for information interchange among swarms. The PMSO algorithm handles multiple swarms simultaneously and each swarm performs PSO operations of its own independently. In particular, one swarm is designated for global search and the others are for local search. The first part of the experimental comparison is made among the PMSO, standard PSO, and two state-of-the-art algorithms (CTSS and CLPSO) in terms of various un-rotated and rotated benchmark functions taken from the literature. In the second part, the proposed multi-swarm algorithm is tested on large-scale multimodal benchmark functions up to 300 dimensions. The results of the PMSO algorithm show great promise in solving high-dimensional problems.
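The multi-swarm partitioning idea can be sketched serially (no MPI) with two cooperating swarms: one over the full search box, one confined to a sub-region. The function, parameters, and even partitioning below are simplified assumptions meant only to echo the PMSO structure, not to reproduce it.

```python
import numpy as np

# Minimal two-swarm PSO sketch (serial, illustrative; not the PMSO code).
rng = np.random.default_rng(42)

def sphere(x):                        # simple test objective, minimum at 0
    return np.sum(x ** 2, axis=-1)

def pso(bounds, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), sphere(x)
    gbest = pbest[np.argmin(pbest_f)]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)    # keep particles inside the partition
        f = sphere(x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)]
    return gbest, sphere(gbest)

# one "global" swarm on the full box, one "local" swarm on a sub-partition
best_global, f_global = pso((np.full(5, -5.0), np.full(5, 5.0)))
best_local, f_local = pso((np.full(5, -5.0), np.full(5, 0.0)))
best_f = min(f_global, f_local)
```

In PMSO the swarms run on separate processors, the partitions are assigned dynamically via roulette wheel selection, and results are exchanged over MPI; the per-swarm update loop, however, has this same shape.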
NASA Technical Reports Server (NTRS)
Nash, Stephen G.; Polyak, R.; Sofer, Ariela
1994-01-01
When a classical barrier method is applied to the solution of a nonlinear programming problem with inequality constraints, the Hessian matrix of the barrier function becomes increasingly ill-conditioned as the solution is approached. As a result, it may be desirable to consider alternative numerical algorithms. We compare the performance of two methods motivated by barrier functions. The first is a stabilized form of the classical barrier method, where a numerically stable approximation to the Newton direction is used when the barrier parameter is small. The second is a modified barrier method where a barrier function is applied to a shifted form of the problem, and the resulting barrier terms are scaled by estimates of the optimal Lagrange multipliers. The condition number of the Hessian matrix of the resulting modified barrier function remains bounded as the solution to the constrained optimization problem is approached. Both of these techniques can be used in the context of a truncated-Newton method, and hence can be applied to large problems, as well as on parallel computers. In this paper, both techniques are applied to problems with bound constraints and we compare their practical behavior.
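A minimal classical log-barrier iteration makes the ill-conditioning concrete. The one-dimensional problem below is an illustrative assumption (it is not one of the paper's test problems): minimize (x + 1)^2 subject to x >= 0, whose solution x* = 0 lies on the boundary, so the barrier Hessian blows up as the barrier parameter shrinks.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Classical log-barrier sketch (illustrative; not the stabilized or modified
# barrier algorithms compared in the paper):
#   minimize (x + 1)^2  subject to  x >= 0   (solution: x* = 0)

def barrier_obj(x, mu):
    return (x + 1.0) ** 2 - mu * np.log(x)

path = []
for mu in [1.0, 0.1, 0.01, 1e-3, 1e-4]:          # shrink the barrier parameter
    res = minimize_scalar(barrier_obj, args=(mu,),
                          bounds=(1e-12, 10.0), method='bounded')
    path.append(res.x)                            # barrier minimizers -> 0

x, mu = path[-1], 1e-4
curvature = 2.0 + mu / x ** 2   # barrier Hessian: grows without bound as mu -> 0
```

The growing `curvature` is exactly the ill-conditioning the abstract describes; the modified barrier approach keeps the analogous quantity bounded near the solution.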
Multilevel algorithms for nonlinear optimization
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia; Dennis, J. E., Jr.
1994-01-01
Multidisciplinary design optimization (MDO) gives rise to nonlinear optimization problems characterized by a large number of constraints that naturally occur in blocks. We propose a class of multilevel optimization methods motivated by the structure and number of constraints and by the expense of the derivative computations for MDO. The algorithms are an extension to the nonlinear programming problem of the successful class of local Brown-Brent algorithms for nonlinear equations. Our extensions allow the user to partition constraints into arbitrary blocks to fit the application, and they separately process each block and the objective function, restricted to certain subspaces. The methods use trust regions as a globalization strategy, and they have been shown to be globally convergent under reasonable assumptions. The multilevel algorithms can be applied to all classes of MDO formulations. Multilevel algorithms for solving nonlinear systems of equations are a special case of the multilevel optimization methods. In this case, they can be viewed as a trust-region globalization of the Brown-Brent class.
NASA Astrophysics Data System (ADS)
Pedinotti, V.; Boone, A.; Ricci, S.; Biancamaria, S.; Mognard, N.
2014-04-01
During the last few decades, satellite measurements have been widely used to study the continental water cycle, especially in regions where in situ measurements are not readily available. The future Surface Water and Ocean Topography (SWOT) satellite mission will deliver maps of water surface elevation (WSE) with an unprecedented resolution and provide observations of rivers wider than 100 m and water surface areas greater than approximately 250 m × 250 m over continental surfaces between 78° S and 78° N. This study aims to investigate the potential of SWOT data for parameter optimization in large-scale river routing models, which are typically employed in land surface models (LSMs) for global-scale applications. The method consists of applying a data assimilation approach, the Extended Kalman Filter (EKF) algorithm, to correct the Manning roughness coefficients of the ISBA-TRIP Continental Hydrologic System. Indeed, parameters such as the Manning coefficient, used within such models to describe water basin characteristics, are generally derived from geomorphological relationships, which might have locally significant errors. The current study focuses on the Niger basin, a transboundary river which is the main source of fresh water for all the riparian countries. In addition, geopolitical issues in this region can restrict the exchange of hydrological data, so SWOT should help improve this situation by making hydrological data freely available. In a previous study, the model was first evaluated against in situ and satellite-derived data sets within the framework of the international African Monsoon Multidisciplinary Analysis (AMMA) project. Since the SWOT observations are not available yet, and also to assess the proposed assimilation method, the study is carried out under the framework of an Observing System Simulation Experiment (OSSE). It is assumed that modeling errors are only due to uncertainties in the Manning coefficient. The true Manning
Jiang, Chaowei; Wu, S. T.; Hu, Qiang; Feng, Xueshang E-mail: wus@uah.edu E-mail: fengx@spaceweather.ac.cn
2014-05-10
Solar filaments are commonly thought to be supported in magnetic dips, in particular, in those of magnetic flux ropes (FRs). In this Letter, based on the observed photospheric vector magnetogram, we implement a nonlinear force-free field (NLFFF) extrapolation of a coronal magnetic FR that supports a large-scale intermediate filament between an active region and a weak polarity region. This result is a first, in the sense that current NLFFF extrapolations including the presence of FRs are limited to relatively small-scale filaments that are close to sunspots and along main polarity inversion lines (PILs) with strong transverse field and magnetic shear, and the existence of an FR is usually predictable. In contrast, the present filament lies along the weak-field region (photospheric field strength ≲ 100 G), where the PIL is very fragmented due to small parasitic polarities on both sides of the PIL and the transverse field has a low signal-to-noise ratio. Thus, extrapolating a large-scale FR in such a case represents a far more difficult challenge. We demonstrate that our CESE-MHD-NLFFF code is sufficient for the challenge. The numerically reproduced magnetic dips of the extrapolated FR match observations of the filament and its barbs very well, which strongly supports the FR-dip model for filaments. The filament is stably sustained because the FR is weakly twisted and strongly confined by the overlying closed arcades.
Liu, Peng; Wang, Huihui; Wu, Rengmao; Yang, Yang; Zhang, Yaqin; Zheng, Zhenrong; Li, Haifeng; Liu, Xu
2013-06-10
In many applications, the light emitted from light-emitting diodes (LEDs) of different colors needs to be mixed together on a large-scale plane, and this illumination mode is usually generated with a diffuser. Departing from the traditional methods, we propose an LED color-mixing method that can produce illumination with both high color uniformity and high irradiance uniformity. The method has two main aspects: arrangement of the irradiance array and design of the LED lens. With this method, an independent rectangular irradiance distribution is generated by each lens unit, and large-scale color-uniform illumination is obtained by arraying the irradiance distributions. A 3×3 array of LED module units consisting of 36 LED lens units with four different colors is designed, and a desired result with high color uniformity is obtained. PMID:23759848
NASA Astrophysics Data System (ADS)
Uritsky, V. M.; Davila, J. M.; Jones, S. I.
2014-12-01
Solar Probe Plus and Solar Orbiter will provide detailed measurements in the inner heliosphere magnetically connected with the topologically complex and eruptive solar corona. Interpretation of these measurements will require accurate reconstruction of the large-scale coronal magnetic field. In a related presentation by S. Jones et al., we argue that such reconstruction can be performed using photospheric extrapolation methods constrained by white-light coronagraph images. Here, we present the image-processing component of this project dealing with an automated segmentation of fan-like coronal loop structures. In contrast to the existing segmentation codes designed for detecting small-scale closed loops in the vicinity of active regions, we focus on the large-scale geometry of the open-field coronal features observed at significant radial distances from the solar surface. The coronagraph images used for the loop segmentation are transformed into a polar coordinate system and undergo radial detrending and initial noise reduction. The preprocessed images are subject to an adaptive second order differentiation combining radial and azimuthal directions. An adjustable thresholding technique is applied to identify candidate coronagraph features associated with the large-scale coronal field. A blob detection algorithm is used to extract valid features and discard noisy data pixels. The obtained features are interpolated using higher-order polynomials which are used to derive empirical directional constraints for magnetic field extrapolation procedures based on photospheric magnetograms.
Newton Methods for Large Scale Problems in Machine Learning
ERIC Educational Resources Information Center
Hansen, Samantha Leigh
2014-01-01
The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss…
Terkawi, Mohamed Alaa; Youssef, Mohamed Ahmed; El Said, El Said El Shirbini; Elsayed, Gehad; El-Khodery, Sabry; El-Ashker, Maged; Elsify, Ahmed; Omar, Mosaab; Salama, Akram; Yokoyama, Naoaki; Igarashi, Ikuo
2015-01-01
A rapid and accurate assay for evaluating antibabesial drugs on a large scale is required for the discovery of novel chemotherapeutic agents against Babesia parasites. In the current study, we evaluated the usefulness of a fluorescence-based assay for determining the efficacies of antibabesial compounds against bovine and equine hemoparasites in in vitro cultures. Three different hematocrits (HCTs; 2.5%, 5%, and 10%) were used without daily replacement of the medium. The results of a high-throughput screening assay revealed that the best HCT was 2.5% for bovine Babesia parasites and 5% for equine Babesia and Theileria parasites. The IC50 values of diminazene aceturate obtained by fluorescence and microscopy did not differ significantly. Likewise, the IC50 values of luteolin, pyronaridine tetraphosphate, nimbolide, gedunin, and enoxacin did not differ between the two methods. In conclusion, our fluorescence-based assay uses low HCT and does not require daily replacement of culture medium, making it highly suitable for in vitro large-scale drug screening against Babesia and Theileria parasites that infect cattle and horses. PMID:25915529
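The IC50 estimation underlying such drug comparisons can be sketched as a four-parameter logistic (4PL) dose-response fit. The doses, responses, and parameter values below are synthetic assumptions for illustration, not the assay's data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of IC50 estimation from dose-response data (synthetic, illustrative).

def four_pl(dose, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response model."""
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** hill)

doses = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])  # hypothetical µM
rng = np.random.default_rng(7)
# synthetic relative-growth data generated from assumed "true" parameters
growth = four_pl(doses, 0.05, 1.0, 0.5, 1.2) + rng.normal(0.0, 0.01, doses.size)

popt, _ = curve_fit(four_pl, doses, growth, p0=[0.0, 1.0, 1.0, 1.0],
                    bounds=([-0.5, 0.5, 1e-3, 0.1], [0.5, 1.5, 100.0, 10.0]))
ic50_est = popt[2]    # fitted half-maximal inhibitory concentration
```

Whether growth is read by fluorescence or microscopy only changes the response column; the curve fit, and hence the IC50 comparison between the two readouts, is identical.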
Ramamurthy, Byravamurthy
2014-05-05
In this project we developed scheduling frameworks and algorithms for the dynamic bandwidth demands of large-scale science applications. Apart from theoretical approaches such as integer linear programming, tabu search, and genetic algorithm heuristics, we utilized practical data from the ESnet OSCARS project (from our DOE lab partners) to conduct realistic simulations of our approaches. We disseminated our work through conference presentations, journal papers, and a book chapter. In particular, we addressed the problem of scheduling lightpaths over optical wavelength division multiplexed (WDM) networks, publishing several conference and journal papers on this topic. We also addressed the problem of joint allocation of computing, storage, and networking resources in Grid/Cloud networks and proposed energy-efficient mechanisms for operating optical WDM networks.
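The flavor of lightpath scheduling can be sketched on a single link, where assigning wavelengths to time-windowed requests reduces to interval partitioning. This greedy sketch is an illustrative toy, not the project's ILP or heuristic algorithms: it reuses a wavelength as soon as its previous request has ended.

```python
import heapq

# Toy single-link lightpath scheduling sketch (illustrative, not OSCARS code).

def assign_wavelengths(requests):
    """requests: list of (start, end) times. Greedily assign a wavelength to
    each request, reusing freed wavelengths; returns (count, assignment).
    Greedy interval partitioning is optimal on a single link."""
    order = sorted(range(len(requests)), key=lambda i: requests[i][0])
    in_use = []                       # heap of (end_time, wavelength)
    next_wl, assignment = 0, [None] * len(requests)
    for i in order:
        start, end = requests[i]
        if in_use and in_use[0][0] <= start:
            _, wl = heapq.heappop(in_use)     # reuse a freed wavelength
        else:
            wl, next_wl = next_wl, next_wl + 1  # open a new wavelength
        assignment[i] = wl
        heapq.heappush(in_use, (end, wl))
    return next_wl, assignment

requests = [(0, 4), (1, 3), (2, 6), (5, 8), (4, 7)]
n_wavelengths, assignment = assign_wavelengths(requests)
```

Across a whole WDM network the problem couples routing with wavelength continuity and becomes NP-hard, which is why the project turned to ILP formulations and tabu/genetic heuristics.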
Harder, Nathalie; Batra, Richa; Diessl, Nicolle; Gogolin, Sina; Eils, Roland; Westermann, Frank; König, Rainer; Rohr, Karl
2015-06-01
Computational approaches for automatic analysis of image-based high-throughput and high-content screens are gaining increased importance to cope with the large amounts of data generated by automated microscopy systems. Typically, automatic image analysis is used to extract phenotypic information once all images of a screen have been acquired. However, also in earlier stages of large-scale experiments image analysis is important, in particular, to support and accelerate the tedious and time-consuming optimization of the experimental conditions and technical settings. We here present a novel approach for automatic, large-scale analysis and experimental optimization with application to a screen on neuroblastoma cell lines. Our approach consists of cell segmentation, tracking, feature extraction, classification, and model-based error correction. The approach can be used for experimental optimization by extracting quantitative information which allows experimentalists to optimally choose and to verify the experimental parameters. This involves systematically studying the global cell movement and proliferation behavior. Moreover, we performed a comprehensive phenotypic analysis of a large-scale neuroblastoma screen including the detection of rare division events such as multi-polar divisions. Major challenges of the analyzed high-throughput data are the relatively low spatio-temporal resolution in conjunction with densely growing cells as well as the high variability of the data. To account for the data variability we optimized feature extraction and classification, and introduced a gray value normalization technique as well as a novel approach for automatic model-based correction of classification errors. In total, we analyzed 4,400 real image sequences, covering observation periods of around 120 h each. We performed an extensive quantitative evaluation, which showed that our approach yields high accuracies of 92.2% for segmentation, 98.2% for tracking, and 86.5% for
A Framework for Parallel Nonlinear Optimization by Partitioning Localized Constraints
Xu, You; Chen, Yixin
2008-06-28
We present a novel parallel framework for solving large-scale continuous nonlinear optimization problems based on constraint partitioning. The framework distributes constraints and variables to parallel processors and uses an existing solver to handle the partitioned subproblems. In contrast to most previous decomposition methods that require either separability or convexity of constraints, our approach is based on a new constraint partitioning theory and can handle nonconvex problems with inseparable global constraints. We also propose a hypergraph partitioning method to recognize the problem structure. Experimental results show that the proposed parallel algorithm can efficiently solve some difficult test cases.
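The coordination idea behind constraint partitioning can be sketched on a toy problem. In this minimal Gauss-Seidel scheme (our illustration, not the authors' framework; the penalty coupling and all numbers are assumptions), two "processors" alternately solve closed-form local subproblems that are coupled only through the global constraint x0 + x1 = 1, enforced by a quadratic penalty:

```python
# Illustrative sketch: a global constraint x0 + x1 = 1 couples two otherwise
# independent subproblems min (x0 - a)^2 and min (x1 - b)^2. Each "processor"
# minimizes its own penalized subproblem in turn (Gauss-Seidel), mimicking
# constraint partitioning with a quadratic penalty on the coupling constraint.
def partitioned_solve(a, b, rho=10.0, sweeps=200):
    x0, x1 = 0.0, 0.0
    for _ in range(sweeps):
        # subproblem 1: argmin_x (x - a)^2 + rho*(x + x1 - 1)^2  (closed form)
        x0 = (a + rho * (1.0 - x1)) / (1.0 + rho)
        # subproblem 2: argmin_y (y - b)^2 + rho*(x0 + y - 1)^2  (closed form)
        x1 = (b + rho * (1.0 - x0)) / (1.0 + rho)
    return x0, x1

x0, x1 = partitioned_solve(0.8, 0.2)
```

Each sweep contracts the error by roughly (rho/(1+rho))^2, so the blocks converge to a consistent solution without any processor ever seeing the whole problem.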
NASA Technical Reports Server (NTRS)
Doolin, B. F.
1975-01-01
Classes of large scale dynamic systems were discussed in the context of modern control theory. Specific examples discussed were in the technical fields of aeronautics, water resources and electric power.
Optimal nonlinear guidance for a reentry vehicle
NASA Astrophysics Data System (ADS)
Harel, D.; Guelman, M.
Using the exact nonlinear equations of motion, an optimal guidance law is derived for a reentry vehicle to achieve zero miss distance and a predefined flight path angle at impact. The application of the optimal guidance law in feedback form is based on the on-line solution of a nonlinear algebraic equation. Numerical results are presented.
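The on-line feedback step described above amounts to a root-finding problem. A minimal Newton iteration makes the idea concrete; the stand-in equation below is invented for illustration and is not the paper's guidance equation:

```python
import math

# Hedged sketch: the guidance law requires solving g(x) = 0 on-line at each
# feedback update. Newton's method does this in a handful of iterations.
def newton(g, dg, x0, tol=1e-10, itmax=50):
    x = x0
    for _ in range(itmax):
        step = g(x) / dg(x)   # Newton step: g(x) / g'(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# stand-in equation (assumption, not from the paper):
# find gamma with sin(gamma) = 0.3*gamma + 0.2
g = lambda y: math.sin(y) - 0.3 * y - 0.2
dg = lambda y: math.cos(y) - 0.3
gamma = newton(g, dg, 0.5)
```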
Meda, Shashwath A.; Giuliani, Nicole R.; Calhoun, Vince D.; Jagannathan, Kanchana; Schretlen, David J.; Pulver, Anne; Cascella, Nicola; Keshavan, Matcheri; Kates, Wendy; Buchanan, Robert; Sharma, Tonmoy; Pearlson, Godfrey D.
2008-01-01
Background Many studies have employed voxel-based morphometry (VBM) of MRI images as an automated method of investigating cortical gray matter differences in schizophrenia. However, results from these studies vary widely, likely due to different methodological or statistical approaches. Objective To use VBM to investigate gray matter differences in schizophrenia in a sample significantly larger than any published to date, and to increase statistical power sufficiently to reveal differences missed in smaller analyses. Methods Magnetic resonance whole brain images were acquired from four geographic sites, all using the same model 1.5T scanner and software version, and combined to form a sample of 200 patients with both first episode and chronic schizophrenia and 200 healthy controls, matched for age, gender and scanner location. Gray matter concentration was assessed and compared using optimized VBM. Results Compared to the healthy controls, schizophrenia patients showed significantly less gray matter concentration in multiple cortical and subcortical regions, some previously unreported. Overall, we found lower concentrations of gray matter in regions identified in prior studies, most of which reported only subsets of the affected areas. Conclusions Gray matter differences in schizophrenia are most comprehensively elucidated using a large, diverse and representative sample. PMID:18378428
Prabakaran, G; Hoti, S L
2008-05-01
Reduction of water activity in formulations of the mosquito biocontrol agent Bacillus thuringiensis var. israelensis is very important for long-term, successful storage. A protocol for spray drying of B. thuringiensis var. israelensis was developed by optimizing parameters such as inlet temperature and atomization type. An indigenous isolate of B. thuringiensis var. israelensis (VCRC B-17) was dried by freeze- and spray-drying methods, and the moisture content and mosquito larvicidal activity of the materials produced by the two methods were compared. The larvicidal activity was checked against early fourth-instar Aedes aegypti larvae. Results showed that the freeze-dried powders retained the larvicidal activity fairly well, whereas the spray-dried powder moderately lost its larvicidal activity at the different inlet temperatures. Between the two types of atomization, centrifugal atomization retained more activity than nozzle-type atomization. The optimum inlet temperature for both centrifugal and nozzle atomization was 160 °C. Keeping the outlet temperature constant at 70 °C, the moisture contents of the spray-dried powders produced by centrifugal atomization and of the freeze-dried powders were 10.23% and 11.80%, respectively. The LC50 values for the spray-dried and freeze-dried powders were 17.42 and 16.18 ng/mL, respectively. The spore count of the material before drying was 3 × 10^10 CFU/mL; after spray drying through nozzle and centrifugal atomization at inlet/outlet temperatures of 160 °C/70 °C, it was 2.6 × 10^9 and 5.0 × 10^9 CFU/mL, respectively.
Large-scale sequential quadratic programming algorithms
Eldersveld, S.K.
1992-09-01
The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are:
1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having the full Hessian approximation available is studied and alternative estimates are constructed.
2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained.
3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven.
4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem.
An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.
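The SQP idea underlying this class of algorithms can be sketched on a tiny equality-constrained problem. This is a generic textbook SQP step with an exact Hessian and a dense 3x3 KKT solve, not Eldersveld's reduced-Hessian, sparse, large-scale implementation; the objective and constraint are assumptions chosen for illustration:

```python
import math

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        piv = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[piv] = M[piv], M[k]
        for r in range(k + 1, n):
            m = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= m * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][j] * x[j] for j in range(k + 1, n))) / M[k][k]
    return x

def sqp(x, y, iters=20):
    """Minimize f(x,y) = exp(x) + y^2 subject to x + y - 1 = 0.
    Each iteration solves the KKT system of the local quadratic model."""
    for _ in range(iters):
        g = [math.exp(x), 2.0 * y]              # gradient of f
        H = [[math.exp(x), 0.0], [0.0, 2.0]]    # exact Hessian (the paper uses a quasi-Newton estimate)
        c = x + y - 1.0                         # constraint value
        KKT = [[H[0][0], H[0][1], 1.0],         # constraint Jacobian A = [1, 1]
               [H[1][0], H[1][1], 1.0],
               [1.0,     1.0,     0.0]]
        px, py, lam = gauss_solve(KKT, [-g[0], -g[1], -c])
        x, y = x + px, y + py                   # full Newton-like step
    return x, y

x, y = sqp(0.0, 0.0)
```

At the solution the iterate is feasible and satisfies the first-order condition exp(x) = 2(1 - x), illustrating the superlinear convergence that the large-scale algorithm is designed to preserve.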
NASA Astrophysics Data System (ADS)
Moghaddam, M.; Silva, A.; Entekhabi, D.; Castillo, A. E.; Liu, M.; Burgin, M.; Goykhman, Y.
2011-12-01
We develop energy-efficient wireless sensor network technologies and data analysis techniques for dynamic and near-real-time validation of space-borne soil moisture measurements, in particular those from the Soil Moisture Active and Passive (SMAP) mission. Soil moisture fields are functions of variables that change over time across the range of scales from a few meters to several kilometers, necessitating the deployment of an extensive in-situ network for validation of coarse-resolution retrievals of soil moisture from SMAP and other remote sensing data. Previously we have reported on the scheduling and placement strategies for achieving optimal spatial and temporal sampling by the network. This work focuses on the latest developments of the large-scale wireless sensor network architecture that we have termed the Ripple architecture, and in particular, its latest version Ripple-2. The new network architecture solves many of the previous problems encountered during field deployments of the SoilSCAPE network, including reliability and scalability. The new architecture will be described, along with the results of the latest field deployments at the University of Michigan Matthaei botanical gardens and at the representative field site in Canton, Oklahoma. The status of the large-scale deployment at the Tonzi Ranch in central California will also be given. Additionally, the latest results of hydrologic and radar landscape simulators will also be presented, highlighting the connection between the SoilSCAPE network data, remote sensing retrievals, and the target science application of SMAP validation.
You, Shutang; Hadley, Stanton W.; Shankar, Mallikarjun; Liu, Yilu
2016-01-12
This paper studies the generation and transmission expansion co-optimization problem with a high wind power penetration rate in the US Eastern Interconnection (EI) power grid. The generation and transmission expansion problem for the EI system is modeled as a mixed-integer programming (MIP) problem. We also analyze a time-series generation method to capture the variation and correlation of both load and wind power across regions. The obtained series can easily be introduced into the expansion planning problem and then solved through existing MIP solvers. Simulation results show that the proposed planning model and series generation method can improve the expansion result significantly by modeling more detailed information of wind and load variation among regions in the US EI system. Moreover, the improved expansion plan that combines generation and transmission will aid system planners and policy makers in maximizing social welfare in large-scale power grids.
Structural optimization for nonlinear dynamic response.
Dou, Suguang; Strachan, B Scott; Shaw, Steven W; Jensen, Jakob S
2015-09-28
Much is known about the nonlinear resonant response of mechanical systems, but methods for the systematic design of structures that optimize aspects of these responses have received little attention. Progress in this area is particularly important in the area of micro-systems, where nonlinear resonant behaviour is being used for a variety of applications in sensing and signal conditioning. In this work, we describe a computational method that provides a systematic means for manipulating and optimizing features of nonlinear resonant responses of mechanical structures that are described by a single vibrating mode, or by a pair of internally resonant modes. The approach combines techniques from nonlinear dynamics, computational mechanics and optimization, and it allows one to relate the geometric and material properties of structural elements to terms in the normal form for a given resonance condition, thereby providing a means for tailoring its nonlinear response. The method is applied to the fundamental nonlinear resonance of a clamped-clamped beam and to the coupled mode response of a frame structure, and the results show that one can modify essential normal form coefficients by an order of magnitude by relatively simple changes in the shape of these elements. We expect the proposed approach, and its extensions, to be useful for the design of systems used for fundamental studies of nonlinear behaviour as well as for the development of commercial devices that exploit nonlinear behaviour.
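The single-mode resonant behavior discussed above is often summarized by a normal-form backbone curve relating response amplitude to resonant frequency. As a generic illustration (the textbook Duffing result with assumed parameter values, not the paper's beam or frame structures):

```python
def backbone(a, omega0=1.0, gamma=0.1):
    """Amplitude-dependent resonant frequency of a Duffing oscillator,
    from first-order averaging / normal form:
        w(a) = w0 + 3*gamma*a^2 / (8*w0)
    gamma is the cubic-stiffness normal-form coefficient; tuning it by
    structural shape changes (as in the paper) bends the backbone curve."""
    return omega0 + 3.0 * gamma * a * a / (8.0 * omega0)
```

Changing the normal-form coefficient gamma by an order of magnitude, as the authors report for simple shape changes, correspondingly steepens or flattens this amplitude-frequency relation.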
Social Emotional Optimization Algorithm for Nonlinear Constrained Optimization Problems
NASA Astrophysics Data System (ADS)
Xu, Yuechun; Cui, Zhihua; Zeng, Jianchao
Nonlinear programming is an important branch of operational research and has been applied successfully to various real-life problems. In this paper, a new approach called the social emotional optimization algorithm (SEOA) is used to solve this problem; it is a new swarm intelligence technique that simulates human behavior guided by emotion. Simulation results show that the social emotional optimization algorithm proposed in this paper is effective and efficient for nonlinear constrained programming problems.
Nonlinear optimization for stochastic simulations.
Johnson, Michael M.; Yoshimura, Ann S.; Hough, Patricia Diane; Ammerlahn, Heidi R.
2003-12-01
This report describes research targeting development of stochastic optimization algorithms and their application to mission-critical optimization problems in which uncertainty arises. The first section of this report covers the enhancement of the Trust Region Parallel Direct Search (TRPDS) algorithm to address stochastic responses and the incorporation of the algorithm into the OPT++ optimization library. The second section describes the Weapons of Mass Destruction Decision Analysis Center (WMD-DAC) suite of systems analysis tools and motivates the use of stochastic optimization techniques in such non-deterministic simulations. The third section details a batch programming interface designed to facilitate criteria-based or algorithm-driven execution of system-of-system simulations. The fourth section outlines the use of the enhanced OPT++ library and batch execution mechanism to perform systems analysis and technology trade-off studies in the WMD detection and response problem domain.
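The difficulty that the TRPDS enhancement addresses, comparing candidate designs when the objective itself is stochastic, can be sketched with a generic direct-search loop that averages replications before accepting a move. This is our toy illustration, not the OPT++/TRPDS implementation; the objective function, noise level, and replication count are all assumptions:

```python
import random

def noisy_f(x, rng):
    # stand-in stochastic response: true optimum at x = 1, Gaussian noise
    return (x - 1.0) ** 2 + rng.gauss(0.0, 0.05)

def mean_f(x, rng, reps=30):
    """Average several replications so a move is accepted only when the
    improvement exceeds the simulation noise."""
    return sum(noisy_f(x, rng) for _ in range(reps)) / reps

def pattern_search(x, rng, step=1.0, tol=1e-3):
    """Derivative-free direct search: probe left/right, accept improvement,
    otherwise shrink the pattern (as in parallel direct search methods)."""
    while step > tol:
        fx = mean_f(x, rng)
        best = min((x - step, x + step), key=lambda t: mean_f(t, rng))
        if mean_f(best, rng) < fx:
            x = best          # accept the move
        else:
            step *= 0.5       # shrink the pattern
    return x

rng = random.Random(3)
x = pattern_search(0.0, rng)
```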
NASA Astrophysics Data System (ADS)
Alliss, R.; Apling, D.; Kiley, H.; Mason, M.
2011-12-01
The United States Government has an ambitious goal of growing renewable energy from 1% to 20% by 2030. Two key challenges exist in order to realize this target: Creating system-level approaches to overall generation capacity expansion and integration, including difficult policy changes, and addressing the variability issues of wind and solar generation. These challenges are addressed using MORE Power (Maximizing and Optimizing Renewable Energy), a system level planning tool designed to optimize the placement of wind and solar sites to maximize high quality, useable power. This planning tool uses historical, high resolution, measurements of wind and solar parameters along with a unique, non-linear, optimization algorithm to optimize the placement of sites given a set of user specified input parameters. MORE Power is quantifying the real value of transmission as an enabler to aggregate diverse variable resources which in turn is incentivizing transmission developers to expand the grid. In addition, the issue of grid stability becomes even more critical as larger deployment of renewable resources come online. MORE Power is identifying the benefits of larger balancing areas as an enabler for greater stability and therefore a reduced need to keep transmission capacity in reserve. In the end, by addressing and minimizing the impacts of the natural variability of wind and solar, a reduction in price volatility results which favorably impacts the consumer. This presentation will show examples of how MORE Power is being used to address the variability issue of renewables in order to achieve the 20% deployment target by 2030.
Large scale tracking algorithms.
Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry
2015-01-01
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low-resolution sensors, "blob" tracking is the norm. For higher-resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
Particle swarm optimization for complex nonlinear optimization problems
NASA Astrophysics Data System (ADS)
Alexandridis, Alex; Famelis, Ioannis Th.; Tsitouras, Charalambos
2016-06-01
This work presents the application of a technique belonging to evolutionary computation, namely particle swarm optimization (PSO), to complex nonlinear optimization problems. More specifically, a PSO optimizer is set up and applied to the derivation of Runge-Kutta pairs for the numerical solution of initial value problems. The effect of critical PSO operational parameters on the performance of the proposed scheme is thoroughly investigated.
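A generic PSO of the kind applied here fits in a few lines. This is the textbook velocity/position update with assumed inertia and acceleration coefficients, not the authors' exact setup for Runge-Kutta pair derivation:

```python
import random

def pso(f, dim, lo, hi, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimize f over [lo, hi]^dim with a basic particle swarm.
    w: inertia weight; c1/c2: cognitive/social acceleration (assumed values)."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                  # personal best positions
    pbest = [f(x) for x in X]
    gi = min(range(n), key=lambda i: pbest[i])
    G, gbest = P[gi][:], pbest[gi]         # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (G[d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pbest[i]:
                pbest[i], P[i] = fx, X[i][:]
                if fx < gbest:
                    gbest, G = fx, X[i][:]
    return G, gbest

# usage: minimize the 3-d sphere function
best, val = pso(lambda x: sum(xi * xi for xi in x), dim=3, lo=-5, hi=5)
```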
Optimization of nonlinear aeroelastic tailoring criteria
NASA Technical Reports Server (NTRS)
Abdi, F.; Ide, H.; Shankar, V. J.; Sobieszczanski-Sobieski, J.
1988-01-01
A static flexible fighter aircraft wing configuration is presently addressed by a multilevel optimization technique, based on both a full-potential concept and a rapid structural optimization program, which can be applied to such aircraft-design problems as maneuver load control, aileron reversal, and lift effectiveness. It is found that nonlinearities are important in the design of an aircraft whose flight envelope encompasses the transonic regime, and that the present structural suboptimization produces a significantly lighter wing by reducing ply thicknesses.
GGOPT: an unconstrained non-linear optimizer.
Bassingthwaighte, J B; Chan, I S; Goldstein, A A; Russak, I B
1988-01-01
GGOPT is a derivative-free non-linear optimizer for smooth functions with added noise. If the function values arise from observations or from extensive computations, these errors can be considerable. GGOPT uses an adjustable mesh together with linear least squares to find smoothed values of the function, gradient and Hessian at the center of the mesh. These values drive a descent method that estimates optimal parameters. The smoothed values usually result in increased accuracy.
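GGOPT's smoothing idea can be illustrated in one dimension: on a symmetric mesh, the least-squares quadratic fit yields a smoothed slope estimate that tolerates noise far better than direct differencing. This is our sketch of the idea, not the GGOPT code; the mesh size and noise level are assumptions:

```python
import random

def smoothed_gradient(f, x0, h=0.1, m=3):
    """Least-squares slope of f over the mesh x0 + k*h, k = -m..m.
    For a quadratic fit on a symmetric mesh the slope estimate decouples:
    b = sum(u*y) / sum(u*u) with u = k*h (odd-power sums vanish)."""
    us = [k * h for k in range(-m, m + 1)]
    ys = [f(x0 + u) for u in us]
    return sum(u * y for u, y in zip(us, ys)) / sum(u * u for u in us)

rng = random.Random(0)
noisy = lambda x: (x - 1.0) ** 2 + rng.gauss(0.0, 1e-3)  # noisy observations
g = smoothed_gradient(noisy, 2.0)   # true derivative at x0 = 2 is 2.0
```

Averaging over the whole mesh suppresses the noise that would dominate a finite-difference estimate with a comparably small step.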
Optimization approaches to nonlinear model predictive control
Biegler, L.T. (Dept. of Chemical Engineering); Rawlings, J.B. (Dept. of Chemical Engineering)
1991-01-01
With the development of sophisticated methods for nonlinear programming and powerful computer hardware, it has become useful and efficient to formulate and solve nonlinear process control problems through on-line optimization methods. This paper explores and reviews control techniques based on repeated solution of nonlinear programming (NLP) problems. Several advantages present themselves here, including minimization of readily quantifiable objectives, coordinated and accurate handling of process nonlinearities and interactions, and systematic ways of dealing with process constraints. We motivate this NLP-based approach with small nonlinear examples and present a basic algorithm for optimization-based process control. As can be seen, this approach is a straightforward extension of popular model-predictive controllers (MPCs) that are used for linear systems. The statement of the basic algorithm raises a number of questions regarding stability and robustness of the method, efficiency of the control calculations, incorporation of feedback into the controller, and reliable ways of handling process constraints. Each of these is treated through analysis and/or modification of the basic algorithm. To highlight and support this discussion, several examples are presented and key results are examined and further developed.
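The basic receding-horizon loop that such controllers build on can be sketched for a scalar toy plant. This is our illustration with assumed dynamics, cost weights, and a crude grid search standing in for the on-line NLP solve:

```python
def mpc_step(x, N=5, grid=None):
    """Solve a finite-horizon problem for plant x+ = x + 0.1*u with cost
    sum(x^2 + 0.01*u^2); here a coarse grid search over a constant control
    stands in for the NLP solver, and only the first move is returned."""
    grid = grid or [u / 10.0 for u in range(-50, 51)]   # u in [-5, 5]
    def cost(u):
        c, xi = 0.0, x
        for _ in range(N):
            xi = xi + 0.1 * u           # predict the state forward
            c += xi * xi + 0.01 * u * u
        return c
    return min(grid, key=cost)

# receding-horizon loop: apply the first control, measure, re-solve
x = 2.0
for _ in range(40):
    u = mpc_step(x)
    x = x + 0.1 * u
```

In a real NMPC controller the grid search is replaced by an NLP solver, but the structure (predict, optimize, apply first move, repeat) is exactly this loop.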
Optimal singular control for nonlinear semistabilisation
NASA Astrophysics Data System (ADS)
L'Afflitto, Andrea; Haddad, Wassim M.
2016-06-01
The singular optimal control problem for asymptotic stabilisation has been extensively studied in the literature. In this paper, the optimal singular control problem is extended to address a weaker version of closed-loop stability, namely, semistability, which is of paramount importance for consensus control of network dynamical systems. Three approaches are presented to address the nonlinear semistable singular control problem. First, a singular perturbation method is presented to construct a state-feedback singular controller that guarantees closed-loop semistability for nonlinear systems. In this approach, we show that for a non-negative cost-to-go function the minimum cost of a nonlinear semistabilising singular controller is lower than the minimum cost of a singular controller that guarantees asymptotic stability of the closed-loop system. In the second approach, we solve the nonlinear semistable singular control problem by using the cost-to-go function to cancel the singularities in the corresponding Hamilton-Jacobi-Bellman equation. For this case, we show that the minimum value of the singular performance measure is zero. Finally, we provide a framework based on the concepts of state-feedback linearisation and feedback equivalence to solve the singular control problem for semistabilisation of nonlinear dynamical systems. For this approach, we also show that the minimum value of the singular performance measure is zero. Three numerical examples are presented to demonstrate the efficacy of the proposed singular semistabilisation frameworks.
Optimized spectral estimation for nonlinear synchronizing systems
NASA Astrophysics Data System (ADS)
Sommerlade, Linda; Mader, Malenka; Mader, Wolfgang; Timmer, Jens; Thiel, Marco; Grebogi, Celso; Schelter, Björn
2014-03-01
In many fields of research nonlinear dynamical systems are investigated. When more than one process is measured, besides the distinct properties of the individual processes, their interactions are of interest. Often linear methods such as coherence are used for the analysis. The estimation of coherence can lead to false conclusions when applied without fulfilling several key assumptions. We introduce a data driven method to optimize the choice of the parameters for spectral estimation. Its applicability is demonstrated based on analytical calculations and exemplified in a simulation study. We complete our investigation with an application to nonlinear tremor signals in Parkinson's disease. In particular, we analyze electroencephalogram and electromyogram data.
Optimal Parametric Feedback Excitation of Nonlinear Oscillators
NASA Astrophysics Data System (ADS)
Braun, David J.
2016-01-01
An optimal parametric feedback excitation principle is sought, found, and investigated. The principle is shown to provide an adaptive resonance condition that enables unprecedentedly robust movement generation in a large class of oscillatory dynamical systems. Experimental demonstration of the theory is provided by a nonlinear electronic circuit that realizes self-adaptive parametric excitation without model information, signal processing, and control computation. The observed behavior dramatically differs from the one achievable using classical parametric modulation, which is fundamentally limited by uncertainties in model information and nonlinear effects inevitably present in real world applications.
Nonlinear Brightness Optimization in Compton Scattering
Hartemann, Fred V.; Wu, Sheldon S. Q.
2013-07-26
In Compton scattering light sources, a laser pulse is scattered by a relativistic electron beam to generate tunable x and gamma rays. Because of the inhomogeneous nature of the incident radiation, the relativistic Lorentz boost of the electrons is modulated by the ponderomotive force during the interaction, leading to intrinsic spectral broadening and brightness limitations. We discuss these effects, along with an optimization strategy to properly balance the laser bandwidth, diffraction, and nonlinear ponderomotive force.
NASA Astrophysics Data System (ADS)
Alisic, L.; Gurnis, M.; Stadler, G.; Burstedde, C.; Wilcox, L. C.; Ghattas, O.
2009-12-01
A full understanding of the dynamics of plate motions requires numerical models with a realistic, nonlinear rheology and a mesh resolution sufficiently high to resolve large variations in viscosity over short length scales. We suspect that resolutions as fine as 1 km locally in global models of the whole mantle and lithosphere are necessary. We use the adaptive mesh mantle convection code Rhea to model convection in the mantle with plates in both regional and global domains. Rhea is a new generation parallel finite element mantle convection code designed to scale to hundreds of thousands of compute cores. It uses forest-of-octree-based adaptive meshes via the p4est library. With Rhea's adaptive capabilities we can create local resolution down to ~ 1 km around plate boundaries, while keeping the mesh at a much coarser resolution away from small features. The global models in this study have approximately 160 million elements, a reduction of ~ 2000x compared to a uniform mesh of the same high resolution. The unprecedented resolution in these global models allows us, for the first time, to resolve viscous dissipation in the bending plate as well as observe the trade-off between this process and the strength of slabs and the resistance of dipping thrust faults. Since plate velocities and 'plateness' are dynamic outcomes of numerical modeling, we must carefully incorporate both the full buoyancy field and the details of all plate boundaries at a fine scale. The global models were constructed with detailed maps of the age of the plates and a thermal model of the seismicity-defined slabs which grades into the more diffuse buoyancy resolved with tomography. In the regional models, the thermal model consists of plates following a halfspace cooling model, and slabs for which buoyancy is conserved at every depth. A composite formulation of Newtonian and non-Newtonian rheology along with yielding is implemented; plate boundaries are modeled as very narrow weak zones. Plate
Sensitivity technologies for large scale simulation.
Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias; Wilcox, Lucas C.; Hill, Judith C.; Ghattas, Omar; Berggren, Martin Olof; Akcelik, Volkan; Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard
2005-01-01
order approximation of the Euler equations and used as a preconditioner. In comparison to other methods, the AD preconditioner showed better convergence behavior. Our ultimate target is to perform shape optimization and hp adaptivity using adjoint formulations in the Premo compressible fluid flow simulator. A mathematical formulation for mixed-level simulation algorithms has been developed where different physics interact at potentially different spatial resolutions in a single domain. To minimize the implementation effort, explicit solution methods can be considered, however, implicit methods are preferred if computational efficiency is of high priority. We present the use of a partial elimination nonlinear solver technique to solve these mixed level problems and show how these formulation are closely coupled to intrusive optimization approaches and sensitivity analyses. Production codes are typically not designed for sensitivity analysis or large scale optimization. The implementation of our optimization libraries into multiple production simulation codes in which each code has their own linear algebra interface becomes an intractable problem. In an attempt to streamline this task, we have developed a standard interface between the numerical algorithm (such as optimization) and the underlying linear algebra. These interfaces (TSFCore and TSFCoreNonlin) have been adopted by the Trilinos framework and the goal is to promote the use of these interfaces especially with new developments. Finally, an adjoint based a posteriori error estimator has been developed for discontinuous Galerkin discretization of Poisson's equation. The goal is to investigate other ways to leverage the adjoint calculations and we show how the convergence of the forward problem can be improved by adapting the grid using adjoint-based error estimates. 
Error estimation is usually conducted with continuous adjoints but if discrete adjoints are available it may be possible to reuse the discrete version
Traveltime tomography and nonlinear constrained optimization
Berryman, J.G.
1988-10-01
Fermat's principle of least traveltime states that the first arrivals follow ray paths with the smallest overall traveltime from the point of transmission to the point of reception. This principle determines a definite convex set of feasible slowness models - depending only on the traveltime data - for the fully nonlinear traveltime inversion problem. The existence of such a convex set allows us to transform the inversion problem into a nonlinear constrained optimization problem. Fermat's principle also shows that the standard undamped least-squares solution to the inversion problem always produces a slowness model with many ray paths having traveltime shorter than the measured traveltime (an impossibility even if the trial ray paths are not the true ray paths). In a damped least-squares inversion, the damping parameter may be varied to allow efficient location of a slowness model on the feasibility boundary.
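The damped least-squares step mentioned at the end can be written out for a toy linear tomography problem. This is the generic Tikhonov formulation; the matrix G, data t, and damping mu below are made-up toy values, and the paper's feasibility-constraint machinery is omitted:

```python
# Damped least-squares inversion: minimize ||G s - t||^2 + mu * ||s||^2
# over slowness s, i.e. solve the normal equations (G^T G + mu I) s = G^T t.
def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        piv = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[piv] = M[piv], M[k]
        for r in range(k + 1, n):
            m = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= m * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][j] * x[j] for j in range(k + 1, n))) / M[k][k]
    return x

def damped_lsq(G, t, mu):
    n = len(G[0])
    GtG = [[sum(G[r][i] * G[r][j] for r in range(len(G))) + (mu if i == j else 0.0)
            for j in range(n)] for i in range(n)]
    Gtt = [sum(G[r][i] * t[r] for r in range(len(G))) for i in range(n)]
    return gauss_solve(GtG, Gtt)

G = [[1.0, 1.0], [1.0, 2.0], [0.0, 1.0]]   # toy ray-path lengths through 2 cells
t = [5.0, 8.0, 3.0]                        # toy traveltimes (consistent with s = [2, 3])
s = damped_lsq(G, t, mu=1e-6)
```

Raising mu biases the model toward small slowness but stabilizes the solve; varying it traces out candidate models, from which one near the feasibility boundary can be selected.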
Nonlinear simulations to optimize magnetic nanoparticle hyperthermia
Reeves, Daniel B.; Weaver, John B.
2014-03-10
Magnetic nanoparticle hyperthermia is an attractive emerging cancer treatment, but the acting microscopic energy deposition mechanisms are not well understood and optimization suffers. We describe several approximate forms for the characteristic time of Néel rotations with varying properties and external influences. We then present stochastic simulations that show agreement between the approximate expressions and the micromagnetic model. The simulations show nonlinear imaginary responses and associated relaxational hysteresis due to the field and frequency dependencies of the magnetization. This suggests that efficient heating is possible by matching fields to particles instead of resorting to maximizing the power of the applied magnetic fields.
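The characteristic Néel time alluded to above has a textbook Arrhenius-type approximate form. As a sketch with assumed parameter values (not the paper's stochastic simulations):

```python
import math

def neel_time(K, d_nm, T, tau0=1e-9):
    """Néel-Arrhenius relaxation time tau_N = tau_0 * exp(K*V / (k_B*T)).
    K: anisotropy constant (J/m^3), d_nm: particle diameter (nm), T: temperature (K).
    tau0 ~ 1e-9 s is a commonly assumed attempt time."""
    kB = 1.380649e-23                        # Boltzmann constant, J/K
    V = math.pi / 6.0 * (d_nm * 1e-9) ** 3   # particle volume for diameter d_nm
    return tau0 * math.exp(K * V / (kB * T))

tau = neel_time(K=1.0e4, d_nm=12.0, T=300.0)  # assumed illustrative values
```

Because the exponent scales with particle volume, matching the applied field frequency to this relaxation time is the essence of the "matching fields to particles" heating strategy.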
Nonlinear Global Optimization Using Curdling Algorithm
1996-03-01
An algorithm for performing curdling optimization, a derivative-free, grid-refinement approach to nonlinear optimization, was developed and implemented in software. This approach overcomes a number of deficiencies in existing approaches. Most notably, it finds extremal regions rather than only single extremal points. The program is interactive and collects information on control parameters and constraints using menus. For up to four dimensions, function convergence is displayed graphically. Because the algorithm does not compute derivatives, gradients or vectors, it is numerically stable. It can find all the roots of a polynomial in one pass. It is an inherently parallel algorithm. Constraints are handled as being initially fuzzy, but become tighter with each iteration.
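The grid-refinement idea can be sketched as follows. This is our reconstruction from the abstract, not the released program; the grid sizes and keep-best rule are assumptions. Note how it tracks whole cells, i.e. extremal regions, rather than single points:

```python
def refine_minimize(f, lo, hi, levels=8, n=16, keep=4):
    """Derivative-free grid refinement in 1-D: subdivide each surviving cell
    into n subcells, score each by its midpoint value, keep the best `keep`
    cells as candidate extremal regions, and repeat."""
    cells = [(lo, hi)]
    for _ in range(levels):
        children = []
        for a, b in cells:
            w = (b - a) / n
            children += [(a + i * w, a + (i + 1) * w) for i in range(n)]
        children.sort(key=lambda ab: f(0.5 * (ab[0] + ab[1])))
        cells = children[:keep]
    a, b = cells[0]
    return 0.5 * (a + b)

# usage: minimize (x^2 - 2)^2 on [0, 2]; minimizer is sqrt(2)
x = refine_minimize(lambda x: (x * x - 2.0) ** 2, 0.0, 2.0)
```

Since no derivatives are evaluated, the scheme is insensitive to non-smoothness, and keeping several cells per level lets it retain multiple extremal regions at once.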
Nonlinear optimization simplified by hypersurface deformation
Stillinger, F.H.; Weber, T.A.
1988-09-01
A general strategy is advanced for simplifying nonlinear optimization problems, the ant-lion method. This approach exploits shape modifications of the cost-function hypersurface which distend basins surrounding low-lying minima (including global minima). By intertwining hypersurface deformations with steepest-descent displacements, the search is concentrated on a small relevant subset of all minima. Specific calculations demonstrating the value of this method are reported for the partitioning of two classes of irregular but nonrandom graphs, the prime-factor graphs and the pi graphs. We also indicate how this approach can be applied to the traveling salesman problem and to design layout optimization, and that it may be useful in combination with simulated annealing strategies.
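The strategy of intertwining hypersurface deformation with steepest descent can be illustrated on a one-dimensional double well. This is our toy with an assumed convexifying deformation, not the paper's graph-partitioning application:

```python
# Ant-lion-style continuation sketch: deform the cost so it is convex,
# descend, then gradually restore the original double well
#     f(x) = (x^2 - 1)^2 + 0.3*x
# while tracking the minimizer. The deformation keeps the search inside
# the basin of the low-lying (global) minimum near x = -1.
def descend(grad, x, lr=0.01, steps=500):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def antlion_minimize(x=0.0, stages=11):
    for k in range(stages):
        s = k / (stages - 1)     # s: 0 (fully deformed) -> 1 (original surface)
        c = 10.0 * (1.0 - s)     # strength of the convexifying term c*x^2
        grad = lambda x, c=c: 4 * x**3 - 4 * x + 0.3 + 2 * c * x
        x = descend(grad, x)
    return x

xmin = antlion_minimize()   # lands in the global (left) well
```

Plain steepest descent from the same start can be trapped in the shallower right well; the deformation schedule steers the iterate into the distended basin of the global minimum before the barrier reappears.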
Large-scale instabilities of helical flows
NASA Astrophysics Data System (ADS)
Cameron, Alexandre; Alexakis, Alexandros; Brachet, Marc-Étienne
2016-10-01
Large-scale hydrodynamic instabilities of periodic helical flows of a given wave number K are investigated using three-dimensional Floquet numerical computations. In the Floquet formalism the unstable field is expanded in modes of different spatial periodicity. This allows us (i) to clearly distinguish large-scale from small-scale instabilities and (ii) to study modes of wave number q of arbitrarily large scale separation q ≪ K. Different flows are examined, including flows that exhibit small-scale turbulence. The growth rate σ of the most unstable mode is measured as a function of the scale separation q/K ≪ 1 and the Reynolds number Re. It is shown that the growth rate follows the scaling σ ∝ q if an AKA effect [Frisch et al., Physica D: Nonlinear Phenomena 28, 382 (1987), 10.1016/0167-2789(87)90026-1] is present, or a negative-eddy-viscosity scaling σ ∝ q² in its absence. This holds not only in the Re ≪ 1 regime, where previously derived asymptotic results are verified, but also for Re = O(1), which is beyond their range of validity. Furthermore, for values of Re above a critical value ReSc beyond which small-scale instabilities are present, the growth rate becomes independent of q and the energy of the perturbation at large scales decreases with scale separation. The behavior of these large-scale instabilities is also examined in the nonlinear regime, where the largest scales of the system are found to be the most dominant energetically. These results are interpreted by low-order models.
Inverting magnetic meridian data using nonlinear optimization
NASA Astrophysics Data System (ADS)
Connors, Martin; Rostoker, Gordon
2015-09-01
A nonlinear optimization algorithm coupled with a model of auroral current systems allows derivation of physical parameters from data and is the basis of a new inversion technique. We refer to this technique as automated forward modeling (AFM), with the variant used here being automated meridian modeling (AMM). AFM is applicable on scales from regional to global, yielding simple and easily understood output, and using only magnetic data with no assumptions about electrodynamic parameters. We have found the most useful output parameters to be the total current and the boundaries of the auroral electrojet on a meridian densely populated with magnetometers, as derived by AMM. Here, we describe application of AFM nonlinear optimization to magnetic data and then describe the use of AMM to study substorms with magnetic data from ground meridian chains as input. AMM inversion results are compared to optical data, results from other inversion methods, and field-aligned current data from AMPERE. AMM yields physical parameters meaningful in describing local electrodynamics and is suitable for ongoing monitoring of activity. The relation of AMM model parameters to equivalent currents is discussed, and the two are found to compare well if the field-aligned currents are far from the inversion meridian.
Very Large Scale Integration (VLSI).
ERIC Educational Resources Information Center
Yeaman, Andrew R. J.
Very Large Scale Integration (VLSI), the state-of-the-art production techniques for computer chips, promises such powerful, inexpensive computing that, in the future, people will be able to communicate with computer devices in natural language or even speech. However, before full-scale VLSI implementation can occur, certain salient factors must be…
Galaxy clustering on large scales.
Efstathiou, G
1993-06-01
I describe some recent observations of large-scale structure in the galaxy distribution. The best constraints come from two-dimensional galaxy surveys and studies of angular correlation functions. Results from galaxy redshift surveys are much less precise but are consistent with the angular correlations, provided the distortions in mapping between real space and redshift space are relatively weak. The galaxy two-point correlation function, rich-cluster two-point correlation function, and galaxy-cluster cross-correlation function are all well described on large scales (≳ 20 h⁻¹ Mpc, where the Hubble constant H₀ = 100h km s⁻¹ Mpc⁻¹; 1 pc = 3.09 × 10¹⁶ m) by the power spectrum of an initially scale-invariant, adiabatic, cold-dark-matter Universe with Γ = Ωh ≈ 0.2. I discuss how this fits in with the Cosmic Background Explorer (COBE) satellite detection of large-scale anisotropies in the microwave background radiation and other measures of large-scale structure in the Universe.
Matching trajectory optimization and nonlinear tracking control for HALE
NASA Astrophysics Data System (ADS)
Lee, Sangjong; Jang, Jieun; Ryu, Hyeok; Lee, Kyun Ho
2014-11-01
This paper concerns optimal trajectory generation and nonlinear tracking control for the stratospheric airship platform VIA-200. To compensate for the mismatch between the point-mass model of the optimal trajectory and the 6-DOF model of the nonlinear tracking problem, a new matching trajectory optimization approach is proposed. The proposed idea reduces the dissimilarity of the two problems and reduces the uncertainties in the nonlinear equations of motion for a stratospheric airship. In addition, its refined optimal trajectories yield better results under jet stream conditions during flight. The resultant optimal trajectories of VIA-200 are full three-dimensional ascent flight trajectories reflecting the realistic constraints of flight conditions and airship performance with and without a jet stream. Finally, 6-DOF nonlinear equations of motion are derived, including a moving wind field, and the vectorial backstepping approach is applied. The resulting tracking performance demonstrates that the proposed matching optimization method enables the smooth linkage of trajectory optimization to tracking control problems.
Li, Ji-Qing; Zhang, Yu-Shan; Ji, Chang-Ming; Wang, Ai-Jing; Lund, Jay R
2013-01-01
This paper examines long-term optimal operation using dynamic programming for a large hydropower system of 10 reservoirs in Northeast China. Besides considering flow and hydraulic head, the optimization explicitly includes time-varying electricity market prices to maximize benefit. Two techniques are used to reduce the 'curse of dimensionality' of dynamic programming with many reservoirs. Discrete differential dynamic programming (DDDP) reduces the search space and computer memory needed. Object-oriented programming (OOP) and the ability to dynamically allocate and release memory with the C++ language greatly reduces the cumulative effect of computer memory for solving multi-dimensional dynamic programming models. The case study shows that the model can reduce the 'curse of dimensionality' and achieve satisfactory results. PMID:24334896
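The DDDP idea described here, running each dynamic-programming pass only over a narrow corridor of storage states around an incumbent trajectory and then narrowing the corridor, can be sketched for a single toy reservoir. All function and parameter names below are illustrative, not from the paper:

```python
import numpy as np

def dddp(benefit, inflow, s0, sT, s_min, s_max, T, delta=1.0, widths=(8, 4, 2, 1)):
    """Discrete differential DP sketch for one reservoir: each pass runs
    a standard backward DP over a corridor of storage states around the
    incumbent trajectory, then the corridor is narrowed. benefit(t, s, r)
    is the stage reward for release r at storage s. Illustrative only."""
    traj = np.linspace(s0, sT, T + 1)             # initial trial trajectory
    for w in widths:
        offsets = np.arange(-w, w + 1) * delta
        states = [np.clip(traj[t] + offsets, s_min, s_max) for t in range(T + 1)]
        states[0], states[T] = np.array([float(s0)]), np.array([float(sT)])
        val = [None] * (T + 1)
        arg = [None] * T
        val[T] = np.zeros(len(states[T]))
        for t in range(T - 1, -1, -1):            # backward recursion
            V = np.full(len(states[t]), -np.inf)
            A = np.zeros(len(states[t]), dtype=int)
            for i, s in enumerate(states[t]):
                for j, s2 in enumerate(states[t + 1]):
                    r = s + inflow[t] - s2        # release from mass balance
                    if r < 0:
                        continue
                    v = benefit(t, s, r) + val[t + 1][j]
                    if v > V[i]:
                        V[i], A[i] = v, j
            val[t], arg[t] = V, A
        i = 0
        for t in range(T):                        # forward pass: improved trajectory
            i = arg[t][i]
            traj[t + 1] = states[t + 1][i]
    return traj

inflow = [1.0] * 6
traj = dddp(lambda t, s, r: r * s, inflow, s0=5, sT=5, s_min=0, s_max=10, T=6)
```

Restricting each pass to a corridor is what tames the curse of dimensionality: the state grid per stage has 2w + 1 points instead of the full discretization.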
Optimal second order sliding mode control for nonlinear uncertain systems.
Das, Madhulika; Mahanta, Chitralekha
2014-07-01
In this paper, a chattering-free optimal second order sliding mode control (OSOSMC) method is proposed to stabilize nonlinear systems affected by uncertainties. The nonlinear optimal control strategy is based on the control Lyapunov function (CLF). For ensuring robustness of the optimal controller in the presence of parametric uncertainty and external disturbances, a sliding mode control scheme is realized by combining an integral and a terminal sliding surface. The resulting second order sliding mode can effectively reduce chattering in the control input. Simulation results confirm the superiority of the proposed optimal second order sliding mode control over some existing sliding mode controllers in controlling nonlinear systems affected by uncertainty.
Guaranteed robustness properties of multivariable nonlinear stochastic optimal regulators
NASA Technical Reports Server (NTRS)
Tsitsiklis, J. N.; Athans, M.
1984-01-01
The robustness of optimal regulators for nonlinear, deterministic and stochastic, multi-input dynamical systems is studied under the assumption that all state variables can be measured. It is shown that, under mild assumptions, such nonlinear regulators have a guaranteed infinite gain margin; moreover, they have a guaranteed 50 percent gain reduction margin and a 60 degree phase margin, in each feedback channel, provided that the system is linear in the control and the penalty to the control is quadratic, thus extending the well-known properties of LQ regulators to nonlinear optimal designs. These results are also valid for infinite horizon, average cost, stochastic optimal control problems.
On a Highly Nonlinear Self-Obstacle Optimal Control Problem
Di Donato, Daniela; Mugnai, Dimitri
2015-10-15
We consider a non-quadratic optimal control problem associated to a nonlinear elliptic variational inequality, where the obstacle is the control itself. We show that, fixed a desired profile, there exists an optimal solution which is not far from it. Detailed characterizations of the optimal solution are given, also in terms of approximating problems.
Lyapunov optimal feedback control of a nonlinear inverted pendulum
NASA Technical Reports Server (NTRS)
Grantham, W. J.; Anderson, M. J.
1989-01-01
Lyapunov optimal feedback control is applied to a nonlinear inverted pendulum in which the control torque is constrained to be less than the nonlinear gravity torque in the model. This necessitates a control algorithm that 'rocks' the pendulum out of its potential wells in order to stabilize it at a unique vertical position. Simulation results indicate that a preliminary Lyapunov feedback controller can successfully overcome the nonlinearity and bring almost all trajectories to the target.
EINSTEIN'S SIGNATURE IN COSMOLOGICAL LARGE-SCALE STRUCTURE
Bruni, Marco; Hidalgo, Juan Carlos; Wands, David
2014-10-10
We show how the nonlinearity of general relativity generates a characteristic non-Gaussian signal in cosmological large-scale structure that we calculate at all perturbative orders in a large-scale limit. Newtonian gravity and general relativity provide complementary theoretical frameworks for modeling large-scale structure in ΛCDM cosmology; a relativistic approach is essential to determine initial conditions, which can then be used in Newtonian simulations studying the nonlinear evolution of the matter density. Most inflationary models in the very early universe predict an almost Gaussian distribution for the primordial metric perturbation, ζ. However, we argue that it is the Ricci curvature of comoving-orthogonal spatial hypersurfaces, R, that drives structure formation at large scales. We show how the nonlinear relation between the spatial curvature, R, and the metric perturbation, ζ, translates into a specific non-Gaussian contribution to the initial comoving matter density that we calculate for the simple case of an initially Gaussian ζ. Our analysis shows the nonlinear signature of Einstein's gravity in large-scale structure.
Arrieta-Camacho, Juan José; Biegler, Lorenz T
2005-12-01
Real time optimal guidance is considered for a class of low thrust spacecraft. In particular, nonlinear model predictive control (NMPC) is utilized for computing the optimal control actions required to transfer a spacecraft from a low Earth orbit to a mission orbit. The NMPC methodology presented is able to cope with unmodeled disturbances. The dynamics of the transfer are modeled using a set of modified equinoctial elements because they do not exhibit singularities for zero inclination and zero eccentricity. The idea behind NMPC is the repeated solution of optimal control problems; at each time step, a new control action is computed. The optimal control problem is solved using a direct method, fully discretizing the equations of motion. The large-scale nonlinear program resulting from the discretization procedure is solved using IPOPT, a primal-dual interior point algorithm. Stability and robustness characteristics of the NMPC algorithm are reviewed. A numerical example is presented that encourages further development of the proposed methodology: the transfer from low-Earth orbit to a Molniya orbit. PMID:16510409
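The receding-horizon idea described here (repeatedly solve a finite-horizon optimal control problem, apply only the first control) can be sketched on a toy double integrator. The paper itself uses modified equinoctial elements, full collocation, and IPOPT; none of those appear in this simplified sketch, which replaces them with single shooting and a finite-difference gradient descent:

```python
import numpy as np

dt = 0.1

def nmpc_step(x, f, cost, N, u_init, iters=200, lr=0.05):
    """Solve the horizon-N optimal control problem by direct single
    shooting with finite-difference gradient descent, and return the
    first control: the core receding-horizon (NMPC) loop. A toy stand-in
    for the collocation-plus-IPOPT machinery used in the paper."""
    u = u_init.copy()

    def J(u):
        xt, c = x.copy(), 0.0
        for k in range(N):
            c += cost(xt, u[k])
            xt = f(xt, u[k])
        return c + 10.0 * np.dot(xt, xt)         # terminal penalty

    for _ in range(iters):
        g = np.zeros_like(u)
        base = J(u)
        for k in range(N):                        # finite-difference gradient
            up = u.copy()
            up[k] += 1e-5
            g[k] = (J(up) - base) / 1e-5
        u -= lr * g
    return u[0], u

# closed-loop simulation on a double integrator: drive the state to the origin
f = lambda x, u: np.array([x[0] + dt * x[1], x[1] + dt * u])
cost = lambda x, u: np.dot(x, x) + 0.1 * u * u
x = np.array([1.0, 0.0])
u = np.zeros(10)
for _ in range(40):
    u0, u = nmpc_step(x, f, cost, 10, u)
    x = f(x, u0)                                  # apply only the first control
```

Warm-starting each solve from the previous control sequence, as done above, is what makes the repeated optimization cheap enough for real-time use.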
Limitations and tradeoffs in synchronization of large-scale networks with uncertain links
NASA Astrophysics Data System (ADS)
Diwadkar, Amit; Vaidya, Umesh
2016-04-01
The synchronization of nonlinear systems connected over large-scale networks has gained popularity in a variety of applications, such as power grids, sensor networks, and biology. Stochastic uncertainty in the interconnections is a ubiquitous phenomenon observed in these physical and biological networks. We provide a size-independent network sufficient condition for the synchronization of scalar nonlinear systems with stochastic linear interactions over large-scale networks. This sufficient condition, expressed in terms of nonlinear dynamics, the Laplacian eigenvalues of the nominal interconnections, and the variance and location of the stochastic uncertainty, allows us to define a synchronization margin. We provide an analytical characterization of important trade-offs between the internal nonlinear dynamics, network topology, and uncertainty in synchronization. For nearest neighbour networks, the existence of an optimal number of neighbours with a maximum synchronization margin is demonstrated. An analytical formula for the optimal gain that produces the maximum synchronization margin allows us to compare the synchronization properties of various complex network topologies.
Grid sensitivity capability for large scale structures
NASA Technical Reports Server (NTRS)
Nagendra, Gopal K.; Wallerstein, David V.
1989-01-01
The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.
Genetic Algorithm Based Neural Networks for Nonlinear Optimization
1994-09-28
This software develops a novel approach to nonlinear optimization using genetic-algorithm-based neural networks. To the best of our knowledge, this approach represents the first attempt at applying both neural network and genetic algorithm techniques to solve a nonlinear optimization problem. The approach constructs a neural network structure and an appropriately shaped energy surface whose minima correspond to optimal solutions of the problem. A genetic algorithm is employed to perform a parallel and powerful search of the energy surface.
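The record gives no algorithmic details, so the following is only a generic sketch of the broader idea of minimizing an "energy surface" with a genetic algorithm (truncation selection, blend crossover, Gaussian mutation); it is not the reported neural-network construction:

```python
import numpy as np

rng = np.random.default_rng(0)

def ga_minimize(energy, bounds, pop=60, gens=120, mut=0.1):
    """Generic real-coded GA sketch: truncation selection, blend
    crossover, Gaussian mutation; minima of the 'energy' function play
    the role of optimal solutions. Purely illustrative."""
    lo, hi = np.asarray(bounds, float).T
    P = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):
        E = np.array([energy(p) for p in P])
        elite = P[np.argsort(E)[: pop // 2]]           # keep the fitter half
        m = pop - len(elite)
        pairs = rng.integers(0, len(elite), (m, 2))
        alpha = rng.random((m, 1))                     # blend crossover
        kids = alpha * elite[pairs[:, 0]] + (1.0 - alpha) * elite[pairs[:, 1]]
        kids += rng.normal(0.0, mut, kids.shape)       # Gaussian mutation
        P = np.clip(np.vstack([elite, kids]), lo, hi)
    E = np.array([energy(p) for p in P])
    return P[np.argmin(E)]

# Himmelblau's function: four global minima with value 0
himmelblau = lambda p: (p[0] ** 2 + p[1] - 11.0) ** 2 + (p[0] + p[1] ** 2 - 7.0) ** 2
best = ga_minimize(himmelblau, [(-5.0, 5.0), (-5.0, 5.0)])
```

Each generation's fitness evaluations are independent, which is the "parallel and powerful search" the abstract alludes to.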
Large scale cluster computing workshop
Dane Skow; Alan Silverman
2002-12-23
Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near-term projects within High Energy Physics and other computing communities will deploy clusters of thousands of processors, used by hundreds to thousands of independent users. This will expand the reach in both dimensions by an order of magnitude from the current successful production facilities. The goals of this workshop were: (1) to determine what tools exist which can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes) and, by implication, to identify areas where some investment of money or effort is likely to be needed; (2) to compare and record experiences gained with such tools; (3) to produce a practical guide to all stages of planning, installing, building and operating a large computing cluster in HENP; (4) to identify and connect groups with similar interests within HENP and the larger clustering community.
Large Scale Magnetostrictive Valve Actuator
NASA Technical Reports Server (NTRS)
Richard, James A.; Holleman, Elizabeth; Eddleman, David
2008-01-01
Marshall Space Flight Center's Valves, Actuators and Ducts Design and Development Branch developed a large scale magnetostrictive valve actuator. The potential advantages of this technology are faster, more efficient valve actuators that consume less power, provide precise position control, and deliver higher flow rates than conventional solenoid valves. Magnetostrictive materials change dimensions when a magnetic field is applied; this property is referred to as magnetostriction. Magnetostriction is caused by the alignment of the magnetic domains in the material's crystalline structure with the applied magnetic field lines. Typically, the material changes shape by elongating in the axial direction and constricting in the radial direction, resulting in no net change in volume. All hardware fabrication and testing are complete. This paper discusses the potential applications of the technology, gives an overview of the as-built actuator design, describes problems uncovered during development testing, reviews test data and evaluates weaknesses of the design, and discusses areas for improvement in future work. This actuator holds promise as a low-power, high-load, proportionally controlled actuator for valves requiring 440 to 1500 newtons of load.
Implicit solution of large-scale radiation diffusion problems
Brown, P N; Graziani, F; Otero, I; Woodward, C S
2001-01-04
In this paper, we present an efficient solution approach for fully implicit, large-scale, nonlinear radiation diffusion problems. The fully implicit approach is compared to a semi-implicit solution method. Accuracy and efficiency are shown to be better for the fully implicit method on both one- and three-dimensional problems with tabular opacities taken from the LEOS opacity library.
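As a toy illustration of the fully implicit approach (backward Euler plus a Newton solve of the resulting nonlinear system), consider 1-D nonlinear diffusion with a u³ diffusivity, loosely mimicking the strong temperature dependence of radiation diffusion. The dense finite-difference Jacobian below stands in for the scalable linear-solver machinery production codes actually use:

```python
import numpy as np

def implicit_diffusion_step(u, dt, dx, iters=20):
    """One backward-Euler step of u_t = (D(u) u_x)_x with D(u) = u**3,
    solved by Newton's method on the full nonlinear system. Zero-flux
    boundaries; dense finite-difference Jacobian, fine for a sketch."""
    n = len(u)
    v = u.copy()

    def residual(v):
        D = v ** 3
        Dh = 0.5 * (D[1:] + D[:-1])               # face-centred diffusivity
        flux = Dh * (v[1:] - v[:-1]) / dx
        div = np.zeros(n)
        div[:-1] += flux / dx
        div[1:] -= flux / dx                      # zero flux at both ends
        return v - u - dt * div

    for _ in range(iters):
        r = residual(v)
        if np.linalg.norm(r) < 1e-12:
            break
        J = np.zeros((n, n))
        for j in range(n):                        # finite-difference Jacobian
            e = np.zeros(n)
            e[j] = 1e-7
            J[:, j] = (residual(v + e) - r) / 1e-7
        v = v - np.linalg.solve(J, r)
    return v

x = np.linspace(0.0, 2.0, 25)
u = 1.0 + np.exp(-5.0 * (x - 1.0) ** 2)
u1 = implicit_diffusion_step(u, dt=0.02, dx=x[1] - x[0])
```

Because the diffusivity is evaluated at the new time level, the step remains stable even when dt·D/dx² is far above the explicit limit, which is the point of the fully implicit treatment.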
Simulations of Large Scale Structures in Cosmology
NASA Astrophysics Data System (ADS)
Liao, Shihong
Large-scale structures are powerful probes for cosmology. Due to the long-range and nonlinear nature of gravity, the formation of cosmological structures is a very complicated problem, and the only known viable approach is cosmological N-body simulation. In this thesis, we use cosmological N-body simulations to study structure formation, particularly dark matter haloes' angular momenta and the dark matter velocity field. The origin and evolution of angular momenta are important ingredients in the formation and evolution of haloes and galaxies. We study the time evolution of the empirical angular momentum-mass relation for haloes to offer a more complete picture of its origin, its dependence on cosmological models, and its nonlinear evolution. We also show that haloes follow a simple universal specific angular momentum profile, which is useful in modelling haloes' angular momenta. The dark matter velocity field will become a powerful cosmological probe in the coming decades. However, theoretical predictions of the velocity field rely on N-body simulations and thus may be affected by numerical artefacts (e.g. finite box size, softening length and initial conditions). We study how such numerical effects affect the predicted pairwise velocities, and we propose a theoretical framework to understand and correct them. Our results will be useful for accurately comparing N-body simulations to observational data of pairwise velocities.
Large Scale Nanolaminate Deformable Mirror
Papavasiliou, A; Olivier, S; Barbee, T; Miles, R; Chang, K
2005-11-30
This work concerns the development of a technology that uses nanolaminate foils to form lightweight, deformable mirrors that are scalable over a wide range of mirror sizes. While MEMS-based deformable mirrors and spatial light modulators have considerably reduced the cost and increased the capabilities of adaptive optic systems, there has not been a way to utilize the advantages of lithography and batch fabrication to produce large-scale deformable mirrors. This technology is made scalable by using fabrication techniques and lithography that are not limited to the sizes of conventional MEMS devices. Like many MEMS devices, these mirrors use parallel plate electrostatic actuators. This technology replicates that functionality by suspending a horizontal piece of nanolaminate foil over an electrode on electroplated nickel posts. This actuator is attached, with another post, to another nanolaminate foil that acts as the mirror surface. Most MEMS devices are produced with integrated circuit lithography techniques that are capable of very small line widths but are not scalable to large sizes. This technology is very tolerant of lithography errors and can use coarser, printed circuit board lithography techniques that can be scaled to very large sizes. These mirrors use small, lithographically defined actuators and thin nanolaminate foils, allowing them to produce deformations over a large area while minimizing weight. This paper describes a staged program to develop this technology. First-principles models were developed to determine design parameters. Three stages of fabrication will be described, starting with a 3 x 3 device using conventional metal foils and epoxy and ending with a 10-across, all-metal device with nanolaminate mirror surfaces.
Large-Scale Information Systems
D. M. Nicol; H. R. Ammerlahn; M. E. Goldsby; M. M. Johnson; D. E. Rhodes; A. S. Yoshimura
2000-12-01
Large enterprises are ever more dependent on their Large-Scale Information Systems (LSIS), computer systems that are distinguished architecturally by distributed components: data sources, networks, computing engines, simulations, human-in-the-loop control and remote access stations. These systems provide such capabilities as workflow, data fusion and distributed database access. The Nuclear Weapons Complex (NWC) contains many examples of LSIS components, a fact that motivates this research. However, most LSIS in use grew up from collections of separate subsystems that were not designed to be components of an integrated system. For this reason, they are often difficult to analyze and control. The problem is made more difficult by the size of a typical system, its diversity of information sources, and the institutional complexities associated with its geographic distribution across the enterprise. Moreover, there is no integrated approach for analyzing or managing such systems. Indeed, integrated development of LSIS is an active area of academic research. This work developed such an approach by simulating the various components of the LSIS and allowing the simulated components to interact with real LSIS subsystems. This research demonstrated two benefits. First, applying it to a particular LSIS provided a thorough understanding of the interfaces between the system's components. Second, it demonstrated how more rapid and detailed answers could be obtained to questions significant to the enterprise by interacting with the relevant LSIS subsystems through simulated components designed with those questions in mind. In a final, added phase of the project, investigations were made on extending this research to wireless communication networks in support of telemetry applications.
Liu, Derong; Wang, Ding; Li, Hongliang
2014-02-01
In this paper, using a neural-network-based online learning optimal control approach, a novel decentralized control strategy is developed to stabilize a class of continuous-time nonlinear interconnected large-scale systems. First, optimal controllers of the isolated subsystems are designed with cost functions reflecting the bounds of the interconnections. Then, it is proven that the decentralized control strategy of the overall system can be established by adding appropriate feedback gains to the optimal control policies of the isolated subsystems. Next, an online policy iteration algorithm is presented to solve the Hamilton-Jacobi-Bellman equations related to the optimal control problem. By constructing a set of critic neural networks, the cost functions can be obtained approximately, followed by the control policies. Furthermore, the dynamics of the estimation errors of the critic networks are verified to be uniformly ultimately bounded. Finally, a simulation example is provided to illustrate the effectiveness of the present decentralized control scheme. PMID:24807039
On optimal nonlinear estimation. I - Continuous observation.
NASA Technical Reports Server (NTRS)
Lo, J. T.
1973-01-01
A generalization of Bucy's (1965) representation theorem is obtained under very weak hypotheses. The generalized theorem is shown to play the same role in the case of general optimal estimation for an arbitrary random process as does the Bucy theorem in the case of optimal filtering for a diffusion process. At least for the models considered, it is pointed out that all sequential estimation problems can be reduced to the problem of filtering. Hence, filtering theory is seen to represent the core of estimation theory, and is believed to define the direction in which future research should be focused.
Asynchronous parallel pattern search for nonlinear optimization
P. D. Hough; T. G. Kolda; V. J. Torczon
2000-01-01
Parallel pattern search (PPS) can be quite useful for engineering optimization problems characterized by a small number of variables (say 10-50) and by expensive objective function evaluations such as complex simulations that take from minutes to hours to run. However, PPS, which was originally designed for execution on homogeneous and tightly coupled parallel machines, is not well suited to the more heterogeneous, loosely coupled, and even fault-prone parallel systems available today. Specifically, PPS is hindered by synchronization penalties and cannot recover in the event of a failure. The authors introduce a new asynchronous and fault-tolerant parallel pattern search (APPS) method and demonstrate its effectiveness on both simple test problems and some engineering optimization problems.
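The polling step that PPS parallelizes (and that the asynchronous variant lets complete out of order) reduces, in serial form, to classic compass search; a minimal sketch:

```python
import numpy as np

def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=100000):
    """Serial compass-search sketch: poll f at the 2n coordinate pattern
    points around x; accept any improving point, otherwise halve the
    step. In the parallel variants the 2n poll evaluations are farmed
    out to separate processors. Derivative-free, so suited to expensive
    black-box simulations."""
    x = np.asarray(x0, float)
    fx = f(x)
    n = len(x)
    dirs = np.vstack([np.eye(n), -np.eye(n)])    # poll directions
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for d in dirs:
            trial = x + step * d
            ft = f(trial)
            if ft < fx:
                x, fx, improved = trial, ft, True
                break
        if not improved:
            step *= 0.5                           # contract the pattern
    return x

x = pattern_search(lambda p: (p[0] - 2.0) ** 2 + 3.0 * (p[1] + 1.0) ** 2, [0.0, 0.0])
```

Since only function comparisons are used, a poll evaluation that arrives late (or never, after a node failure) can simply be ignored, which is what makes the asynchronous, fault-tolerant extension natural.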
Optimal state discrimination and unstructured search in nonlinear quantum mechanics
NASA Astrophysics Data System (ADS)
Childs, Andrew M.; Young, Joshua
2016-02-01
Nonlinear variants of quantum mechanics can solve tasks that are impossible in standard quantum theory, such as perfectly distinguishing nonorthogonal states. Here we derive the optimal protocol for distinguishing two states of a qubit using the Gross-Pitaevskii equation, a model of nonlinear quantum mechanics that arises as an effective description of Bose-Einstein condensates. Using this protocol, we present an algorithm for unstructured search in the Gross-Pitaevskii model, obtaining an exponential improvement over a previous algorithm of Meyer and Wong. This result establishes a limitation on the effectiveness of the Gross-Pitaevskii approximation. More generally, we demonstrate similar behavior under a family of related nonlinearities, giving evidence that the ability to quickly discriminate nonorthogonal states and thereby solve unstructured search is a generic feature of nonlinear quantum mechanics.
Nonlinear optimization with linear constraints using a projection method
NASA Technical Reports Server (NTRS)
Fox, T.
1982-01-01
Nonlinear optimization problems that are encountered in science and industry are examined. A method of projecting the gradient vector onto a set of linear constraints is developed, and a program that uses this method is presented. The algorithm that generates this projection matrix is based on the Gram-Schmidt method and overcomes some of the objections to the Rosen projection method.
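The projection described, built from a Gram-Schmidt orthonormalization of the constraint normals, amounts to forming P g = (I - Q Q^T) g; a small sketch (the helper name is ours, not from the paper):

```python
import numpy as np

def project_gradient(g, A):
    """Project gradient g onto the null space of the linear constraint
    rows of A, so a step along -P g preserves A x = b. Built via
    Gram-Schmidt orthonormalization of the constraint normals, in the
    spirit of Rosen-type projection methods."""
    q_basis = []
    for a in np.asarray(A, float):
        for q in q_basis:                  # subtract earlier components
            a = a - np.dot(q, a) * q
        norm = np.linalg.norm(a)
        if norm > 1e-12:                   # skip linearly dependent rows
            q_basis.append(a / norm)
    p = np.asarray(g, float).copy()
    for q in q_basis:                      # P g = (I - Q Q^T) g
        p = p - np.dot(q, p) * q
    return p

# gradient (1, 2, 3) projected onto the plane x + y + z = const
p = project_gradient(np.array([1.0, 2.0, 3.0]), np.array([[1.0, 1.0, 1.0]]))
```

Dropping near-zero vectors during the orthonormalization handles redundant (linearly dependent) constraints, one of the practical objections to forming the projection matrix directly.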
A relativistic signature in large-scale structure
NASA Astrophysics Data System (ADS)
Bartolo, Nicola; Bertacca, Daniele; Bruni, Marco; Koyama, Kazuya; Maartens, Roy; Matarrese, Sabino; Sasaki, Misao; Verde, Licia; Wands, David
2016-09-01
In General Relativity, the constraint equation relating metric and density perturbations is inherently nonlinear, leading to an effective non-Gaussianity in the dark matter density field on large scales, even if the primordial metric perturbation is Gaussian. Intrinsic non-Gaussianity in the large-scale dark matter overdensity in GR is real and physical. However, the variance smoothed on a local physical scale is not correlated with the large-scale curvature perturbation, so that there is no relativistic signature in the galaxy bias when using the simplest model of bias. It is an open question whether observable mass proxies such as luminosity or weak lensing correspond directly to the physical mass in the simple halo bias model. If not, there may be observables that encode this relativistic signature.
Supporting large-scale computational science
Musick, R., LLNL
1998-02-19
Business needs have driven the development of commercial database systems since their inception. As a result, there has been a strong focus on supporting many users, minimizing the potential corruption or loss of data, and maximizing performance metrics like transactions per second, or TPC-C and TPC-D results. It turns out that these optimizations have little to do with the needs of the scientific community, and in particular have little impact on improving the management and use of large-scale high-dimensional data. At the same time, there is an unanswered need in the scientific community for many of the benefits offered by a robust DBMS. For example, tying an ad-hoc query language such as SQL together with a visualization toolkit would be a powerful enhancement to current capabilities. Unfortunately, there has been little emphasis or discussion in the VLDB community on this mismatch over the last decade. The goal of the paper is to identify the specific issues that need to be resolved before large-scale scientific applications can make use of DBMS products. This topic is addressed in the context of an evaluation of commercial DBMS technology applied to the exploration of data generated by the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). The paper describes the data being generated for ASCI as well as current capabilities for interacting with and exploring this data. The attraction of applying standard DBMS technology to this domain is discussed, as well as the technical and business issues that currently make this an infeasible solution.
Route Monopoly and Optimal Nonlinear Pricing
NASA Technical Reports Server (NTRS)
Tournut, Jacques
2003-01-01
To cope with air traffic growth and congested airports, two solutions are apparent on the supply side: 1) use larger aircraft in the hub and spoke system; or 2) develop new routes through secondary airports. An enlarged route system through secondary airports may increase the proportion of route monopolies in the air transport market. The monopoly optimal nonlinear pricing policy is well known in the case of one dimension (one instrument, one characteristic) but not in the case of several dimensions. This paper explores the robustness of the one-dimensional screening model with respect to increasing the number of instruments and the number of characteristics. The objective of this paper is then to link and fill the gap in both literatures. One of the merits of the screening model has been to show that a great variety of economic questions (nonlinear pricing, product line choice, auction design, income taxation, regulation...) could be handled within the same framework. We study a case of nonlinear pricing (2 instruments (2 routes on which the airline provides customers with services), 2 characteristics (demand of services on these routes), and two values per characteristic (low and high demand of services on these routes)) and we show that none of the conclusions of the one-dimensional analysis remain valid. In particular, the upward incentive compatibility constraint may be binding at the optimum. As a consequence, there may be distortion at the top of the distribution. In addition, we show that the optimal solution often requires a form of bundling, we explain the distortions explicitly, and we show that it is sometimes optimal for the monopolist to produce only one good (instead of two) or to exclude some buyers from the market. This means that the monopolist cannot fully apply his monopoly power and is better off selling both goods independently. We then define all the possible solutions in the case of a quadratic cost function for a uniform
NASA Technical Reports Server (NTRS)
Liu, J. T. C.
1988-01-01
The physical problem of large-scale coherent structures in real, developing free turbulent shear flows is discussed from the point of view of a broader interpretation of the nonlinear aspects of hydrodynamic stability. Variations on the Amsden and Harlow problem are considered, and the role of linear theory in nonlinear problems is addressed. Spatially developing two-dimensional coherent structures and three-dimensional nonlinear effects in large-scale coherent mode interactions are considered.
Fully localised nonlinear energy growth optimals in pipe flow
NASA Astrophysics Data System (ADS)
Pringle, Chris C. T.; Willis, Ashley P.; Kerswell, Rich R.
2015-06-01
A new, fully localised, energy growth optimal is found over large times and in long pipe domains at a given mass flow rate. This optimal emerges at a threshold disturbance energy below which a nonlinear version of the known (streamwise-independent) linear optimal [P. J. Schmid and D. S. Henningson, "Optimal energy density growth in Hagen-Poiseuille flow," J. Fluid Mech. 277, 192-225 (1994)] is selected and appears to remain the optimal up until the critical energy at which transition is triggered. The form of this optimal is similar to that found in short pipes [Pringle et al., "Minimal seeds for shear flow turbulence: Using nonlinear transient growth to touch the edge of chaos," J. Fluid Mech. 702, 415-443 (2012)], but now with full localisation in the streamwise direction. This fully localised optimal perturbation represents the best approximation yet of the minimal seed (the smallest perturbation which is arbitrarily close to states capable of triggering a turbulent episode) for "real" (laboratory) pipe flows. Dependence of the optimal with respect to several parameters has been computed and establishes that the structure is robust.
Wu, Zong-Sheng; Fu, Wei-Ping; Xue, Ru
2015-01-01
The teaching-learning-based optimization (TLBO) algorithm, proposed in recent years, simulates the teaching-learning process of a classroom to solve global optimization of multidimensional, linear, and nonlinear problems over continuous spaces. In this paper, an improved teaching-learning-based optimization algorithm is presented, called the nonlinear inertia weighted teaching-learning-based optimization (NIWTLBO) algorithm. This algorithm introduces a nonlinear inertia weighted factor into the basic TLBO to control the memory rate of learners, and uses a dynamic inertia weighted factor to replace the original random number in the teacher and learner phases. The proposed algorithm is tested on a number of benchmark functions, and its performance is compared against the basic TLBO and some other well-known optimization algorithms. The experimental results show that the proposed algorithm has a faster convergence rate and better performance than the basic TLBO and some other algorithms as well.
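A runnable sketch of an inertia-weighted TLBO on the sphere function is given below. The abstract does not specify the exact weight schedule or where the weight enters the update, so the quadratic decay w(t) = (1 - t/T)^2 and the update forms here are assumptions for illustration, not the paper's NIWTLBO.

```python
import numpy as np

def niw(t, t_max):
    """Illustrative nonlinear inertia weight, decaying from 1 toward 0
    (an assumed form; the paper's schedule may differ)."""
    return (1.0 - t / t_max) ** 2

def tlbo_minimize(f, dim, n=20, iters=100, lo=-5.0, hi=5.0, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lo, hi, (n, dim))
    fit = np.apply_along_axis(f, 1, pop)
    for t in range(iters):
        w = niw(t, iters)
        # --- teacher phase: learn from the best solution ---
        teacher = pop[fit.argmin()]
        mean = pop.mean(axis=0)
        Tf = rng.integers(1, 3, (n, 1))          # teaching factor in {1, 2}
        cand = np.clip(w * pop + rng.random((n, dim)) * (teacher - Tf * mean), lo, hi)
        cfit = np.apply_along_axis(f, 1, cand)
        better = cfit < fit                       # greedy selection
        pop[better], fit[better] = cand[better], cfit[better]
        # --- learner phase: learn from a random classmate ---
        j = rng.permutation(n)
        sign = np.where((fit < fit[j])[:, None], 1.0, -1.0)
        cand = np.clip(w * pop + rng.random((n, dim)) * sign * (pop - pop[j]), lo, hi)
        cfit = np.apply_along_axis(f, 1, cand)
        better = cfit < fit
        pop[better], fit[better] = cand[better], cfit[better]
    return pop[fit.argmin()], float(fit.min())

# Demo on the sphere function, whose minimum is 0 at the origin
x, fx = tlbo_minimize(lambda v: float(np.sum(v * v)), dim=5)
```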
NASA Technical Reports Server (NTRS)
Nguyen, Duc T.
1990-01-01
Practical engineering applications can often be formulated as constrained optimization problems. There are several solution algorithms for solving a constrained optimization problem. One approach is to convert a constrained problem into a series of unconstrained problems. Furthermore, unconstrained solution algorithms can be used as part of the constrained solution algorithms. Structural optimization is an iterative process: one starts with an initial design, and a finite element structural analysis is then performed to calculate the response of the system (such as displacements, stresses, eigenvalues, etc.). Based upon the sensitivity information on the objective and constraint functions, an optimizer such as ADS or IDESIGN can be used to find the new, improved design. For the structural analysis phase, the equation solver for the system of simultaneous linear equations plays a key role, since it is needed for static, eigenvalue, and dynamic analysis alike. For practical, large-scale structural analysis-synthesis applications, computational time can be excessively large. Thus, it is necessary to have a new structural analysis-synthesis code which employs new solution algorithms to exploit both the parallel and vector capabilities offered by modern, high-performance computers such as the Convex, Cray-2 and Cray-YMP. The objective of this research project is, therefore, to incorporate the latest developments in the parallel-vector equation solver, PVSOLVE, into a widely used finite-element production code, such as SAP-4. Furthermore, several nonlinear unconstrained optimization subroutines have also been developed and tested in a parallel computing environment. The unconstrained optimization subroutines are not only useful in their own right, but they can also be incorporated into a more popular constrained optimization code, such as ADS.
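The conversion of a constrained problem into a series of unconstrained problems, mentioned above, is classically done with a penalty method. Below is a minimal quadratic-penalty sketch; the example problem, step sizes, and penalty schedule are invented for illustration, and production optimizers such as ADS or IDESIGN use far more sophisticated strategies.

```python
import numpy as np

def penalty_minimize(f_grad, c, c_grad, x0, mus=(1.0, 10.0, 100.0, 1000.0),
                     steps=500):
    """Solve min f(x) s.t. c(x) = 0 by a sequence of unconstrained
    problems min f(x) + (mu/2) c(x)^2 with increasing penalty mu.
    Each subproblem is solved by plain gradient descent here; the
    step size below is tuned for this quadratic example only.
    """
    x = np.asarray(x0, float)
    for mu in mus:
        lr = 0.5 / (1.0 + mu)                 # safe step for this example
        for _ in range(steps):
            x = x - lr * (f_grad(x) + mu * c(x) * c_grad(x))
    return x

# Example: min (x1-2)^2 + (x2-1)^2  s.t.  x1 + x2 = 2
# (constrained optimum at x = (1.5, 0.5))
f_grad = lambda x: 2.0 * (x - np.array([2.0, 1.0]))
c      = lambda x: x[0] + x[1] - 2.0
c_grad = lambda x: np.array([1.0, 1.0])
x = penalty_minimize(f_grad, c, c_grad, [0.0, 0.0])
```

As mu grows, the unconstrained minimizers approach the constrained optimum; in practice the previous solution warm-starts each new subproblem, as done implicitly here.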
Optimal bipedal interactions with dynamic terrain: synthesis and analysis via nonlinear programming
NASA Astrophysics Data System (ADS)
Hubicki, Christian; Goldman, Daniel; Ames, Aaron
In terrestrial locomotion, gait dynamics and motor control behaviors are tuned to interact efficiently and stably with the dynamics of the terrain (i.e. terradynamics). This controlled interaction must be particularly thoughtful in bipeds, as their reduced contact points render them highly susceptible to falls. While bipedalism under rigid terrain assumptions is well-studied, insights for two-legged locomotion on soft terrain, such as sand and dirt, are comparatively sparse. We seek an understanding of how biological bipeds stably and economically negotiate granular media, with an eye toward imbuing those abilities in bipedal robots. We present a trajectory optimization method for controlled systems subject to granular intrusion. By formulating a large-scale nonlinear program (NLP) with reduced-order resistive force theory (RFT) models and jamming cone dynamics, the optimized motions are informed and shaped by the dynamics of the terrain. Using a variant of direct collocation methods, we can express all optimization objectives and constraints in closed-form, resulting in rapid solving by standard NLP solvers, such as IPOPT. We employ this tool to analyze emergent features of bipedal locomotion in granular media, with an eye toward robotic implementation.
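The pipeline described above (transcribe an optimal control problem by direct collocation, then hand the result to an NLP solver) can be illustrated on a toy problem. The sketch below transcribes a minimum-effort double integrator with trapezoidal collocation; because this instance is linear-quadratic, the transcribed problem collapses to a single KKT linear solve rather than a call to IPOPT, but the transcription pattern (stacked states and controls, one defect constraint per interval) is the same.

```python
import numpy as np

# Minimum-effort double integrator: minimize ∫ u(t)^2 dt subject to
# x1' = x2, x2' = u, x(0) = (0, 0), x(1) = (1, 0).
# Analytic optimum: u(t) = 6 - 12 t.
N = 50                       # collocation intervals
h = 1.0 / N
n = 3 * (N + 1)              # decision variables: x1, x2, u at each node
i1 = lambda k: k
i2 = lambda k: (N + 1) + k
iu = lambda k: 2 * (N + 1) + k

# Quadratic objective: trapezoid rule applied to ∫ u^2
w = np.full(N + 1, h); w[0] = w[-1] = h / 2
Q = np.zeros((n, n))
for k in range(N + 1):
    Q[iu(k), iu(k)] = 2.0 * w[k]

# Trapezoidal defect constraints plus boundary conditions, A z = b
rows, b = [], []
for k in range(N):
    r = np.zeros(n)                          # x1 defect on interval k
    r[i1(k + 1)], r[i1(k)] = 1.0, -1.0
    r[i2(k)] -= h / 2; r[i2(k + 1)] -= h / 2
    rows.append(r); b.append(0.0)
    r = np.zeros(n)                          # x2 defect on interval k
    r[i2(k + 1)], r[i2(k)] = 1.0, -1.0
    r[iu(k)] -= h / 2; r[iu(k + 1)] -= h / 2
    rows.append(r); b.append(0.0)
for idx, val in [(i1(0), 0.0), (i2(0), 0.0), (i1(N), 1.0), (i2(N), 0.0)]:
    r = np.zeros(n); r[idx] = 1.0
    rows.append(r); b.append(val)
A, b = np.array(rows), np.array(b)

# Solve the KKT system of the equality-constrained QP directly
m = A.shape[0]
K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
z = np.linalg.solve(K, np.concatenate([np.zeros(n), b]))
u = z[2 * (N + 1):3 * (N + 1)]
# u should start near +6 and end near -6, matching u(t) = 6 - 12 t
```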
Lagrangian space consistency relation for large scale structure
Horn, Bart; Hui, Lam; Xiao, Xiao
2015-09-01
Consistency relations, which relate the squeezed limit of an (N+1)-point correlation function to an N-point function, are non-perturbative symmetry statements that hold even if the associated high momentum modes are deep in the nonlinear regime and astrophysically complex. Recently, Kehagias and Riotto and Peloso and Pietroni discovered a consistency relation applicable to large scale structure. We show that this can be recast into a simple physical statement in Lagrangian space: that the squeezed correlation function (suitably normalized) vanishes. This holds regardless of whether the correlation observables are at the same time or not, and regardless of whether multiple-streaming is present. The simplicity of this statement suggests that an analytic understanding of large scale structure in the nonlinear regime may be particularly promising in Lagrangian space.
Automating large-scale reactor systems
Kisner, R.A.
1985-01-01
This paper conveys a philosophy for developing automated large-scale control systems that behave in an integrated, intelligent, flexible manner. Methods for operating large-scale systems under varying degrees of equipment degradation are discussed, and a design approach that separates the effort into phases is suggested. 5 refs., 1 fig.
Nonlinear Global Optimization Using Curdling Algorithm in Mathematica Environment
Craig Loehle, Ph. D.
1997-08-05
An algorithm for nonlinear optimization, using a derivative-free grid-refinement approach, was developed and implemented in software as OPTIMIZE. This approach overcomes a number of deficiencies in existing approaches. Most notably, it finds extremal regions rather than only single extremal points. The program is interactive and collects information on control parameters and constraints using menus. For up to two (and potentially three) dimensions, function convergence is displayed graphically. Because the algorithm does not compute derivatives, gradients, or vectors, it is numerically stable. It can find all the roots of a polynomial in one pass. It is an inherently parallel algorithm. OPTIMIZE-M is a modification of OPTIMIZE designed for use within the Mathematica environment created by Wolfram Research.
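The grid-refinement idea can be sketched in a few lines. This is a simplified one-dimensional caricature, not OPTIMIZE itself: in particular it tracks only a single best cell, whereas the curdling algorithm retains extremal regions and can therefore locate multiple extrema, or all roots of a polynomial, in one pass.

```python
import numpy as np

def grid_refine_min(f, lo, hi, levels=12, pts=11):
    """Derivative-free grid-refinement minimization in one dimension:
    sample a uniform grid, then repeatedly shrink the search window
    around the best grid point. No gradients are ever computed."""
    a, b = lo, hi
    for _ in range(levels):
        xs = np.linspace(a, b, pts)
        x_best = xs[np.argmin([f(x) for x in xs])]
        step = (b - a) / (pts - 1)
        a, b = x_best - step, x_best + step   # refine around the best point
    return float(x_best)

# Demo: quadratic with minimum at x = 1.234
x_star = grid_refine_min(lambda x: (x - 1.234) ** 2, -5.0, 5.0)
```

Each level shrinks the window by a factor of (pts - 1)/2, so the method converges geometrically on well-behaved functions; keeping several best cells per level instead of one would recover the extremal-region behavior described above.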
Design of Life Extending Controls Using Nonlinear Parameter Optimization
NASA Technical Reports Server (NTRS)
Lorenzo, Carl F.; Holmes, Michael S.; Ray, Asok
1998-01-01
This report presents the conceptual development of a life extending control system where the objective is to achieve high performance and structural durability of the plant. A life extending controller is designed for a reusable rocket engine via damage mitigation in both the fuel and oxidizer turbines while achieving high performance for transient responses of the combustion chamber pressure and the O2/H2 mixture ratio. This design approach makes use of a combination of linear and nonlinear controller synthesis techniques and also allows adaptation of the life extending controller module to augment a conventional performance controller of a rocket engine. The nonlinear aspect of the design is achieved using nonlinear parameter optimization of a prescribed control structure.
Large scale mechanical metamaterials as seismic shields
NASA Astrophysics Data System (ADS)
Miniaci, Marco; Krushynska, Anastasiia; Bosia, Federico; Pugno, Nicola M.
2016-08-01
Earthquakes represent one of the most catastrophic natural events affecting mankind. At present, a universally accepted risk mitigation strategy for seismic events remains to be proposed. Most approaches are based on vibration isolation of structures rather than on the remote shielding of incoming waves. In this work, we propose a novel approach to the problem and discuss the feasibility of a passive isolation strategy for seismic waves based on large-scale mechanical metamaterials, including for the first time numerical analysis of both surface and guided waves and of soil dissipation effects, using full 3D simulations. The study focuses on realistic structures that can be effective in frequency ranges of interest for seismic waves, and optimal design criteria are provided by exploring different metamaterial configurations, combining phononic crystals and locally resonant structures, and different ranges of mechanical properties. Dispersion analysis and full-scale 3D transient wave transmission simulations are carried out on finite-size systems to assess the seismic wave amplitude attenuation in realistic conditions. Results reveal that both surface and bulk seismic waves can be considerably attenuated, making this strategy viable for the protection of civil structures against seismic risk. The proposed remote shielding approach could open up new perspectives in the field of seismology and in related areas of low-frequency vibration damping or blast protection.
Continuation and bifurcation analysis of large-scale dynamical systems with LOCA.
Salinger, Andrew Gerhard; Phipps, Eric Todd; Pawlowski, Roger Patrick
2010-06-01
Dynamical systems theory provides a powerful framework for understanding the behavior of complex evolving systems. However applying these ideas to large-scale dynamical systems such as discretizations of multi-dimensional PDEs is challenging. Such systems can easily give rise to problems with billions of dynamical variables, requiring specialized numerical algorithms implemented on high performance computing architectures with thousands of processors. This talk will describe LOCA, the Library of Continuation Algorithms, a suite of scalable continuation and bifurcation tools optimized for these types of systems that is part of the Trilinos software collection. In particular, we will describe continuation and bifurcation analysis techniques designed for large-scale dynamical systems that are based on specialized parallel linear algebra methods for solving augmented linear systems. We will also discuss several other Trilinos tools providing nonlinear solvers (NOX), eigensolvers (Anasazi), iterative linear solvers (AztecOO and Belos), preconditioners (Ifpack, ML, Amesos) and parallel linear algebra data structures (Epetra and Tpetra) that LOCA can leverage for efficient and scalable analysis of large-scale dynamical systems.
OPT++: An object-oriented class library for nonlinear optimization
Meza, J.C.
1994-03-01
Object-oriented programming is becoming a popular way of developing new software. The promise of this new programming paradigm is that software developed through these concepts will be more reliable and easier to re-use, thereby decreasing the time and cost of the software development cycle. This report describes the development of a C++ class library for nonlinear optimization. Using object-oriented techniques, this new library was designed so that the interface is easy to use while being general enough so that new optimization algorithms can be added easily to the existing framework.
Global nonlinear optimization of spacecraft protective structures design
NASA Technical Reports Server (NTRS)
Mog, R. A.; Lovett, J. N., Jr.; Avans, S. L.
1990-01-01
The global optimization of protective structural designs for spacecraft subject to hypervelocity meteoroid and space debris impacts is presented. This nonlinear problem is first formulated for weight minimization of the space station core module configuration using the Nysmith impact predictor. Next, the equivalence and uniqueness of local and global optima is shown using properties of convexity. This analysis results in a new feasibility condition for this problem. The solution existence is then shown, followed by a comparison of optimization techniques. Finally, a sensitivity analysis is presented to determine the effects of variations in the systemic parameters on optimal design. The results show that global optimization of this problem is unique and may be achieved by a number of methods, provided the feasibility condition is satisfied. Furthermore, module structural design thicknesses and weight increase with increasing projectile velocity and diameter and decrease with increasing separation between bumper and wall for the Nysmith predictor.
Passive and Active Vibrations Allow Self-Organization in Large-Scale Electromechanical Systems
NASA Astrophysics Data System (ADS)
Buscarino, Arturo; Famoso, Carlo; Fortuna, Luigi; Frasca, Mattia
2016-06-01
In this paper, the role of passive and active vibrations in the control of nonlinear large-scale electromechanical systems is investigated. The mathematical model of the system is discussed, and detailed experimental results are shown in order to prove that coupling the effects of feedback with vibrations elicited by proper control signals makes it possible to regularize imperfect, uncertain large-scale systems.
Is the universe homogeneous on large scale?
NASA Astrophysics Data System (ADS)
Zhu, Xingfen; Chu, Yaoquan
Whether the distribution of matter in the universe is homogeneous or fractal on large scales has recently been vigorously debated in observational cosmology. Pietronero and his co-workers have strongly advocated that the fractal behaviour in the galaxy distribution extends to the largest scales observed (≈1000 h⁻¹ Mpc) with the fractal dimension D ≈ 2. Most cosmologists who hold the standard model, however, insist that the universe is homogeneous on large scales. The answer to whether the universe is homogeneous on large scales awaits the results of the next generation of galaxy redshift surveys.
Large-scale regions of antimatter
Grobov, A. V. Rubin, S. G.
2015-07-15
A modified mechanism for the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflationary era.
Structural Optimization for Reliability Using Nonlinear Goal Programming
NASA Technical Reports Server (NTRS)
El-Sayed, Mohamed E.
1999-01-01
This report details the development of a reliability based multi-objective design tool for solving structural optimization problems. Based on two different optimization techniques, namely sequential unconstrained minimization and nonlinear goal programming, the developed design method has the capability to take into account the effects of variability on the proposed design through a user specified reliability design criterion. In its sequential unconstrained minimization mode, the developed design tool uses a composite objective function, in conjunction with weight ordered design objectives, in order to take into account conflicting and multiple design criteria. Multiple design criteria of interest include structural weight, load-induced stress and deflection, and mechanical reliability. The nonlinear goal programming mode, on the other hand, provides for a design method that eliminates the difficulty of having to define an objective function and constraints, while at the same time having the capability of handling rank ordered design objectives or goals. For simulation purposes the design of a pressure vessel cover plate was undertaken as a test bed for the newly developed design tool. The formulation of this structural optimization problem into sequential unconstrained minimization and goal programming form is presented. The resulting optimization problem was solved using: (i) the linear extended interior penalty function method algorithm; and (ii) Powell's conjugate directions method. Both single and multi-objective numerical test cases are included, demonstrating the design tool's capabilities as it applies to this design problem.
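The cover-plate example lends itself to a small sketch of weight-ordered goal programming. The model below is entirely hypothetical (invented responses, targets, and rank weights, and a crude 1-D scan in place of the penalty-function and conjugate-direction solvers used in the report); it only illustrates how rank-ordered weights make higher-priority goals dominate the composite objective.

```python
import numpy as np

# Hypothetical cover-plate sizing: choose a thickness t (cm) to meet
# rank-ordered goals. All models and numbers are invented for illustration.
def weight(t):     return 7.8 * t          # kg
def stress(t):     return 120.0 / t**2     # MPa
def deflection(t): return 5.0 / t**3       # mm

goals = [                        # (response, target, rank weight)
    (stress,     200.0, 1e4),    # highest-priority goal
    (deflection,   2.0, 1e2),
    (weight,      10.0, 1.0),    # lowest-priority goal
]

def goal_objective(t):
    """Composite objective: weighted sum of positive deviations from
    the goals, with weights ordered so higher-rank goals dominate."""
    return sum(wt * max(0.0, g(t) - target) for g, target, wt in goals)

ts = np.linspace(0.5, 3.0, 2501)                 # simple 1-D scan
t_best = float(ts[np.argmin([goal_objective(t) for t in ts])])
# The lowest-ranked goal is sacrificed first: the optimizer accepts a
# plate slightly heavier than 10 kg to keep stress and deflection in bounds.
```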
Optimal analytic method for the nonlinear Hasegawa-Mima equation
NASA Astrophysics Data System (ADS)
Baxter, Mathew; Van Gorder, Robert A.; Vajravelu, Kuppalapalle
2014-05-01
The Hasegawa-Mima equation is a nonlinear partial differential equation that describes the electric potential due to a drift wave in a plasma. In the present paper, we apply the method of homotopy analysis to a slightly more general Hasegawa-Mima equation, which accounts for hyper-viscous damping or viscous dissipation. First, we outline the method for the general initial/boundary value problem over a compact rectangular spatial domain. We use a two-stage method, where both the convergence control parameter and the auxiliary linear operator are optimally selected to minimize the residual error due to the approximation. To do the latter, we consider a family of operators parameterized by a constant which gives the decay rate of the solutions. After outlining the general method, we consider a number of concrete examples in order to demonstrate the utility of this approach. The results enable us to study properties of the initial/boundary value problem for the generalized Hasegawa-Mima equation. In several cases considered, we are able to obtain solutions with extremely small residual errors after relatively few iterations are computed (residual errors on the order of 10^-15 are found in multiple cases after only three iterations). The results demonstrate that selecting a parameterized auxiliary linear operator can be extremely useful for minimizing residual errors when used concurrently with the optimal homotopy analysis method, suggesting that this approach can prove useful for a number of nonlinear partial differential equations arising in physics and nonlinear mechanics.
A hybrid nonlinear programming method for design optimization
NASA Technical Reports Server (NTRS)
Rajan, S. D.
1986-01-01
Solutions to engineering design problems formulated as nonlinear programming (NLP) problems usually require the use of more than one optimization technique. Moreover, the interaction between the user (analysis/synthesis) program and the NLP system can lead to interface, scaling, or convergence problems. An NLP solution system is presented that seeks to solve these problems by providing a programming system to ease the user-system interface. A simple set of rules is used to select an optimization technique or to switch from one technique to another in an attempt to detect, diagnose, and solve some potential problems. Numerical examples involving finite element based optimal design of space trusses and rotor bearing systems are used to illustrate the applicability of the proposed methodology.
Topology optimization for nonlinear dynamic problems: Considerations for automotive crashworthiness
NASA Astrophysics Data System (ADS)
Kaushik, Anshul; Ramani, Anand
2014-04-01
Crashworthiness of automotive structures is most often engineered after an optimal topology has been arrived at using other design considerations. This study is an attempt to incorporate crashworthiness requirements upfront in the topology synthesis process using a mathematically consistent framework. It proposes the use of equivalent linear systems from the nonlinear dynamic simulation in conjunction with a discrete-material topology optimizer. Velocity and acceleration constraints are consistently incorporated in the optimization set-up. Issues specific to crash problems due to the explicit solution methodology employed, nature of the boundary conditions imposed on the structure, etc. are discussed and possible resolutions are proposed. A demonstration of the methodology on two-dimensional problems that address some of the structural requirements and the types of loading typical of frontal and side impact is provided in order to show that this methodology has the potential for topology synthesis incorporating crashworthiness requirements.
Spin glasses and nonlinear constraints in portfolio optimization
NASA Astrophysics Data System (ADS)
Andrecut, M.
2014-01-01
We discuss the portfolio optimization problem with the obligatory deposits constraint. Recently it has been shown that as a consequence of this nonlinear constraint, the solution consists of an exponentially large number of optimal portfolios, completely different from each other, and extremely sensitive to any changes in the input parameters of the problem, making the concept of rational decision making questionable. Here we reformulate the problem using a quadratic obligatory deposits constraint, and we show that from the physics point of view, finding an optimal portfolio amounts to calculating the mean-field magnetizations of a random Ising model with the constraint of a constant magnetization norm. We show that the model reduces to an eigenproblem, with 2N solutions, where N is the number of assets defining the portfolio. Also, in order to illustrate our results, we present a detailed numerical example of a portfolio of several risky common stocks traded on the Nasdaq Market.
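The reduction to an eigenproblem can be illustrated in the simplest setting: minimizing risk x^T C x under a fixed-norm constraint (a simplified stand-in for the paper's quadratic obligatory-deposits constraint; the return data below are synthetic). Stationary points of the Lagrangian satisfy C x = λ x, so the N eigendirections of the covariance matrix, each with two signs, give 2N candidate portfolios.

```python
import numpy as np

# Synthetic returns: 250 days, 4 assets (invented data for illustration)
rng = np.random.default_rng(1)
R = rng.standard_normal((250, 4))
C = np.cov(R, rowvar=False)                # sample covariance matrix

# Stationarity of x^T C x under ||x||^2 = 1 is the eigenproblem C x = λ x;
# eigh returns all N eigendirections with unit norm, eigenvalues ascending.
lam, V = np.linalg.eigh(C)
x_min_risk = V[:, 0]                       # eigenvector of smallest eigenvalue
risk = float(x_min_risk @ C @ x_min_risk)  # equals lam[0] at a stationary point
```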
Large-scale analysis of unconfined self-similar Rayleigh-Taylor turbulence
NASA Astrophysics Data System (ADS)
Soulard, Olivier; Griffond, Jérôme; Gréa, Benoît-Joseph
2015-09-01
The large-scale properties of unconfined Rayleigh-Taylor turbulence are investigated using an eddy-damped quasi-normal Markovianized approximation. Within this framework, turbulent spectra are shown to undergo, at late times and at large scales, an evolution dominated by nonlinear backscattering processes. As a result, the analysis predicts that large-scale initial conditions are eventually forgotten: there is no large-scale invariant and no equivalent of a principle of permanence of large eddies. Additional properties of Rayleigh-Taylor large scales are also discussed. In particular, their scaling and anisotropy are examined, with an emphasis put on the combined influence of buoyancy production and nonlinearities. The different assumptions and predictions of this work are verified by performing an implicit large eddy simulation of a Rayleigh-Taylor configuration.
Nonlinear Identification Using Orthogonal Forward Regression With Nested Optimal Regularization.
Hong, Xia; Chen, Sheng; Gao, Junbin; Harris, Chris J
2015-12-01
An efficient data-based modeling algorithm for nonlinear system identification is introduced for radial basis function (RBF) neural networks, with the aim of maximizing generalization capability based on the concept of leave-one-out (LOO) cross-validation. Each RBF kernel has its own kernel width parameter, and the basic idea is to optimize the multiple pairs of regularization parameters and kernel widths, each pair associated with one kernel, one at a time within the orthogonal forward regression (OFR) procedure. Thus, each OFR step consists of one model term selection based on the LOO mean square error (LOOMSE), followed by the optimization of the associated kernel width and regularization parameter, also based on the LOOMSE. Since, as in our previous state-of-the-art local regularization assisted orthogonal least squares (LROLS) algorithm, the same LOOMSE is adopted for model selection, the proposed new OFR algorithm is also capable of producing a very sparse RBF model with excellent generalization performance. Unlike the LROLS algorithm, which requires an additional iterative loop to optimize the regularization parameters as well as an additional procedure to optimize the kernel width, the proposed new OFR algorithm optimizes both the kernel widths and the regularization parameters within a single OFR procedure, and consequently the required computational complexity is dramatically reduced. Nonlinear system identification examples are included to demonstrate the effectiveness of this new approach in comparison to the well-known support vector machine and least absolute shrinkage and selection operator approaches, as well as to the LROLS algorithm.
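The LOOMSE at the heart of this procedure can be computed without explicitly refitting the model n times. A minimal sketch, for a fixed RBF design matrix and a single shared regularization parameter (a simplification of the paper's per-kernel scheme), uses the exact hat-matrix shortcut for ridge regression:

```python
import numpy as np

def loomse(Phi, y, lam):
    """Exact leave-one-out MSE for ridge regression y ~ Phi @ theta,
    via the hat-matrix shortcut e_loo_i = e_i / (1 - H_ii)."""
    H = Phi @ np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T)
    e = y - H @ y
    e_loo = e / (1.0 - np.diag(H))
    return np.mean(e_loo ** 2)

# Toy RBF design: Gaussian kernels centred on a few of the training points.
rng = np.random.default_rng(1)
x = np.linspace(-3, 3, 40)
y = np.sin(x) + 0.1 * rng.standard_normal(40)
centres, width = x[::8], 1.0
Phi = np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2 * width ** 2))

# Pick the regularization parameter by minimizing the LOOMSE on a grid.
lams = 10.0 ** np.arange(-6, 2)
best_lam = min(lams, key=lambda lam: loomse(Phi, y, lam))
```

The shortcut is exact for a fixed penalty (it follows from the Sherman-Morrison identity), which is what makes LOOMSE cheap enough to use inside every OFR step.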
Large-scale motions in the universe
Rubin, V.C.; Coyne, G.V.
1988-01-01
The present conference on the large-scale motions of the universe discusses the problems of two-dimensional and three-dimensional structures, large-scale velocity fields, the motion of the local group, small-scale microwave fluctuations, ab initio and phenomenological theories, and properties of galaxies at high and low redshift. Attention is given to the Pisces-Perseus supercluster, large-scale structure and motion traced by galaxy clusters, distances to galaxies in the field, the origin of the local flow of galaxies, the peculiar velocity field predicted by the distribution of IRAS galaxies, the effects of reionization on microwave background anisotropies, the theoretical implications of cosmological dipoles, and N-body simulations of a universe dominated by cold dark matter.
Survey on large scale system control methods
NASA Technical Reports Server (NTRS)
Mercadal, Mathieu
1987-01-01
The problems inherent to large scale systems, such as power networks, communication networks, and economic or ecological systems, were studied. The increase in size and flexibility of future spacecraft has put those dynamical systems into the category of large scale systems, and tools specific to this class of systems are being sought to design control systems that can guarantee more stability and better performance. Among several survey papers, reference was found to a thorough investigation of decentralized control methods. Especially helpful was the classification made of the different existing approaches for dealing with large scale systems. A very similar classification is used here, even though the papers surveyed are somewhat different from the ones reviewed in other papers. Special attention is given to the applicability of the existing methods to controlling large mechanical systems like large space structures. Some recent developments are added to this survey.
Large-scale nanophotonic phased array.
Sun, Jie; Timurdogan, Erman; Yaacobi, Ami; Hosseini, Ehsan Shah; Watts, Michael R
2013-01-10
Electromagnetic phased arrays at radio frequencies are well known and have enabled applications ranging from communications to radar, broadcasting and astronomy. The ability to generate arbitrary radiation patterns with large-scale phased arrays has long been pursued. Although it is extremely expensive and cumbersome to deploy large-scale radiofrequency phased arrays, optical phased arrays have a unique advantage in that the much shorter optical wavelength holds promise for large-scale integration. However, the short optical wavelength also imposes stringent requirements on fabrication. As a consequence, although optical phased arrays have been studied with various platforms and recently with chip-scale nanophotonics, all of the demonstrations so far are restricted to one-dimensional or small-scale two-dimensional arrays. Here we report the demonstration of a large-scale two-dimensional nanophotonic phased array (NPA), in which 64 × 64 (4,096) optical nanoantennas are densely integrated on a silicon chip within a footprint of 576 μm × 576 μm with all of the nanoantennas precisely balanced in power and aligned in phase to generate a designed, sophisticated radiation pattern in the far field. We also show that active phase tunability can be realized in the proposed NPA by demonstrating dynamic beam steering and shaping with an 8 × 8 array. This work demonstrates that a robust design, together with state-of-the-art complementary metal-oxide-semiconductor technology, allows large-scale NPAs to be implemented on compact and inexpensive nanophotonic chips. In turn, this enables arbitrary radiation pattern generation using NPAs and therefore extends the functionalities of phased arrays beyond conventional beam focusing and steering, opening up possibilities for large-scale deployment in applications such as communication, laser detection and ranging, three-dimensional holography and biomedical sciences, to name just a few.
Nonlinear singularly perturbed optimal control problems with singular arcs
NASA Technical Reports Server (NTRS)
Ardema, M. D.
1977-01-01
A third order, nonlinear, singularly perturbed optimal control problem is considered under assumptions which assure that the full problem is singular and the reduced problem is nonsingular. The separation between the singular arc of the full problem and the optimal control law of the reduced one, both of which are hypersurfaces in state space, is of the same order as the small parameter of the problem. Boundary layer solutions are constructed which are stable and reach the outer solution in a finite time. A uniformly valid composite solution is then formed from the reduced and boundary layer solutions. The value of the approximate solution is that it is relatively easy to obtain and does not involve singular arcs. To illustrate the utility of the results, the technique is used to obtain an approximate solution of a simplified version of the aircraft minimum time-to-climb problem. A numerical example is included.
Moon-based Earth Observation for Large Scale Geoscience Phenomena
NASA Astrophysics Data System (ADS)
Guo, Huadong; Liu, Guang; Ding, Yixing
2016-07-01
The capability of Earth observation for large-, global-scale natural phenomena needs to be improved, and new observing platforms are expected. We have studied the concept of the Moon as an Earth observation platform in recent years. Compared with man-made satellite platforms, Moon-based Earth observation can obtain multi-spherical, full-band, active and passive information, and it has the following advantages: a large observation range, variable view angles, long-term continuous observation, and an extra-long life cycle, with the characteristics of longevity, consistency, integrity, stability and uniqueness. Moon-based Earth observation is suitable for monitoring large-scale geoscience phenomena, including large-scale atmospheric change, large-scale ocean change, large-scale land-surface dynamic change, solid-earth dynamic change, etc. For the purpose of establishing a Moon-based Earth observation platform, we plan to study the following five aspects: mechanisms and models of Moon-based observation of macroscopic Earth-science phenomena; sensor parameter optimization and methods for Moon-based Earth observation; site selection and the environment of Moon-based Earth observation; the Moon-based Earth observation platform itself; and a fundamental scientific framework for Moon-based Earth observation.
US National Large-scale City Orthoimage Standard Initiative
Zhou, G.; Song, C.; Benjamin, S.; Schickler, W.
2003-01-01
The early procedures and algorithms for national digital orthophoto generation in the National Digital Orthophoto Program (NDOP) were based on earlier USGS mapping operations, such as field control, aerotriangulation (derived in the early 1920's), the quarter-quadrangle-centered format (3.75 minutes of longitude and latitude in geographic extent), 1:40,000 aerial photographs, and 2.5D digital elevation models. However, large-scale city orthophotos produced with these early procedures have disclosed many shortcomings, e.g., ghost images, occlusion, and shadow. Thus, providing the technical base (algorithms, procedures) and experience needed for large-scale city digital orthophoto creation is essential for the near-future national large-scale digital orthophoto deployment and for the revision of the Standards for National Large-scale City Digital Orthophoto in the NDOP. This paper reports our initial research results as follows: (1) high-precision 3D city DSM generation through LIDAR data processing; (2) spatial object/feature extraction using surface material information and high-accuracy 3D DSM data; (3) 3D city model development; (4) algorithm development for generation of DTM-based and DBM-based orthophotos; (5) true orthophoto generation by merging DBM-based and DTM-based orthophotos; and (6) automatic mosaicking by optimizing and combining imagery from many perspectives.
Handling inequality constraints in continuous nonlinear global optimization
Wang, Tao; Wah, B.W.
1996-12-31
In this paper, we present a new method to handle inequality constraints and apply it in NOVEL (Nonlinear Optimization via External Lead), a system we have developed for solving constrained continuous nonlinear optimization problems. In general, when Lagrange-multiplier methods are applied to these problems, inequality constraints are first converted into equivalent equality constraints. One such conversion method adds a slack variable to each inequality constraint in order to convert it into an equality constraint. The disadvantage of this conversion is that when the search is inside a feasible region, some satisfied constraints may still carry a non-zero weight in the Lagrangian function, leading to possible oscillations and divergence when a local optimum lies on the boundary of a feasible region. We propose a new conversion method, called the MaxQ method, in which all satisfied constraints in a feasible region always carry zero weight in the Lagrangian function; hence, minimizing the Lagrangian function in a feasible region always leads to local minima of the objective function. We demonstrate that oscillations do not happen in our method. We also propose methods to speed up convergence when a local optimum lies on the boundary of a feasible region. Finally, we show improved experimental results from applying our proposed method in NOVEL to some existing benchmark problems, and compare them to those obtained with the method based on slack variables.
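The contrast between the two conversions can be shown with a minimal numerical sketch (the `maxq_term` below is a simplified stand-in for the paper's MaxQ construction, and the function names are illustrative):

```python
def g(x):
    # inequality constraint in the standard form g(x) <= 0
    return x - 1.0

def slack_term(x, s):
    # Slack conversion: g(x) + s^2 = 0. Inside the feasible region the
    # equality only holds for a specially tuned s, so a satisfied
    # constraint can still contribute weight to the Lagrangian.
    return (g(x) + s ** 2) ** 2

def maxq_term(x):
    # MaxQ-style conversion: max(0, g(x))^2 is identically zero whenever
    # the constraint is satisfied, so it adds nothing in the feasible region.
    return max(0.0, g(x)) ** 2

x_feasible = 0.5                              # g(x) = -0.5 < 0, strictly feasible
assert maxq_term(x_feasible) == 0.0           # zero weight when satisfied
assert slack_term(x_feasible, s=0.0) > 0.0    # nonzero unless s = sqrt(-g(x))
```

This is exactly the behavior the abstract describes: with the slack form, a satisfied constraint still "pulls" on the search unless the slack variable is kept perfectly tuned, whereas the MaxQ-style term vanishes throughout the feasible region.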
Solving Large-scale Eigenvalue Problems in SciDACApplications
Yang, Chao
2005-06-29
Large-scale eigenvalue problems arise in a number of DOE applications. This paper provides an overview of recent developments in eigenvalue computation in the context of two SciDAC applications. We emphasize the importance of Krylov subspace methods and point out their limitations. We discuss the value of alternative approaches that are more amenable to the use of preconditioners, and report progress in using multi-level algebraic sub-structuring techniques to speed up eigenvalue calculations. In addition to methods for linear eigenvalue problems, we also examine new approaches to solving two types of non-linear eigenvalue problems arising from SciDAC applications.
A cooperative strategy for parameter estimation in large scale systems biology models
2012-01-01
Background Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows one to make experimentally verifiable predictions. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. Results A new approach for parameter estimation of large scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs (“threads”) that run in parallel on different processors. Each thread implements a state-of-the-art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and speeds up its performance. Two parameter estimation problems involving models related to the central carbon metabolism of E. coli, which include different regulatory levels (metabolic and transcriptional), are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. Conclusions The cooperative CeSS strategy is a general-purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases. The cooperative metaheuristic presented here can be easily extended
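The cooperation idea can be sketched in a toy form: below, a plain local random search stands in for the eSS metaheuristic, and a periodic broadcast of the best point stands in for inter-thread communication. All names and parameter values are illustrative, not CeSS itself:

```python
import numpy as np

def rastrigin(x):
    # a standard multimodal benchmark, standing in for the calibration cost
    return 10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

def cooperative_search(n_threads=4, dim=5, epochs=20, steps=200, seed=0):
    """Toy stand-in for CeSS: several local random searches ("threads")
    that share the best solution found after every epoch."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-5, 5, size=(n_threads, dim))
    for _ in range(epochs):
        for t in range(n_threads):
            for _ in range(steps):
                cand = pts[t] + rng.normal(0, 0.1, size=dim)
                if rastrigin(cand) < rastrigin(pts[t]):
                    pts[t] = cand
        # cooperation step: every thread restarts near the current best point
        best = min(pts, key=rastrigin)
        pts[:] = best + rng.normal(0, 0.05, size=(n_threads, dim))
    return min(pts, key=rastrigin)

best = cooperative_search()
```

Even in this crude form, the broadcast step means a thread trapped in a poor basin is reseeded near the best solution found by any thread, which is the "modified systemic properties" effect the abstract describes.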
Time-optimal quantum control of nonlinear two-level systems
NASA Astrophysics Data System (ADS)
Chen, Xi; Ban, Yue; Hegerfeldt, Gerhard C.
2016-08-01
Nonlinear two-level Landau-Zener type equations for systems with relevance for Bose-Einstein condensates and nonlinear optics are considered and the minimal time Tmin to drive an initial state to a given target state is investigated. Surprisingly, the nonlinearity may be canceled by a time-optimal unconstrained driving and Tmin becomes independent of the nonlinearity. For constrained and unconstrained driving explicit expressions are derived for Tmin, the optimal driving, and the protocol.
Sensitivity analysis for large-scale problems
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Whitworth, Sandra L.
1987-01-01
The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.
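The kind of sensitivity derivative discussed here can be sketched by direct differentiation of a static system K(p)u = f: differentiating in the design parameter p gives K du/dp = df/dp - (dK/dp)u, a second solve with the already-factored stiffness matrix. The two-degree-of-freedom model below is a hypothetical illustration, not one of the paper's framed structures:

```python
import numpy as np

# Static analysis K(p) u = f for a two-spring series system; the design
# parameter p scales the first spring's stiffness.
def K(p):
    return np.array([[p + 2.0, -2.0],
                     [-2.0,     2.0]])

dK_dp = np.array([[1.0, 0.0],
                  [0.0, 0.0]])
f = np.array([0.0, 1.0])          # load independent of p, so df/dp = 0

p0 = 3.0
u = np.linalg.solve(K(p0), f)
# Direct differentiation: K du/dp = -(dK/dp) u
du_dp = np.linalg.solve(K(p0), -dK_dp @ u)

# Cross-check against a central finite-difference approximation.
h = 1e-6
fd = (np.linalg.solve(K(p0 + h), f) - np.linalg.solve(K(p0 - h), f)) / (2 * h)
assert np.allclose(du_dp, fd, atol=1e-6)
```

The appeal for reanalysis of large structures is that the same factorization of K serves both the displacement solve and every sensitivity solve.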
ARPACK: Solving large scale eigenvalue problems
NASA Astrophysics Data System (ADS)
Lehoucq, Rich; Maschhoff, Kristi; Sorensen, Danny; Yang, Chao
2013-11-01
ARPACK is a collection of Fortran77 subroutines designed to solve large scale eigenvalue problems. The package is designed to compute a few eigenvalues and corresponding eigenvectors of a general n by n matrix A. It is most appropriate for large sparse or structured matrices A, where structured means that a matrix-vector product w ← Av requires order n rather than the usual order n² floating point operations.
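SciPy's `eigs`/`eigsh` routines wrap ARPACK, and since only the matrix-vector product is required, a "structured" operator never needs to be formed explicitly; it can be supplied as a `LinearOperator`:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

# ARPACK needs only the action v -> A @ v, so here a diagonal operator is
# represented by the vector of its diagonal entries: an O(n) matvec.
n = 1000
d = np.linspace(1.0, 2.0, n)          # the operator's spectrum

A = LinearOperator((n, n), matvec=lambda v: d * v, dtype=float)

# Six largest eigenvalues via the ARPACK symmetric driver wrapped by eigsh;
# for this operator they are simply the six largest entries of d.
vals = eigsh(A, k=6, which="LA", return_eigenvectors=False)
```

The same pattern applies to any structured matrix (FFT-based, tridiagonal, matrix-free PDE operators): implement the matvec, and ARPACK's implicitly restarted Arnoldi/Lanczos iteration does the rest.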
A Large Scale Computer Terminal Output Controller.
ERIC Educational Resources Information Center
Tucker, Paul Thomas
This paper describes the design and implementation of a large scale computer terminal output controller which supervises the transfer of information from a Control Data 6400 Computer to a PLATO IV data network. It discusses the cost considerations leading to the selection of educational television channels rather than telephone lines for…
Management of large-scale technology
NASA Technical Reports Server (NTRS)
Levine, A.
1985-01-01
Two major themes are addressed in this assessment of the management of large-scale NASA programs: (1) how a high-technology agency fared in a decade marked by a rapid expansion of funds and manpower in the first half and an almost as rapid contraction in the second; and (2) how NASA combined central planning and control with decentralized project execution.
Evaluating Large-Scale Interactive Radio Programmes
ERIC Educational Resources Information Center
Potter, Charles; Naidoo, Gordon
2009-01-01
This article focuses on the challenges involved in conducting evaluations of interactive radio programmes in South Africa with large numbers of schools, teachers, and learners. It focuses on the role such large-scale evaluation has played during the South African radio learning programme's development stage, as well as during its subsequent…
OPTIMIZE-M. Nonlinear Global Optimization Using Curdling Algorithm in Mathematica Environment
Loehle, C.
1997-07-01
A derivative-free, grid-refinement algorithm for nonlinear optimization was developed and implemented in software as OPTIMIZE. This approach overcomes a number of deficiencies in existing approaches; most notably, it finds extremal regions rather than only single extremal points. The program is interactive and collects information on control parameters and constraints using menus. For up to two (and potentially three) dimensions, function convergence is displayed graphically. Because the algorithm does not compute derivatives, gradients, or vectors, it is numerically stable. It can find all the roots of a polynomial in one pass. It is an inherently parallel algorithm. OPTIMIZE-M is a modification of OPTIMIZE designed for use within the Mathematica environment created by Wolfram Research.
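A minimal sketch of a derivative-free grid-refinement search in the spirit of (though far simpler than) the curdling algorithm, shown in 1D: evaluate the objective on a grid, keep the best fraction of points (the "extremal region"), and refine around them.

```python
import numpy as np

def grid_refine(f, lo, hi, rounds=8, pts=33, keep=0.1):
    """Derivative-free grid-refinement minimizer (1D sketch, not the
    actual curdling algorithm): repeatedly evaluate f on a grid, keep
    the best fraction of points, and shrink the interval around them."""
    for _ in range(rounds):
        x = np.linspace(lo, hi, pts)
        y = f(x)
        best = np.argsort(y)[: max(1, int(keep * pts))]
        step = (hi - lo) / (pts - 1)
        lo, hi = x[best].min() - step, x[best].max() + step
    x = np.linspace(lo, hi, pts)
    return x[np.argmin(f(x))]

# Minimum of cos on [0, 2*pi] lies at pi.
xmin = grid_refine(np.cos, 0.0, 2 * np.pi)
```

Because only function values are used, the scheme is insensitive to noise in derivatives, and keeping a *set* of best points (rather than a single incumbent) is what lets this family of methods track extremal regions.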
NASA Astrophysics Data System (ADS)
Duan, Wansuo; Xue, Feng; Mu, Mu
2009-09-01
We use the approach of conditional nonlinear optimal perturbation (CNOP) to investigate the optimal precursory disturbances in a theoretical El Niño-Southern Oscillation (ENSO) model and then an intermediate model. By exploring the dynamical behaviors of the El Niño events caused by these CNOP-type precursors, a characteristic of this kind of theoretical El Niño event is shown: the stronger El Niño events tend to decay faster and have shorter decaying phases. By examining the observed El Niño events, it is found that the Niño-3.4 SSTA has more potential than the Niño-3 SSTA for illustrating the decaying characteristic of the theoretical El Niño events. In particular, it is the Niño-3.4 indices for the strong El Niño events during 1981-2007 that roughly show the decaying characteristic. Based on the physics of the theoretical model, the mechanism responsible for the above decaying characteristic of strong El Niño events is explored. The analysis demonstrates that the property of the stronger El Niño event decaying faster can be realized through the linear dynamics alone, with the combined effects of the rising of the thermocline and the mean upwelling, but the property of the stronger El Niño event having a shorter decaying phase results from a nonlinear mechanism. It is shown that the nonlinearity related to the anomalous temperature advection in the tropical Pacific shortens the duration of the decaying phase of an El Niño event. The stronger the El Niño event, the stronger the nonlinearity, and the more strongly the nonlinearity shortens the decaying phase. This explains why the stronger El Niño events have shorter decaying phases, and it sheds light on why the observed strong El Niño events are more likely to show this characteristic.
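The CNOP approach itself can be sketched as a constrained optimization: among all initial perturbations within a given norm bound, find the one whose fully nonlinear evolution departs furthest from the reference trajectory. The two-variable map below is a purely illustrative toy, not the theoretical or intermediate ENSO model of the paper:

```python
import numpy as np
from scipy.optimize import minimize

def step(x):
    # a toy nonlinear 2D map, standing in for the paper's ENSO models
    return np.array([0.9 * x[0] + 0.3 * x[1] + 0.1 * x[1] ** 2,
                     0.7 * x[1] - 0.1 * x[0] * x[1]])

def evolve(x, n=10):
    for _ in range(n):
        x = step(x)
    return x

x_ref = np.array([0.1, 0.0])       # reference initial state
delta = 0.05                       # norm bound on the initial perturbation

def neg_growth(p):                 # minimized, so nonlinear growth is maximized
    return -np.linalg.norm(evolve(x_ref + p) - evolve(x_ref))

res = minimize(neg_growth, np.array([delta, 0.0]), method="SLSQP",
               constraints=[{"type": "ineq",
                             "fun": lambda p: delta ** 2 - p @ p}])
cnop = res.x                       # the CNOP-type precursor for this toy map
```

The key distinction from linear singular-vector analysis is that `evolve` is the full nonlinear model, so the optimal perturbation can differ from the leading linear mode.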
Design optimization of a twist compliant mechanism with nonlinear stiffness
NASA Astrophysics Data System (ADS)
Tummala, Y.; Frecker, M. I.; Wissa, A. A.; Hubbard, J. E., Jr.
2014-10-01
A contact-aided compliant mechanism called a twist compliant mechanism (TCM) is presented in this paper. This mechanism has nonlinear stiffness when it is twisted in both directions along its axis. The inner core of the mechanism is primarily responsible for its flexibility in one twisting direction. The contact surfaces of the cross-members and compliant sectors are primarily responsible for its high stiffness in the opposite direction. A desired twist angle in a given direction can be achieved by tailoring the stiffness of a TCM, which can be done by varying the thickness of its cross-members, core, and sectors. A multi-objective optimization problem with three objective functions is proposed in this paper and used to design an optimal TCM with a desired twist angle. The objective functions are to minimize the mass and the maximum von Mises stress observed, while minimizing or maximizing the twist angles under specific loading conditions. The multi-objective optimization problem proposed in this paper is solved for an ornithopter flight research platform as a case study, with the goal of using the TCM to achieve passive twisting of the wing during the upstroke, while keeping the wing fully extended and rigid during the downstroke. Prototype TCMs have been fabricated using 3D printing and tested. Testing results are also presented in this paper.
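A hedged sketch of the weighted-sum scalarization commonly used for such multi-objective problems (the one-variable surrogate objectives below are illustrative stand-ins, not the paper's FEA-based mass, stress, and twist objectives):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy surrogates in a single design variable t (a stand-in for
# cross-member thickness); all three functions are hypothetical.
mass       = lambda t: t          # mass grows with thickness
max_stress = lambda t: 1.0 / t    # peak stress falls with thickness
neg_twist  = lambda t: t          # stiffer section -> smaller twist angle

def scalarized(t, w=(1.0, 1.0, 1.0)):
    # weighted-sum scalarization of the three competing objectives
    return w[0] * mass(t) + w[1] * max_stress(t) + w[2] * neg_twist(t)

res = minimize_scalar(scalarized, bounds=(0.1, 5.0), method="bounded")
# For equal weights this reduces to minimizing 2t + 1/t, optimum t = 1/sqrt(2).
```

Sweeping the weight vector w and re-solving traces out candidate Pareto-optimal designs, which is one standard way problems like the TCM's three-objective formulation are explored.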
Large-scale Advanced Propfan (LAP) program
NASA Technical Reports Server (NTRS)
Sagerser, D. A.; Ludemann, S. G.
1985-01-01
The propfan is an advanced propeller concept which maintains the high efficiencies traditionally associated with conventional propellers at the higher aircraft cruise speeds associated with jet transports. The large-scale advanced propfan (LAP) program extends the research done on 2 ft diameter propfan models to a 9 ft diameter article. The program includes design, fabrication, and testing of both an eight bladed, 9 ft diameter propfan, designated SR-7L, and a 2 ft diameter aeroelastically scaled model, SR-7A. The LAP program is complemented by the propfan test assessment (PTA) program, which takes the large-scale propfan and mates it with a gas generator and gearbox to form a propfan propulsion system and then flight tests this system on the wing of a Gulfstream 2 testbed aircraft.
Fractals and cosmological large-scale structure
NASA Technical Reports Server (NTRS)
Luo, Xiaochun; Schramm, David N.
1992-01-01
Observations of galaxy-galaxy and cluster-cluster correlations as well as other large-scale structure can be fit with a 'limited' fractal with dimension D of about 1.2. This is not a 'pure' fractal out to the horizon: the distribution shifts from power law to random behavior at some large scale. If the observed patterns and structures are formed through an aggregation growth process, the fractal dimension D can serve as an interesting constraint on the properties of the stochastic motion responsible for limiting the fractal structure. In particular, it is found that the observed fractal should have grown from two-dimensional sheetlike objects such as pancakes, domain walls, or string wakes. This result is generic and does not depend on the details of the growth process.
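The fractal dimension D used as a constraint here is, in practice, estimated by box counting: count the occupied boxes N(s) at several box sizes s and fit log N(s) against log s. A sketch, verified on the middle-thirds Cantor set, whose dimension is log 2 / log 3 ≈ 0.631:

```python
import numpy as np

def box_counting_dimension(points, scales):
    """Estimate the box-counting dimension of a point set: count occupied
    boxes N(s) at each box size s and fit log N(s) ~ -D log s."""
    counts = []
    for s in scales:
        boxes = np.unique(np.floor(points / s), axis=0)
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return -slope

def cantor(level):
    # left endpoints of the middle-thirds Cantor construction,
    # nudged into the interior to avoid floor() boundary artifacts
    pts = np.array([0.0])
    for _ in range(level):
        pts = np.concatenate([pts / 3, pts / 3 + 2 / 3])
    return (pts + 0.5 * 3.0 ** -level)[:, None]

D = box_counting_dimension(cantor(10), scales=3.0 ** -np.arange(2, 8))
```

The same routine applied to a galaxy catalogue (points in 3D) over an appropriate range of scales would give the D ≈ 1.2 figure discussed in the abstract, with the crossover to homogeneity appearing as a break in the log-log fit.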
Condition Monitoring of Large-Scale Facilities
NASA Technical Reports Server (NTRS)
Hall, David L.
1999-01-01
This document provides a summary of the research conducted for the NASA Ames Research Center under grant NAG2-1182 (Condition-Based Monitoring of Large-Scale Facilities). The information includes copies of view graphs presented at NASA Ames in the final Workshop (held during December of 1998), as well as a copy of a technical report provided to the COTR (Dr. Anne Patterson-Hine) subsequent to the workshop. The material describes the experimental design, collection of data, and analysis results associated with monitoring the health of large-scale facilities. In addition to this material, a copy of the Pennsylvania State University Applied Research Laboratory data fusion visual programming tool kit was also provided to NASA Ames researchers.
Optimization of microscopic and macroscopic second order optical nonlinearities
NASA Technical Reports Server (NTRS)
Marder, Seth R.; Perry, Joseph W.
1993-01-01
Nonlinear optical materials (NLO) can be used to extend the useful frequency range of lasers. Frequency generation is important for laser-based remote sensing and optical data storage. Another NLO effect, the electro-optic effect, can be used to modulate the amplitude, phase, or polarization state of an optical beam. Applications of this effect in telecommunications and in integrated optics include the impression of information on an optical carrier signal or routing of optical signals between fiber optic channels. In order to utilize these effects most effectively, it is necessary to synthesize materials which respond to applied fields very efficiently. In this talk, it will be shown how the development of a fundamental understanding of the science of nonlinear optics can lead to a rational approach to organic molecules and materials with optimized properties. In some cases, figures of merit for newly developed materials are more than an order of magnitude higher than those of currently employed materials. Some of these materials are being examined for phased-array radar and other electro-optic switching applications.
Optimized interpolations and nonlinearity in numerical studies of woodwind instruments
NASA Astrophysics Data System (ADS)
Skouroupathis, Apostolos
2005-04-01
The impedance spectra of woodwind instruments with arbitrary axisymmetric geometry are studied. Piecewise interpolations of the instruments' profile are performed, using interpolating functions amenable to analytic solutions of the Webster equation. Our algorithm optimizes on the choice of such functions, while ensuring compatibility of wave-fronts at the joining points. Employing a standard mathematical model of a single-reed mouthpiece, as well as the time-domain reflection function which is derived from our impedance results, the Schumacher equation is solved for the pressure evolution in time. Analytic checks are made to verify that, despite the nonlinearity in the reed model and in the evolution equation, solutions are unique and singularity-free.
Optimal operating points of oscillators using nonlinear resonators
Kenig, Eyal; Cross, M. C.; Villanueva, L. G.; Karabalin, R. B.; Matheny, M. H.; Lifshitz, Ron; Roukes, M. L.
2013-01-01
We demonstrate an analytical method for calculating the phase sensitivity of a class of oscillators whose phase does not affect the time evolution of the other dynamic variables. We show that such oscillators possess the possibility for complete phase noise elimination. We apply the method to a feedback oscillator which employs a high Q weakly nonlinear resonator and provide explicit parameter values for which the feedback phase noise is completely eliminated and others for which there is no amplitude-phase noise conversion. We then establish an operational mode of the oscillator which optimizes its performance by diminishing the feedback noise in both quadratures, thermal noise, and quality factor fluctuations. We also study the spectrum of the oscillator and provide specific results for the case of 1/f noise sources. PMID:23214857
Large-scale fibre-array multiplexing
Cheremiskin, I V; Chekhlova, T K
2001-05-31
The possibility of creating a fibre multiplexer/demultiplexer with large-scale multiplexing, without any basic restrictions on the number of channels or the spectral spacing between them, is shown. The operating capacity of a fibre multiplexer based on a four-fibre array ensuring a spectral spacing of 0.7 pm (≈10 GHz) between channels is demonstrated. (laser applications and other topics in quantum electronics)
Large-scale neuromorphic computing systems
NASA Astrophysics Data System (ADS)
Furber, Steve
2016-10-01
Neuromorphic computing covers a diverse range of approaches to information processing all of which demonstrate some degree of neurobiological inspiration that differentiates them from mainstream conventional computing systems. The philosophy behind neuromorphic computing has its origins in the seminal work carried out by Carver Mead at Caltech in the late 1980s. This early work influenced others to carry developments forward, and advances in VLSI technology supported steady growth in the scale and capability of neuromorphic devices. Recently, a number of large-scale neuromorphic projects have emerged, taking the approach to unprecedented scales and capabilities. These large-scale projects are associated with major new funding initiatives for brain-related research, creating a sense that the time and circumstances are right for progress in our understanding of information processing in the brain. In this review we present a brief history of neuromorphic engineering then focus on some of the principal current large-scale projects, their main features, how their approaches are complementary and distinct, their advantages and drawbacks, and highlight the sorts of capabilities that each can deliver to neural modellers.
Large-Scale Visual Data Analysis
NASA Astrophysics Data System (ADS)
Johnson, Chris
2014-04-01
Modern high performance computers have speeds measured in petaflops and handle data set sizes measured in terabytes and petabytes. Although these machines offer enormous potential for solving very large-scale realistic computational problems, their effectiveness will hinge upon the ability of human experts to interact with their simulation results and extract useful information. One of the greatest scientific challenges of the 21st century is to effectively understand and make use of the vast amount of information being produced. Visual data analysis will be among our most important tools in helping to understand such large-scale information. Our research at the Scientific Computing and Imaging (SCI) Institute at the University of Utah has focused on innovative, scalable techniques for large-scale 3D visual data analysis. In this talk, I will present state-of-the-art visualization techniques, including scalable visualization algorithms and software, cluster-based visualization methods, and innovative visualization techniques applied to problems in computational science, engineering, and medicine. I will conclude with an outline of future high-performance visualization research challenges and opportunities.
Large scale processes in the solar nebula.
NASA Astrophysics Data System (ADS)
Boss, A. P.
Most proposed chondrule formation mechanisms involve processes occurring inside the solar nebula, so the large scale (roughly 1 to 10 AU) structure of the nebula is of general interest for any chondrule-forming mechanism. Chondrules and Ca, Al-rich inclusions (CAIs) might also have been formed as a direct result of the large scale structure of the nebula, such as passage of material through high temperature regions. While recent nebula models do predict the existence of relatively hot regions, the maximum temperatures in the inner planet region may not be high enough to account for chondrule or CAI thermal processing, unless the disk mass is considerably greater than the minimum mass necessary to restore the planets to solar composition. Furthermore, it does not seem to be possible to achieve both rapid heating and rapid cooling of grain assemblages in such a large scale furnace. However, if the accretion flow onto the nebula surface is clumpy, as suggested by observations of variability in young stars, then clump-disk impacts might be energetic enough to launch shock waves which could propagate through the nebula to the midplane, thermally processing any grain aggregates they encounter, and leaving behind a trail of chondrules.
Slow, large scales from fast, small ones in dispersive wave turbulence
NASA Astrophysics Data System (ADS)
Smith, Leslie; Waleffe, Fabian
2000-11-01
Dispersive wave turbulence in systems of geophysical interest (beta-plane, rotating, stratified and rotating-stratified flows) has been simulated with random, isotropic small scale forcing and hyper-viscosity. This can be thought of as a Langevin model of the small space-time scales only with potential implications for climate modeling. In all cases, slow, coherent large scales are generated after long times of 2nd order in the nonlinear time scale. These slow, large scales ultimately dominate the flows. Beta-plane and rotating flow results were reported earlier [PoF 11, 1608]. In stratified flows, the energy accumulates in a 1D vertically sheared flow at selected large scales. As the rotation rate is increased, a progressive transition toward generation of all large scale vortical zero modes (quasi-geostrophic 3D flow) is observed. For yet higher rotation rate, energy accumulates primarily in a 2D quasi-geostrophic flow (cyclonic vortices) at all large scales.
Nonlinearly-constrained optimization using asynchronous parallel generating set search.
Griffin, Joshua D.; Kolda, Tamara Gibson
2007-05-01
Many optimization problems in computational science and engineering (CS&E) are characterized by expensive objective and/or constraint function evaluations paired with a lack of derivative information. Direct search methods such as generating set search (GSS) are well understood and efficient for derivative-free optimization of unconstrained and linearly-constrained problems. This paper addresses the more difficult problem of general nonlinear programming where derivatives for objective or constraint functions are unavailable, which is the case for many CS&E applications. We focus on penalty methods that use GSS to solve the linearly-constrained problems, comparing different penalty functions. A classical choice for penalizing constraint violations is ℓ₂², the squared ℓ₂ norm, which has advantages for derivative-based optimization methods. In our numerical tests, however, we show that exact penalty functions based on the ℓ₁, ℓ₂, and ℓ∞ norms converge to good approximate solutions more quickly and thus are attractive alternatives. Unfortunately, exact penalty functions are nondifferentiable and consequently introduce theoretical problems that degrade the final solution accuracy, so we also consider smoothed variants. Smoothed-exact penalty functions are theoretically attractive because they retain the differentiability of the original problem. Numerically, they are a compromise between exact and ℓ₂², i.e., they converge to a good solution somewhat quickly without sacrificing much solution accuracy. Moreover, the smoothing is parameterized and can potentially be adjusted to balance the two considerations. Since many CS&E optimization problems are characterized by expensive function evaluations, reducing the number of function evaluations is paramount, and the results of this paper show that exact and smoothed-exact penalty functions are well-suited to this task.
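The penalty functions compared in this abstract are easy to sketch concretely. Below is a minimal illustration (not the paper's implementation) of the ℓ₂², ℓ₁, and ℓ∞ penalties, plus a Huber-style smoothed ℓ₁ variant; the constraint-violation vector and the smoothing parameter alpha are made-up examples.

```python
import numpy as np

def penalty_l2_squared(c):
    # Smooth classical penalty: sum of squared violations.
    return np.sum(c**2)

def penalty_l1(c):
    # Exact penalty, but nondifferentiable at c = 0.
    return np.sum(np.abs(c))

def penalty_linf(c):
    # Exact penalty that charges only the worst violation.
    return np.max(np.abs(c))

def penalty_l1_smoothed(c, alpha=0.1):
    # Huber-style smoothing of the l1 penalty: quadratic near zero,
    # linear in the tails, so differentiability is retained.
    small = np.abs(c) <= alpha
    return np.sum(np.where(small, c**2 / (2 * alpha), np.abs(c) - alpha / 2))

# Violations of two hypothetical constraints at a trial point.
c = np.array([0.05, -0.3])
for name, p in [("l2^2", penalty_l2_squared), ("l1", penalty_l1),
                ("linf", penalty_linf), ("smoothed l1", penalty_l1_smoothed)]:
    print(name, p(c))
```

The smoothed variant behaves like ℓ₂² for small violations and like ℓ₁ for large ones, which is the compromise the abstract describes.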
Nogaret, Alain; Meliza, C. Daniel; Margoliash, Daniel; Abarbanel, Henry D. I.
2016-01-01
We report on the construction of neuron models by assimilating electrophysiological data with large-scale constrained nonlinear optimization. The method implements interior point line parameter search to determine parameters from the responses to intracellular current injections of zebra finch HVC neurons. We incorporated these parameters into a nine ionic channel conductance model to obtain completed models which we then use to predict the state of the neuron under arbitrary current stimulation. Each model was validated by successfully predicting the dynamics of the membrane potential induced by 20–50 different current protocols. The dispersion of parameters extracted from different assimilation windows was studied. Differences in constraints from current protocols, stochastic variability in neuron output, and noise behave as a residual temperature which broadens the global minimum of the objective function to an ellipsoid domain whose principal axes follow an exponentially decaying distribution. The maximum likelihood expectation of extracted parameters was found to provide an excellent approximation of the global minimum and yields highly consistent kinetics for both neurons studied. Large scale assimilation absorbs the intrinsic variability of electrophysiological data over wide assimilation windows. It builds models in an automatic manner treating all data as equal quantities and requiring minimal additional insight. PMID:27605157
NASA Astrophysics Data System (ADS)
Bhat, Pallavi; Ebrahimi, Fatima; Blackman, Eric G.
2016-10-01
We study the dynamo generation (exponential growth) of large-scale (planar averaged) fields in unstratified shearing box simulations of the magnetorotational instability (MRI). In contrast to previous studies restricted to horizontal (x-y) averaging, we also demonstrate the presence of large-scale fields when vertical (y-z) averaging is employed instead. By computing space-time planar averaged fields and power spectra, we find large-scale dynamo action in the early MRI growth phase - a previously unidentified feature. Non-axisymmetric linear MRI modes with low horizontal wavenumbers and vertical wavenumbers near that of expected maximal growth, amplify the large-scale fields exponentially before turbulence and high wavenumber fluctuations arise. Thus the large-scale dynamo requires only linear fluctuations but not non-linear turbulence (as defined by mode-mode coupling). Vertical averaging also allows for monitoring the evolution of the large-scale vertical field and we find that a feedback from horizontal low wavenumber MRI modes provides a clue as to why the large-scale vertical field sustains against turbulent diffusion in the non-linear saturation regime. We compute the terms in the mean field equations to identify the individual contributions to large-scale field growth for both types of averaging. The large-scale fields obtained from vertical averaging are found to compare well with global simulations and quasi-linear analytical analysis from a previous study by Ebrahimi & Blackman. We discuss the potential implications of these new results for understanding the large-scale MRI dynamo saturation and turbulence.
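The two averaging conventions contrasted in this abstract are simple to state in code. The sketch below builds a toy field (a large-scale mode varying in x plus small-scale noise, not MRI simulation data) and shows that the x-dependent mode survives vertical (y-z) averaging but cancels under horizontal (x-y) averaging.

```python
import numpy as np

rng = np.random.default_rng(3)
nx = ny = nz = 32
x = np.linspace(0, 2 * np.pi, nx, endpoint=False)
# Toy field: large-scale mode varying in x only, plus small-scale noise.
B = np.sin(x)[:, None, None] * np.ones((nx, ny, nz)) \
    + 0.1 * rng.standard_normal((nx, ny, nz))

B_xy = B.mean(axis=(0, 1))   # horizontal (x-y) average: a profile in z
B_yz = B.mean(axis=(1, 2))   # vertical (y-z) average: a profile in x

# The x-dependent mode is invisible to horizontal averaging but is
# recovered cleanly by vertical averaging.
print(np.abs(B_yz).max(), np.abs(B_xy).max())
```

This is why restricting attention to horizontal averaging, as earlier studies did, can miss large-scale fields with horizontal structure.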
A Nonlinear Fuel Optimal Reaction Jet Control Law
Breitfeller, E.; Ng, L.C.
2002-06-30
We derive a nonlinear fuel optimal attitude control system (ACS) that drives the final state to the desired state according to a cost function that weights the final state angular error relative to the angular rate error. Control is achieved by allowing the pulse-width-modulated (PWM) commands to begin and end anywhere within a control cycle, achieving a pulse width pulse time (PWPT) control. We show through a MATLAB® Simulink model that this steady-state condition may be accomplished, in the absence of sensor noise or model uncertainties, with the theoretical minimum number of actuator cycles. The ability to analytically achieve near-zero drift rates is particularly important in applications such as station-keeping and sensor imaging. Consideration is also given to the fact that, for relatively small sensor and model errors, the controller requires significantly fewer actuator cycles to reach the final state error than a traditional proportional-integral-derivative (PID) controller. The optimal PWPT attitude controller may be applicable for a high performance kinetic energy kill vehicle.
Large Scale Bacterial Colony Screening of Diversified FRET Biosensors
Litzlbauer, Julia; Schifferer, Martina; Ng, David; Fabritius, Arne; Thestrup, Thomas; Griesbeck, Oliver
2015-01-01
Biosensors based on Förster Resonance Energy Transfer (FRET) between fluorescent protein mutants have started to revolutionize physiology and biochemistry. However, many types of FRET biosensors show relatively small FRET changes, making measurements with these probes challenging when used under sub-optimal experimental conditions. Thus, a major effort in the field currently lies in designing new optimization strategies for these types of sensors. Here we describe procedures for optimizing FRET changes by large scale screening of mutant biosensor libraries in bacterial colonies. We describe optimization of biosensor expression, permeabilization of bacteria, software tools for analysis, and screening conditions. The procedures reported here may help in improving FRET changes in multiple suitable classes of biosensors. PMID:26061878
Locally Biased Galaxy Formation and Large-Scale Structure
NASA Astrophysics Data System (ADS)
Narayanan, Vijay K.; Berlind, Andreas A.; Weinberg, David H.
2000-01-01
We examine the influence of the morphology-density relation and a wide range of simple models for biased galaxy formation on statistical measures of large-scale structure. We contrast the behavior of local biasing models, in which the efficiency of galaxy formation is determined by the density, geometry, or velocity dispersion of the local mass distribution, with that of nonlocal biasing models, in which galaxy formation is modulated coherently over scales larger than the galaxy correlation length. If morphological segregation of galaxies is governed by a local morphology-density relation, then the correlation function of E/S0 galaxies should be steeper and stronger than that of spiral galaxies on small scales, as observed, while on large scales the E/S0 and spiral galaxies should have correlation functions with the same shape but different amplitudes. Similarly, all of our local bias models produce scale-independent amplification of the correlation function and power spectrum in the linear and mildly nonlinear regimes; only a nonlocal biasing mechanism can alter the shape of the power spectrum on large scales. Moments of the biased galaxy distribution retain the hierarchical pattern of the mass moments, but biasing alters the values and scale dependence of the hierarchical amplitudes S3 and S4. Pair-weighted moments of the galaxy velocity distribution are sensitive to the details of the bias prescription even if galaxies have the same local velocity distribution as the underlying dark matter. The nonlinearity of the relation between galaxy density and mass density depends on the biasing prescription and the smoothing scale, and the scatter in this relation is a useful diagnostic of the physical parameters that determine the bias. While the assumption that galaxy formation is governed by local physics leads to some important simplifications on large scales, even local biasing is a multifaceted phenomenon whose impact cannot be described by a single parameter or
A visualization framework for large-scale virtual astronomy
NASA Astrophysics Data System (ADS)
Fu, Chi-Wing
Motivated by advances in modern positional astronomy, this research attempts to digitally model the entire Universe through computer graphics technology. Our first challenge is space itself. The gigantic size of the Universe makes it impossible to put everything into a typical graphics system at its own scale; the graphics rendering process can easily fail because of limited computational precision. The second challenge is that the enormous amount of data could slow down the graphics; we need clever techniques to speed up the rendering. Third, since the Universe is dominated by empty space, objects are widely separated; this makes navigation difficult. We attempt to tackle these problems through various techniques designed to extend and optimize the conventional graphics framework, including the following: power homogeneous coordinates for large-scale spatial representations, generalized large-scale spatial transformations, and rendering acceleration via environment caching and object disappearance criteria. Moreover, we implemented an assortment of techniques for modeling and rendering a variety of astronomical bodies, ranging from the Earth up to faraway galaxies, and attempted to visualize cosmological time; a method we call the Lightcone representation was introduced to visualize the whole space-time of the Universe at a single glance. In addition, several navigation models were developed to handle the large-scale navigation problem. Our final results include a collection of visualization tools, two educational animations appropriate for planetarium audiences, and rendering techniques that advance the state of the art and can be transferred to practice in digital planetarium systems.
Reliability assessment for components of large scale photovoltaic systems
NASA Astrophysics Data System (ADS)
Ahadi, Amir; Ghadimi, Noradin; Mirabbasi, Davar
2014-10-01
Photovoltaic (PV) systems have significantly shifted from independent power generation systems to large-scale grid-connected generation systems in recent years. The power output of PV systems is affected by the reliability of various components in the system. This study proposes an analytical approach to evaluate the reliability of large-scale, grid-connected PV systems. The fault tree method with an exponential probability distribution function is used to analyze the components of large-scale PV systems. The system is considered in the various sequential and parallel fault combinations in order to find all realistic ways in which the top or undesired events can occur. Additionally, it can identify areas on which planned maintenance should focus. By monitoring the critical components of a PV system, it is possible not only to improve the reliability of the system, but also to optimize the maintenance costs. The latter is achieved by informing the operators about the status of the system components. This approach can be used to ensure secure operation of the system through its flexibility in monitoring system applications. The implementation demonstrates that the proposed method is effective and efficient and can conveniently incorporate more system maintenance plans and diagnostic strategies.
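The reliability model described here combines exponential component lifetimes, R_i(t) = exp(-λ_i t), with series blocks (reliabilities multiply) and parallel redundant blocks (unreliabilities multiply). A minimal sketch, with illustrative failure rates that are not taken from the paper:

```python
import math

def r_exp(lam, t):
    # Exponential reliability: probability a component survives to time t.
    return math.exp(-lam * t)

def series(*rs):
    # Series block fails if any component fails.
    out = 1.0
    for r in rs:
        out *= r
    return out

def parallel(*rs):
    # Parallel (redundant) block fails only if all components fail.
    out = 1.0
    for r in rs:
        out *= (1.0 - r)
    return 1.0 - out

t = 8760.0                    # one year, in hours
r_panel    = r_exp(1e-6, t)   # PV module string (illustrative rate)
r_inverter = r_exp(5e-6, t)   # inverter
r_xfmr     = r_exp(2e-6, t)   # transformer

# Array feeding two redundant inverters and one transformer:
r_system = series(r_panel, parallel(r_inverter, r_inverter), r_xfmr)
print(round(r_system, 4))
```

Recomputing r_system with one component's rate perturbed identifies the critical components on which maintenance should focus, in the spirit of the fault-tree analysis above.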
Large-scale brightenings associated with flares
NASA Technical Reports Server (NTRS)
Mandrini, Cristina H.; Machado, Marcos E.
1992-01-01
It is shown that large-scale brightenings (LSBs) associated with solar flares, similar to the 'giant arches' discovered by Svestka et al. (1982) in images obtained by the SMM HXIS hours after the onset of two-ribbon flares, can also occur in association with confined flares in complex active regions. For these events, a clear link between the LSB and the underlying flare is evident from the active-region magnetic field topology. The implications of these findings are discussed within the framework of the interacting loops of flares and the giant arch phenomenology.
Large scale phononic metamaterials for seismic isolation
Aravantinos-Zafiris, N.; Sigalas, M. M.
2015-08-14
In this work, we numerically examine structures that could be characterized as large scale phononic metamaterials. These novel structures could have band gaps in the frequency spectrum of seismic waves when their dimensions are chosen appropriately, thus raising the belief that they could be serious candidates for seismic isolation structures. Different and easy to fabricate structures were examined made from construction materials such as concrete and steel. The well-known finite difference time domain method is used in our calculations in order to calculate the band structures of the proposed metamaterials.
Large-scale dynamics and global warming
Held, I. M.
1993-02-01
Predictions of future climate change raise a variety of issues in large-scale atmospheric and oceanic dynamics. Several of these are reviewed in this essay, including the sensitivity of the circulation of the Atlantic Ocean to increasing freshwater input at high latitudes; the possibility of greenhouse cooling in the southern oceans; the sensitivity of monsoonal circulations to differential warming of the two hemispheres; the response of midlatitude storms to changing temperature gradients and increasing water vapor in the atmosphere; and the possible importance of positive feedback between the mean winds and eddy-induced heating in the polar stratosphere.
Neutrinos and large-scale structure
Eisenstein, Daniel J.
2015-07-15
I review the use of cosmological large-scale structure to measure properties of neutrinos and other relic populations of light relativistic particles. With experiments to measure the anisotropies of the cosmic microwave background and the clustering of matter at low redshift, we now have securely measured a relativistic background with density appropriate to the cosmic neutrino background. Our limits on the mass of the neutrino continue to shrink. Experiments coming in the next decade will greatly improve the available precision on searches for the energy density of novel relativistic backgrounds and the mass of neutrinos.
Experimental Simulations of Large-Scale Collisions
NASA Technical Reports Server (NTRS)
Housen, Kevin R.
2002-01-01
This report summarizes research on the effects of target porosity on the mechanics of impact cratering. Impact experiments conducted on a centrifuge provide direct simulations of large-scale cratering on porous asteroids. The experiments show that large craters in porous materials form mostly by compaction, with essentially no deposition of material into the ejecta blanket that is a signature of cratering in less-porous materials. The ratio of ejecta mass to crater mass is shown to decrease with increasing crater size or target porosity. These results are consistent with the observation that large closely-packed craters on asteroid Mathilde appear to have formed without degradation to earlier craters.
Large-Scale PV Integration Study
Lu, Shuai; Etingov, Pavel V.; Diao, Ruisheng; Ma, Jian; Samaan, Nader A.; Makarov, Yuri V.; Guo, Xinxin; Hafen, Ryan P.; Jin, Chunlian; Kirkham, Harold; Shlatz, Eugene; Frantzis, Lisa; McClive, Timothy; Karlson, Gregory; Acharya, Dhruv; Ellis, Abraham; Stein, Joshua; Hansen, Clifford; Chadliev, Vladimir; Smart, Michael; Salgo, Richard; Sorensen, Rahn; Allen, Barbara; Idelchik, Boris
2011-07-29
This research effort evaluates the impact of large-scale photovoltaic (PV) and distributed generation (DG) output on NV Energy’s electric grid system in southern Nevada. It analyzes the ability of NV Energy’s generation to accommodate increasing amounts of utility-scale PV and DG, and the resulting cost of integrating variable renewable resources. The study was jointly funded by the United States Department of Energy and NV Energy, and conducted by a project team comprised of industry experts and research scientists from Navigant Consulting Inc., Sandia National Laboratories, Pacific Northwest National Laboratory and NV Energy.
Local gravity and large-scale structure
NASA Technical Reports Server (NTRS)
Juszkiewicz, Roman; Vittorio, Nicola; Wyse, Rosemary F. G.
1990-01-01
The magnitude and direction of the observed dipole anisotropy of the galaxy distribution can in principle constrain the amount of large-scale power present in the spectrum of primordial density fluctuations. This paper confronts the data, provided by a recent redshift survey of galaxies detected by the IRAS satellite, with the predictions of two cosmological models with very different levels of large-scale power: the biased Cold Dark Matter dominated model (CDM) and a baryon-dominated model (BDM) with isocurvature initial conditions. Model predictions are investigated for the Local Group peculiar velocity, v(R), induced by mass inhomogeneities distributed out to a given radius, R, for R less than about 10,000 km/s. Several convergence measures for v(R) are developed, which can become powerful cosmological tests when deep enough samples become available. For the present data sets, the CDM and BDM predictions are indistinguishable at the 2 sigma level and both are consistent with observations. A promising discriminant between cosmological models is the misalignment angle between v(R) and the apex of the dipole anisotropy of the microwave background.
Discrete-time neural inverse optimal control for nonlinear systems via passivation.
Ornelas-Tellez, Fernando; Sanchez, Edgar N; Loukianov, Alexander G
2012-08-01
This paper presents a discrete-time inverse optimal neural controller, which combines two techniques: 1) inverse optimal control, to avoid solving the Hamilton-Jacobi-Bellman equation associated with nonlinear system optimal control, and 2) on-line neural identification, using a recurrent neural network trained with an extended Kalman filter, in order to build a model of the assumed unknown nonlinear system. The inverse optimal controller is based on passivity theory. The applicability of the proposed approach is illustrated via simulations for an unstable nonlinear system and a planar robot. PMID:24807528
Statistical analysis of large-scale neuronal recording data
Reed, Jamie L.; Kaas, Jon H.
2010-01-01
Relating stimulus properties to the response properties of individual neurons and neuronal networks is a major goal of sensory research. Many investigators implant electrode arrays in multiple brain areas and record from chronically implanted electrodes over time to answer a variety of questions. Technical challenges related to analyzing large-scale neuronal recording data are not trivial. Several analysis methods traditionally used by neurophysiologists do not account for dependencies in the data that are inherent in multi-electrode recordings. In addition, when neurophysiological data are not best modeled by the normal distribution and when the variables of interest may not be linearly related, extensions of the linear modeling techniques are recommended. A variety of methods exist to analyze correlated data, even when data are not normally distributed and the relationships are nonlinear. Here we review expansions of the Generalized Linear Model designed to address these data properties. Such methods are used in other research fields, and the application to large-scale neuronal recording data will enable investigators to determine the variable properties that convincingly contribute to the variances in the observed neuronal measures. Standard measures of neuron properties such as response magnitudes can be analyzed using these methods, and measures of neuronal network activity such as spike timing correlations can be analyzed as well. We have done just that in recordings from 100-electrode arrays implanted in the primary somatosensory cortex of owl monkeys. Here we illustrate how one example method, Generalized Estimating Equations analysis, is a useful method to apply to large-scale neuronal recordings. PMID:20472395
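The core statistical point above, that multi-electrode recordings violate the independence assumption of standard analyses, can be shown with a simplified numerical example. Observations sharing an electrode are correlated, so the naive i.i.d. standard error understates uncertainty; a cluster-robust estimate in the spirit of GEE with an exchangeable working correlation accounts for it. The simulated numbers below are illustrative, not recording data.

```python
import numpy as np

rng = np.random.default_rng(42)
n_electrodes, n_trials = 100, 20
electrode_effect = rng.standard_normal(n_electrodes)   # shared per electrode
y = electrode_effect[:, None] + 0.5 * rng.standard_normal((n_electrodes, n_trials))

# Naive SE treats all 2000 samples as independent.
se_naive = y.std(ddof=1) / np.sqrt(y.size)

# Cluster-robust SE: aggregate to electrode means first, then compute
# the standard error across electrodes (the independent units).
cluster_means = y.mean(axis=1)
se_cluster = cluster_means.std(ddof=1) / np.sqrt(n_electrodes)

print(se_naive, se_cluster)  # within-electrode correlation inflates the SE
```

A full GEE analysis additionally handles non-normal outcomes and nonlinear link functions, but the clustering correction is the essential departure from the traditional methods criticized in the abstract.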
Simulating the large-scale structure of HI intensity maps
NASA Astrophysics Data System (ADS)
Seehars, Sebastian; Paranjape, Aseem; Witzemann, Amadeus; Refregier, Alexandre; Amara, Adam; Akeret, Joel
2016-03-01
Intensity mapping of neutral hydrogen (HI) is a promising observational probe of cosmology and large-scale structure. We present wide field simulations of HI intensity maps based on N-body simulations of a 2.6 Gpc/h box with 2048³ particles (particle mass 1.6 × 10¹¹ Msolar/h). Using a conditional mass function to populate the simulated dark matter density field with halos below the mass resolution of the simulation (10⁸ Msolar/h < Mhalo < 10¹³ Msolar/h), we assign HI to those halos according to a phenomenological halo to HI mass relation. The simulations span a redshift range of 0.35 ≲ z ≲ 0.9 in redshift bins of width Δz ≈ 0.05 and cover a quarter of the sky at an angular resolution of about 7'. We use the simulated intensity maps to study the impact of non-linear effects and redshift space distortions on the angular clustering of HI. Focusing on the autocorrelations of the maps, we apply and compare several estimators for the angular power spectrum and its covariance. We verify that these estimators agree with analytic predictions on large scales and study the validity of approximations based on Gaussian random fields, particularly in the context of the covariance. We discuss how our results and the simulated maps can be useful for planning and interpreting future HI intensity mapping surveys.
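The estimator-validation step described above can be illustrated in miniature: draw a Gaussian random field with a known power spectrum, estimate the spectrum with a simple periodogram, and average over realizations. This 1D toy (with an assumed P(k) ∝ k⁻² shape and made-up grid parameters) stands in for the full curved-sky angular power spectrum machinery of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
k = np.fft.rfftfreq(n, d=1.0) * 2 * np.pi
p_true = np.zeros_like(k)
p_true[1:] = k[1:] ** -2.0          # input spectrum, zero mean mode

est = np.zeros_like(k)
n_real = 400
for _ in range(n_real):
    # Draw complex Fourier modes with E|m_k|^2 = P(k).
    amp = np.sqrt(p_true / 2)
    modes = amp * (rng.standard_normal(len(k)) + 1j * rng.standard_normal(len(k)))
    field = np.fft.irfft(modes, n=n)
    est += np.abs(np.fft.rfft(field)) ** 2   # periodogram of one realization
est /= n_real

# Averaged over realizations, the periodogram recovers the input shape.
print(est[5] / est[20], p_true[5] / p_true[20])
```

The scatter of the per-realization periodograms around the mean is the Gaussian-field covariance whose validity the paper tests against the simulated HI maps.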
Large scale reconstruction of the solar coronal magnetic field
NASA Astrophysics Data System (ADS)
Amari, T.; Aly, J.-J.; Chopin, P.; Canou, A.; Mikic, Z.
2014-10-01
It is now becoming necessary to access the global magnetic structure of the solar low corona at large scale in order to understand its physics, and in particular the conditions of energization of the magnetic fields and the multiple connections between distant active regions (ARs) which may trigger eruptive events in an almost coordinated way. Various vector magnetographs, either on board spacecraft or ground-based, currently allow one to obtain vector synoptic maps, composite magnetograms made of multiple interacting ARs, and full disk magnetograms. We present a method recently developed for reconstructing the global solar coronal magnetic field as a nonlinear force-free magnetic field in spherical geometry, generalizing our previous results in Cartesian geometry. This method is implemented in the new code XTRAPOLS, which thus appears as an extension of our active region scale code XTRAPOL. We apply our method by performing a reconstruction at a specific time for which we have a set of composite data: a vector magnetogram provided by SDO/HMI, embedded in a larger full disk vector magnetogram provided by the same instrument, in turn embedded in a synoptic map provided by SOLIS. It turns out to be possible to access the large scale structure of the corona and its energetic contents, and also the AR scale, at which we recover the presence of a twisted flux rope in equilibrium.
NASA Technical Reports Server (NTRS)
Storaasli, Olaf O. (Editor); Housner, Jerrold M. (Editor)
1993-01-01
Computing speed is leaping forward by several orders of magnitude each decade. Engineers and scientists gathered at a NASA Langley symposium to discuss these exciting trends as they apply to parallel computational methods for large-scale structural analysis and design. Among the topics discussed were: large-scale static analysis; dynamic, transient, and thermal analysis; domain decomposition (substructuring); and nonlinear and numerical methods.
Ridzal, Danis
2007-03-01
Aristos is a Trilinos package for nonlinear continuous optimization, based on full-space sequential quadratic programming (SQP) methods. Aristos is specifically designed for the solution of large-scale constrained optimization problems in which the linearized constraint equations require iterative (i.e. inexact) linear solver techniques. Aristos' unique feature is an efficient handling of inexactness in linear system solves. Aristos currently supports the solution of equality-constrained convex and nonconvex optimization problems. It has been used successfully in the area of PDE-constrained optimization, for the solution of nonlinear optimal control, optimal design, and inverse problems.
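The full-space SQP idea behind Aristos can be sketched on a tiny problem: for min f(x) subject to c(x) = 0, each iteration solves the KKT system for the step and multiplier. The example below (f(x) = x₀² + x₁², c(x) = x₀ + x₁ - 1, dense direct solve) is a minimal illustration; Aristos itself targets large-scale problems where this linear system must be solved inexactly with iterative methods.

```python
import numpy as np

def f_grad(x):  return 2 * x                  # gradient of x0^2 + x1^2
def f_hess(x):  return 2 * np.eye(2)          # Hessian of the objective
def c(x):       return np.array([x[0] + x[1] - 1.0])
def c_jac(x):   return np.array([[1.0, 1.0]])

x = np.array([2.0, 0.0])
for _ in range(5):
    H, A = f_hess(x), c_jac(x)
    # KKT system: [H A^T; A 0] [dx; lam] = [-grad f; -c]
    kkt = np.block([[H, A.T], [A, np.zeros((1, 1))]])
    rhs = np.concatenate([-f_grad(x), -c(x)])
    sol = np.linalg.solve(kkt, rhs)
    x = x + sol[:2]
print(np.round(x, 6))  # the constrained minimizer (0.5, 0.5)
```

Because the objective is quadratic and the constraint linear, a single SQP step already lands on the solution; for nonlinear constraints the iteration, and the cost of each KKT solve, is where inexact linear solver techniques pay off.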
Engineering management of large scale systems
NASA Technical Reports Server (NTRS)
Sanders, Serita; Gill, Tepper L.; Paul, Arthur S.
1989-01-01
The organization of high technology and engineering problem solving has given rise to an emerging concept. Reasoning principles for integrating traditional engineering problem solving with system theory, management sciences, behavioral decision theory, and planning and design approaches can be incorporated into a methodological approach to solving problems with a long range perspective. Long range planning has great potential to improve productivity by using a systematic and organized approach. Thus, efficiency and cost effectiveness are the driving forces in promoting the organization of engineering problems. Aspects of systems engineering that provide an understanding of management of large scale systems are broadly covered here. Due to the focus and application of the research, other significant factors (e.g., human behavior, decision making, etc.) are not emphasized but are considered.
Large scale study of tooth enamel
Bodart, F.; Deconninck, G.; Martin, M.Th.
1981-04-01
Human tooth enamel contains traces of foreign elements. The presence of these elements is related to the history and the environment of the human body and can be considered as the signature of perturbations which occur during the growth of a tooth. A map of the distribution of these traces on a large scale sample of the population will constitute a reference for further investigations of environmental effects. One hundred eighty samples of teeth were first analysed using PIXE, backscattering and nuclear reaction techniques. The results were analysed using statistical methods. Correlations between O, F, Na, P, Ca, Mn, Fe, Cu, Zn, Pb and Sr were observed and cluster analysis was in progress. The techniques described in the present work have been developed in order to establish a method for the exploration of very large samples of the Belgian population.
Batteries for Large Scale Energy Storage
Soloveichik, Grigorii L.
2011-07-15
In recent years, with the deployment of renewable energy sources, advances in electrified transportation, and development in smart grids, the markets for large-scale stationary energy storage have grown rapidly. Electrochemical energy storage methods are strong candidate solutions due to their high energy density, flexibility, and scalability. This review provides an overview of mature and emerging technologies for secondary and redox flow batteries. New developments in the chemistry of secondary and flow batteries as well as regenerative fuel cells are also considered. Advantages and disadvantages of current and prospective electrochemical energy storage options are discussed. The most promising technologies in the short term are high-temperature sodium batteries with β”-alumina electrolyte, lithium-ion batteries, and flow batteries. Regenerative fuel cells and lithium metal batteries with high energy density require further research to become practical.
Large-scale databases of proper names.
Conley, P; Burgess, C; Hage, D
1999-05-01
Few tools for research in proper names have been available--specifically, there is no large-scale corpus of proper names. Two corpora of proper names were constructed, one based on U.S. phone book listings, the other derived from a database of Usenet text. Name frequencies from both corpora were compared with human subjects' reaction times (RTs) to the proper names in a naming task. Regression analysis showed that the Usenet frequencies contributed to predictions of human RT, whereas phone book frequencies did not. In addition, semantic neighborhood density measures derived from the HAL corpus were compared with the subjects' RTs and found to be a better predictor of RT than was frequency in either corpus. These new corpora are freely available on line for download. Potentials for these corpora range from using the names as stimuli in experiments to using the corpus data in software applications. PMID:10495803
Large Scale Quantum Simulations of Nuclear Pasta
NASA Astrophysics Data System (ADS)
Fattoyev, Farrukh J.; Horowitz, Charles J.; Schuetrumpf, Bastian
2016-03-01
Complex and exotic nuclear geometries collectively referred to as "nuclear pasta" are expected to naturally exist in the crust of neutron stars and in supernovae matter. Using a set of self-consistent microscopic nuclear energy density functionals we present the first results of large scale quantum simulations of pasta phases at baryon densities 0.03 < ρ < 0.10 fm⁻³, proton fractions 0.05
Large-scale simulations of reionization
Kohler, Katharina; Gnedin, Nickolay Y.; Hamilton, Andrew J.S.; /JILA, Boulder
2005-11-01
We use cosmological simulations to explore the large-scale effects of reionization. Since reionization is a process that involves a large dynamic range--from galaxies to rare bright quasars--we need to be able to cover a significant volume of the universe in our simulation without losing the important small scale effects from galaxies. Here we have taken an approach that uses clumping factors derived from small scale simulations to approximate the radiative transfer on the sub-cell scales. Using this technique, we can cover a simulation size up to 1280 h⁻¹ Mpc with 10 h⁻¹ Mpc cells. This allows us to construct synthetic spectra of quasars similar to observed spectra of SDSS quasars at high redshifts and compare them to the observational data. These spectra can then be analyzed for HII region sizes, the presence of the Gunn-Peterson trough, and the Lyman-α forest.
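The clumping-factor approximation described above is simple to state concretely: a sub-grid clumping factor C = ⟨ρ²⟩/⟨ρ⟩², measured in small-scale simulations, quantifies how much unresolved structure boosts the effective recombination rate in a coarse radiative-transfer cell. A minimal sketch on toy fields (not the paper's simulation data):

```python
import numpy as np

def clumping_factor(density):
    """Sub-grid clumping factor C = <rho^2> / <rho>^2.

    C = 1 for a perfectly uniform field and C > 1 otherwise; a large C
    boosts the effective recombination rate assigned to a coarse cell.
    """
    d = np.asarray(density, dtype=float)
    return np.mean(d**2) / np.mean(d)**2

rng = np.random.default_rng(0)
uniform = np.ones(1000)
clumpy = rng.lognormal(mean=0.0, sigma=1.0, size=100000)

print(clumping_factor(uniform))   # 1.0 for a uniform field
print(clumping_factor(clumpy))    # close to e ~ 2.72 for this lognormal toy field
```

For a lognormal field with log-variance σ², the exact value is C = exp(σ²), which the sample estimate above approaches.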
Large scale water lens for solar concentration.
Mondol, A S; Vogel, B; Bastian, G
2015-06-01
Properties of large scale water lenses for solar concentration were investigated. These lenses were built from readily available materials, normal tap water and hyper-elastic linear low density polyethylene foil. Exposed to sunlight, the focal lengths and light intensities in the focal spot were measured and calculated. Their optical properties were modeled with a raytracing software based on the lens shape. We have achieved a good match of experimental and theoretical data by considering wavelength dependent concentration factor, absorption and focal length. The change in light concentration as a function of water volume was examined via the resulting load on the foil and the corresponding change of shape. The latter was extracted from images and modeled by a finite element simulation. PMID:26072893
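As a rough orientation for the optics involved, the thin-lens formula gives a first estimate of focal length from lens curvature; the paper's actual analysis uses raytracing and a finite element model of the elastic foil, so this is only an order-of-magnitude sketch with assumed numbers:

```python
def water_lens_focal_length(radius_m, n=1.333):
    """Thin plano-convex lens estimate f = R / (n - 1), using the
    refractive index of water (n ~ 1.333). The real foil lens is
    elastic, so its curvature R changes with the fill volume, which
    is why the paper models the shape with finite elements."""
    return radius_m / (n - 1.0)

print(round(water_lens_focal_length(0.5), 2))  # 1.5 (metres, for R = 0.5 m)
```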
Large scale structures in transitional pipe flow
NASA Astrophysics Data System (ADS)
Hellström, Leo; Ganapathisubramani, Bharathram; Smits, Alexander
2015-11-01
We present a dual-plane snapshot POD analysis of transitional pipe flow at a Reynolds number of 3440, based on the pipe diameter. The time-resolved high-speed PIV data were simultaneously acquired in two planes, a cross-stream plane (2D-3C) and a streamwise plane (2D-2C) on the pipe centerline. The two light sheets were orthogonally polarized, allowing particles situated in each plane to be viewed independently. In the snapshot POD analysis, the modal energy is based on the cross-stream plane, while the POD modes are calculated using the dual-plane data. We present results on the emergence and decay of the energetic large scale motions during transition to turbulence, and compare these motions to those observed in fully developed turbulent flow. Supported under ONR Grant N00014-13-1-0174 and ERC Grant No. 277472.
Challenges in large scale distributed computing: bioinformatics.
Disz, T.; Kubal, M.; Olson, R.; Overbeek, R.; Stevens, R.; Mathematics and Computer Science; Univ. of Chicago; The Fellowship for the Interpretation of Genomes
2005-01-01
The amount of genomic data available for study is increasing at a rate similar to that of Moore's law. This deluge of data is challenging bioinformaticians to develop newer, faster and better algorithms for analysis and examination of this data. The growing availability of large scale computing grids coupled with high-performance networking is challenging computer scientists to develop better, faster methods of exploiting parallelism in these biological computations and deploying them across computing grids. In this paper, we describe two computations that are required to be run frequently and which require large amounts of computing resource to complete in a reasonable time. The data for these computations are very large and the sequential computational time can exceed thousands of hours. We show the importance and relevance of these computations, the nature of the data and parallelism and we show how we are meeting the challenge of efficiently distributing and managing these computations in the SEED project.
The challenge of large-scale structure
NASA Astrophysics Data System (ADS)
Gregory, S. A.
1996-03-01
The tasks that I have assumed for myself in this presentation include three separate parts. The first, appropriate to the particular setting of this meeting, is to review the basic work of the founding of this field; the appropriateness comes from the fact that W. G. Tifft made immense contributions that are not often realized by the astronomical community. The second task is to outline the general tone of the observational evidence for large scale structures. (Here, in particular, I cannot claim to be complete. I beg forgiveness from any workers who are left out by my oversight for lack of space and time.) The third task is to point out some of the major aspects of the field that may represent the clues by which some brilliant sleuth will ultimately figure out how galaxies formed.
Large-Scale Astrophysical Visualization on Smartphones
NASA Astrophysics Data System (ADS)
Becciani, U.; Massimino, P.; Costa, A.; Gheller, C.; Grillo, A.; Krokos, M.; Petta, C.
2011-07-01
Nowadays digital sky surveys and long-duration, high-resolution numerical simulations using high performance computing and grid systems produce multidimensional astrophysical datasets in the order of several Petabytes. Sharing visualizations of such datasets within communities and collaborating research groups is of paramount importance for disseminating results and advancing astrophysical research. Moreover educational and public outreach programs can benefit greatly from novel ways of presenting these datasets by promoting understanding of complex astrophysical processes, e.g., formation of stars and galaxies. We have previously developed VisIVO Server, a grid-enabled platform for high-performance large-scale astrophysical visualization. This article reviews the latest developments on VisIVO Web, a custom designed web portal wrapped around VisIVO Server, then introduces VisIVO Smartphone, a gateway connecting VisIVO Web and data repositories for mobile astrophysical visualization. We discuss current work and summarize future developments.
The XMM Large Scale Structure Survey
NASA Astrophysics Data System (ADS)
Pierre, Marguerite
2005-10-01
We propose to complete, by an additional 5 deg2, the XMM-LSS Survey region overlying the Spitzer/SWIRE field. This field already has CFHTLS and Integral coverage, and will encompass about 10 deg2. The resulting multi-wavelength medium-depth survey, which complements XMM and Chandra deep surveys, will provide a unique view of large-scale structure over a wide range of redshift, and will show active galaxies in the full range of environments. The complete coverage by optical and IR surveys provides high-quality photometric redshifts, so that cosmological results can quickly be extracted. In the spirit of a Legacy survey, we will make the raw X-ray data immediately public. Multi-band catalogues and images will also be made available on short time scales.
Shock waves in the large scale structure of the universe
NASA Astrophysics Data System (ADS)
Ryu, Dongsu
Cosmological shock waves result from the supersonic flow motions induced by hierarchical formation of nonlinear structures in the universe. Like most astrophysical shocks, they are collisionless shocks which form in the tenuous intergalactic plasma via collective electromagnetic interactions between particles and electromagnetic fields. The gravitational energy released during the structure formation is transferred by these shocks to the intergalactic gas in several different forms. In addition to the gas entropy, cosmic rays are produced via diffusive shock acceleration, magnetic fields are generated via the Biermann battery mechanism and Weibel instability as well as the Bell-Lucek mechanism, and vorticity is generated at curved shocks. Here we review the properties, roles, and consequences of the shock waves in the context of the large scale structure of the universe.
Shock Waves in the Large Scale Structure of the Universe
NASA Astrophysics Data System (ADS)
Ryu, Dongsu
2008-04-01
Cosmological shock waves result from the supersonic flow motions induced by hierarchical formation of nonlinear structures in the universe. Like most astrophysical shocks, they are collisionless shocks which form in the tenuous intergalactic plasma via collective electromagnetic interactions between particles and electromagnetic fields. The gravitational energy released during the structure formation is transferred by these shocks to the intergalactic gas in several different forms: in addition to the gas entropy, cosmic rays are produced via diffusive shock acceleration, magnetic fields are generated via the Biermann battery mechanism and Weibel instability, and vorticity is generated at curved shocks. Here I review the properties, roles, and consequences of the shock waves in the context of the large scale structure of the universe.
Statistics of Caustics in Large-Scale Structure Formation
NASA Astrophysics Data System (ADS)
Feldbrugge, Job L.; Hidding, Johan; van de Weygaert, Rien
2016-10-01
The cosmic web is a complex spatial pattern of walls, filaments, cluster nodes and underdense void regions. It emerged through gravitational amplification from the Gaussian primordial density field. Here we infer analytical expressions for the spatial statistics of caustics in the evolving large-scale mass distribution. In our analysis, following the quasi-linear Zel'dovich formalism and confined to the 1D and 2D situation, we compute number density and correlation properties of caustics in cosmic density fields that evolve from Gaussian primordial conditions. The analysis can be straightforwardly extended to the 3D situation. Moreover, we are currently extending the approach to the non-linear regime of structure formation by including higher order Lagrangian approximations and Lagrangian effective field theory.
Large-scale structure non-Gaussianities with modal methods
NASA Astrophysics Data System (ADS)
Schmittfull, Marcel
2016-10-01
Relying on a separable modal expansion of the bispectrum, the implementation of a fast estimator for the full bispectrum of a 3d particle distribution is presented. The computational cost of accurate bispectrum estimation is negligible relative to simulation evolution, so the bispectrum can be used as a standard diagnostic whenever the power spectrum is evaluated. As an application, the time evolution of gravitational and primordial dark matter bispectra was measured in a large suite of N-body simulations. The bispectrum shape changes characteristically when the cosmic web becomes dominated by filaments and halos, therefore providing a quantitative probe of 3d structure formation. Our measured bispectra are determined by ~ 50 coefficients, which can be used as fitting formulae in the nonlinear regime and for non-Gaussian initial conditions. We also compare the measured bispectra with predictions from the Effective Field Theory of Large Scale Structures (EFTofLSS).
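To make the estimated quantity concrete, here is a direct (non-modal) bispectrum estimate for a single triangle of modes on a 1D periodic grid; the separable modal expansion described above exists precisely to avoid this triangle-by-triangle cost. The field below is a constructed toy example, not simulation data:

```python
import numpy as np

def bispectrum_1d(field, k1, k2):
    """Direct estimate of the 1D bispectrum
    B(k1, k2) = Re[ d(k1) d(k2) d*(k1 + k2) ] / N
    for integer wavenumbers on a periodic grid. B is nonzero only
    for phase-coherent closed triangles k1 + k2 = k3, which is why
    it probes non-Gaussianity that the power spectrum misses."""
    dk = np.fft.fft(field)
    n = len(field)
    return (dk[k1 % n] * dk[k2 % n] * np.conj(dk[(k1 + k2) % n])).real / n

n = 64
x = np.arange(n)
# three phase-coherent modes closing the triangle 2 + 3 = 5
field = (np.cos(2 * np.pi * 2 * x / n)
         + np.cos(2 * np.pi * 3 * x / n)
         + np.cos(2 * np.pi * 5 * x / n))

print(bispectrum_1d(field, 2, 3))  # n**2 / 8 = 512, up to floating-point error
print(bispectrum_1d(field, 2, 2))  # ~ 0: the triangle 2 + 2 = 4 does not close
```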
Nonzero Density-Velocity Consistency Relations for Large Scale Structures.
Rizzo, Luca Alberto; Mota, David F; Valageas, Patrick
2016-08-19
We present exact kinematic consistency relations for cosmological structures that do not vanish at equal times and can thus be measured in surveys. These rely on cross correlations between the density and velocity, or momentum, fields. Indeed, the uniform transport of small-scale structures by long-wavelength modes, which cannot be detected at equal times by looking at density correlations only, gives rise to a shift in the amplitude of the velocity field that could be measured. These consistency relations only rely on the weak equivalence principle and Gaussian initial conditions. They remain valid in the nonlinear regime and for biased galaxy fields. They can be used to constrain nonstandard cosmological scenarios or the large-scale galaxy bias. PMID:27588842
Nonzero Density-Velocity Consistency Relations for Large Scale Structures
NASA Astrophysics Data System (ADS)
Rizzo, Luca Alberto; Mota, David F.; Valageas, Patrick
2016-08-01
We present exact kinematic consistency relations for cosmological structures that do not vanish at equal times and can thus be measured in surveys. These rely on cross correlations between the density and velocity, or momentum, fields. Indeed, the uniform transport of small-scale structures by long-wavelength modes, which cannot be detected at equal times by looking at density correlations only, gives rise to a shift in the amplitude of the velocity field that could be measured. These consistency relations only rely on the weak equivalence principle and Gaussian initial conditions. They remain valid in the nonlinear regime and for biased galaxy fields. They can be used to constrain nonstandard cosmological scenarios or the large-scale galaxy bias.
Introducing Large-Scale Innovation in Schools
NASA Astrophysics Data System (ADS)
Sotiriou, Sofoklis; Riviou, Katherina; Cherouvis, Stephanos; Chelioti, Eleni; Bogner, Franz X.
2016-08-01
Education reform initiatives tend to promise higher effectiveness in classrooms especially when emphasis is given to e-learning and digital resources. Practical changes in classroom realities or school organization, however, are lacking. A major European initiative entitled Open Discovery Space (ODS) examined the challenge of modernizing school education via a large-scale implementation of an open-scale methodology in using technology-supported innovation. The present paper describes this innovation scheme which involved schools and teachers all over Europe, embedded technology-enhanced learning into wider school environments and provided training to teachers. Our implementation scheme consisted of three phases: (1) stimulating interest, (2) incorporating the innovation into school settings and (3) accelerating the implementation of the innovation. The scheme's impact was monitored for a school year using five indicators: leadership and vision building, ICT in the curriculum, development of ICT culture, professional development support, and school resources and infrastructure. Based on about 400 schools, our study produced four results: (1) The growth in digital maturity was substantial, even for previously high scoring schools. This was even more important for indicators such as "vision and leadership" and "professional development." (2) The evolution of networking is presented graphically, showing the gradual growth of connections achieved. (3) These communities became core nodes, involving numerous teachers in sharing educational content and experiences: One out of three registered users (36 %) has shared his/her educational resources in at least one community. (4) Satisfaction scores ranged from 76 % (offer of useful support through teacher academies) to 87 % (good environment to exchange best practices). Initiatives such as ODS add substantial value to schools on a large scale.
Supporting large-scale computational science
Musick, R
1998-10-01
A study has been carried out to determine the feasibility of using commercial database management systems (DBMSs) to support large-scale computational science. Conventional wisdom in the past has been that DBMSs are too slow for such data. Several events over the past few years have muddied the clarity of this mindset: (1) several commercial DBMS systems have demonstrated storage and ad-hoc query access to Terabyte data sets; (2) several large-scale science teams, such as EOSDIS [NAS91], high energy physics [MM97] and human genome [Kin93], have adopted (or make frequent use of) commercial DBMS systems as the central part of their data management scheme; (3) several major DBMS vendors have introduced their first object-relational products (ORDBMSs), which have the potential to support large, array-oriented data; and (4) in some cases, performance is a moot issue, in particular if the performance of legacy applications is not reduced while new, albeit slow, capabilities are added to the system. The basic assessment is still that DBMSs do not scale to large computational data. However, many of the reasons have changed, and there is an expiration date attached to that prognosis. This document expands on this conclusion, identifies the advantages and disadvantages of various commercial approaches, and describes the studies carried out in exploring this area. The document is meant to be brief, technical and informative, rather than a motivational pitch. The conclusions within are very likely to become outdated within the next 5-7 years, as market forces will have a significant impact on the state of the art in scientific data management over the next decade.
NASA Technical Reports Server (NTRS)
Pavarini, C.
1974-01-01
Work in two somewhat distinct areas is presented. First, the optimal system design problem for a Mars-roving vehicle is attacked by creating static system models and a system evaluation function and optimizing via nonlinear programming techniques. The second area concerns the problem of perturbed-optimal solutions. Given an initial perturbation in an element of the solution to a nonlinear programming problem, a linear method is determined to approximate the optimal readjustments of the other elements of the solution. Then, the sensitivity of the Mars rover designs is described by application of this method.
Large-scale Ising spin network based on degenerate optical parametric oscillators
NASA Astrophysics Data System (ADS)
Inagaki, Takahiro; Inaba, Kensuke; Hamerly, Ryan; Inoue, Kyo; Yamamoto, Yoshihisa; Takesue, Hiroki
2016-06-01
Solving combinatorial optimization problems is becoming increasingly important in modern society, where the analysis and optimization of unprecedentedly complex systems are required. Many such problems can be mapped onto the ground-state-search problem of the Ising Hamiltonian, and simulating the Ising spins with physical systems is now emerging as a promising approach for tackling such problems. Here, we report a large-scale network of artificial spins based on degenerate optical parametric oscillators (DOPOs), paving the way towards a photonic Ising machine capable of solving difficult combinatorial optimization problems. We generate >10,000 time-division-multiplexed DOPOs using dual-pump four-wave mixing in a highly nonlinear fibre placed in a cavity. Using those DOPOs, a one-dimensional Ising model is simulated by introducing nearest-neighbour optical coupling. We observe the formation of spin domains and find that the domain size diverges near the DOPO threshold, which suggests that the DOPO network can simulate the behaviour of low-temperature Ising spins.
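The 1D nearest-neighbour behaviour reported above (aligned-spin domains whose size grows as the system approaches threshold) can be reproduced qualitatively with a conventional Metropolis simulation. This is a software analogue sketch of low-temperature Ising spins, not a model of the DOPO physics:

```python
import numpy as np

def metropolis_1d_ising(n_spins, beta, n_sweeps, rng):
    """Metropolis dynamics for a 1D nearest-neighbour Ising chain
    (periodic boundary, ferromagnetic coupling J = 1)."""
    s = rng.choice(np.array([-1, 1]), size=n_spins)
    for _ in range(n_sweeps):
        for i in rng.integers(0, n_spins, size=n_spins):
            # energy cost of flipping spin i against its two neighbours
            dE = 2 * s[i] * (s[(i - 1) % n_spins] + s[(i + 1) % n_spins])
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                s[i] = -s[i]
    return s

def mean_domain_size(s):
    """Average length of runs of aligned spins on the periodic chain."""
    walls = np.count_nonzero(s != np.roll(s, 1))
    return len(s) / max(walls, 1)

rng = np.random.default_rng(42)
hot = metropolis_1d_ising(2000, beta=0.1, n_sweeps=50, rng=rng)
cold = metropolis_1d_ising(2000, beta=2.0, n_sweeps=50, rng=rng)
# larger aligned domains at the lower effective temperature
print(mean_domain_size(hot), mean_domain_size(cold))
```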
Reconstructing Information in Large-Scale Structure via Logarithmic Mapping
NASA Astrophysics Data System (ADS)
Szapudi, Istvan
We propose to develop a new method to extract information from large-scale structure data combining two-point statistics and non-linear transformations; before, this information was available only with substantially more complex higher-order statistical methods. Initially, most of the cosmological information in large-scale structure lies in two-point statistics. With non-linear evolution, some of that useful information leaks into higher-order statistics. The PI and group have shown in a series of theoretical investigations how that leakage occurs, and explained the Fisher information plateau at smaller scales. This plateau means that even as more modes are added to the measurement of the power spectrum, the total cumulative information (loosely speaking, the inverse errorbar) is not increasing. Recently we have shown in Neyrinck et al. (2009, 2010) that a logarithmic (and a related Gaussianization or Box-Cox) transformation on the non-linear Dark Matter or galaxy field reconstructs a surprisingly large fraction of this missing Fisher information of the initial conditions. This was predicted by the earlier wave mechanical formulation of gravitational dynamics by Szapudi & Kaiser (2003). The present proposal is focused on working out the theoretical underpinning of the method to a point that it can be used in practice to analyze data. In particular, one needs to deal with the usual real-life issues of galaxy surveys, such as complex geometry, discrete sampling (Poisson or sub-Poisson noise), bias (linear or non-linear, deterministic or stochastic), redshift distortions, projection effects for 2D samples, and the effects of photometric redshift errors. We will develop methods for weak lensing and Sunyaev-Zeldovich power spectra as well, the latter specifically targeting Planck. In addition, we plan to investigate the question of residual higher-order information after the non-linear mapping, and possible applications for cosmology. Our aim will be to work out
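The core of the proposed method, the logarithmic mapping itself, is a one-line transformation of the overdensity field. The sketch below applies it to a toy lognormal density field (a common approximation to the non-linear one-point distribution, assumed here purely for illustration) and checks that the mapped field is far closer to Gaussian:

```python
import numpy as np

rng = np.random.default_rng(7)
# toy stand-in for a non-linear density field with lognormal
# one-point statistics; delta = rho/<rho> - 1 is the overdensity
rho = rng.lognormal(mean=0.0, sigma=1.0, size=200000)
delta = rho / rho.mean() - 1.0

def skewness(x):
    """Sample skewness, a simple proxy for non-Gaussianity."""
    x = x - x.mean()
    return np.mean(x**3) / np.mean(x**2) ** 1.5

log_field = np.log(1.0 + delta)  # the logarithmic mapping

print(skewness(delta))      # strongly skewed (~6 for this sigma)
print(skewness(log_field))  # near zero: the mapped field is close to Gaussian
```

Restoring Gaussianity is what recovers the Fisher information: for a (near-)Gaussian field, two-point statistics are again (nearly) sufficient.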
Gálvez, Akemi; Iglesias, Andrés
2013-01-01
Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently. PMID:24376380
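The linear subproblem at the heart of this scheme (knots held fixed while the coefficients are solved by SVD-based least squares) can be sketched as follows; the firefly optimization of the data parameterization and the De Boor knot refinement are omitted, and the coarse knot vector below is a hypothetical choice, not from the paper:

```python
import numpy as np

def bspline_design_matrix(x, t, k):
    """All degree-k B-spline basis functions evaluated at x, via the
    Cox-de Boor recursion, for a clamped knot vector t."""
    x = np.asarray(x, float)
    t = np.asarray(t, float)
    # degree 0: indicator functions of the knot spans
    B = np.zeros((len(x), len(t) - 1))
    for j in range(len(t) - 1):
        B[:, j] = (x >= t[j]) & (x < t[j + 1])
    # close the right end so x == t[-1] falls in the last non-empty span
    B[x == t[-1], np.nonzero(t[:-1] < t[1:])[0].max()] = 1.0
    for d in range(1, k + 1):
        B_next = np.zeros((len(x), len(t) - d - 1))
        for j in range(len(t) - d - 1):
            if t[j + d] > t[j]:
                B_next[:, j] += (x - t[j]) / (t[j + d] - t[j]) * B[:, j]
            if t[j + d + 1] > t[j + 1]:
                B_next[:, j] += ((t[j + d + 1] - x)
                                 / (t[j + d + 1] - t[j + 1]) * B[:, j + 1])
        B = B_next
    return B

k = 3                                  # cubic spline
interior = [0.25, 0.5, 0.75]           # coarse, precomputed (fixed) knots
t = np.concatenate([[0.0] * (k + 1), interior, [1.0] * (k + 1)])

x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x)              # data points to be approximated

A = bspline_design_matrix(x, t, k)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # SVD-based least squares
rms = np.sqrt(np.mean((y - A @ coef) ** 2))
print(rms)  # small residual: the convex subproblem is solved exactly
```

With the knots fixed, the fit is linear in the coefficients, which is exactly why the paper can hand that stage to singular value decomposition.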
Large-Scale Statistics for Cu Electromigration
NASA Astrophysics Data System (ADS)
Hauschildt, M.; Gall, M.; Hernandez, R.
2009-06-01
Even after the successful introduction of Cu-based metallization, the electromigration failure risk has remained one of the important reliability concerns for advanced process technologies. The observation of strong bimodality for the electron up-flow direction in dual-inlaid Cu interconnects has added complexity, but is now widely accepted. The failure voids can occur both within the via ("early" mode) or within the trench ("late" mode). More recently, bimodality has been reported also in down-flow electromigration, leading to very short lifetimes due to small, slit-shaped voids under vias. For a more thorough investigation of these early failure phenomena, specific test structures were designed based on the Wheatstone Bridge technique. The use of these structures enabled an increase of the tested sample size close to 675000, allowing a direct analysis of electromigration failure mechanisms at the single-digit ppm regime. Results indicate that down-flow electromigration exhibits bimodality at very small percentage levels, not readily identifiable with standard testing methods. The activation energy for the down-flow early failure mechanism was determined to be 0.83±0.02 eV. Within the small error bounds of this large-scale statistical experiment, this value is deemed to be significantly lower than the usually reported activation energy of 0.90 eV for electromigration-induced diffusion along Cu/SiCN interfaces. Due to the advantages of the Wheatstone Bridge technique, we were also able to expand the experimental temperature range down to 150° C, coming quite close to typical operating conditions up to 125° C. As a result of the lowered activation energy, we conclude that the down-flow early failure mode may control the chip lifetime at operating conditions. The slit-like character of the early failure void morphology also raises concerns about the validity of the Blech-effect for this mechanism. A very small amount of Cu depletion may cause failure even before a
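The activation-energy extraction follows the standard Arrhenius analysis: median lifetimes measured at several stress temperatures are fit as ln t50 = ln A + Ea/(kB·T), and the slope of ln t50 versus 1/(kB·T) is Ea. A sketch with synthetic lifetimes generated from the 0.83 eV value reported above (hypothetical numbers, not the experiment's data):

```python
import numpy as np

k_B = 8.617e-5                                 # Boltzmann constant, eV/K
T = np.array([150.0, 200.0, 250.0]) + 273.15   # stress temperatures, K
Ea_true = 0.83                                 # eV, used to generate toy data
t50 = 1e-3 * np.exp(Ea_true / (k_B * T))       # ideal Arrhenius lifetimes, h

# ln(t50) = ln(A) + Ea / (k_B * T): the slope of ln t50 vs 1/(k_B T) is Ea
slope, intercept = np.polyfit(1.0 / (k_B * T), np.log(t50), 1)
print(round(slope, 2))  # 0.83
```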
A Nonlinear Physics-Based Optimal Control Method for Magnetostrictive Actuators
NASA Technical Reports Server (NTRS)
Smith, Ralph C.
1998-01-01
This paper addresses the development of a nonlinear optimal control methodology for magnetostrictive actuators. At moderate to high drive levels, the output from these actuators is highly nonlinear and contains significant magnetic and magnetomechanical hysteresis. These dynamics must be accommodated by models and control laws to utilize the full capabilities of the actuators. A characterization based upon ferromagnetic mean field theory provides a model which accurately quantifies both transient and steady state actuator dynamics under a variety of operating conditions. The control method consists of a linear perturbation feedback law used in combination with an optimal open loop nonlinear control. The nonlinear control incorporates the hysteresis and nonlinearities inherent to the transducer and can be computed offline. The feedback control is constructed through linearization of the perturbed system about the optimal system and is efficient for online implementation. As demonstrated through numerical examples, the combined hybrid control is robust and can be readily implemented in linear PDE-based structural models.
Large scale digital atlases in neuroscience
NASA Astrophysics Data System (ADS)
Hawrylycz, M.; Feng, D.; Lau, C.; Kuan, C.; Miller, J.; Dang, C.; Ng, L.
2014-03-01
Imaging in neuroscience has revolutionized our current understanding of brain structure, architecture and increasingly its function. Many characteristics of morphology, cell type, and neuronal circuitry have been elucidated through methods of neuroimaging. Combining this data in a meaningful, standardized, and accessible manner is the scope and goal of the digital brain atlas. Digital brain atlases are used today in neuroscience to characterize the spatial organization of neuronal structures, for planning and guidance during neurosurgery, and as a reference for interpreting other data modalities such as gene expression and connectivity data. The field of digital atlases is extensive and in addition to atlases of the human includes high quality brain atlases of the mouse, rat, rhesus macaque, and other model organisms. Using techniques based on histology, structural and functional magnetic resonance imaging as well as gene expression data, modern digital atlases use probabilistic and multimodal techniques, as well as sophisticated visualization software to form an integrated product. Toward this goal, brain atlases form a common coordinate framework for summarizing, accessing, and organizing this knowledge and will undoubtedly remain a key technology in neuroscience in the future. Since the development of its flagship project of a genome wide image-based atlas of the mouse brain, the Allen Institute for Brain Science has used imaging as a primary data modality for many of its large scale atlas projects. We present an overview of Allen Institute digital atlases in neuroscience, with a focus on the challenges and opportunities for image processing and computation.
Food appropriation through large scale land acquisitions
NASA Astrophysics Data System (ADS)
Rulli, Maria Cristina; D'Odorico, Paolo
2014-05-01
The increasing demand for agricultural products and the uncertainty of international food markets has recently drawn the attention of governments and agribusiness firms toward investments in productive agricultural land, mostly in the developing world. The targeted countries are typically located in regions that have remained only marginally utilized because of lack of modern technology. It is expected that in the long run large scale land acquisitions (LSLAs) for commercial farming will bring the technology required to close the existing crop yield gaps. While the extent of the acquired land and the associated appropriation of freshwater resources have been investigated in detail, the amount of food this land can produce and the number of people it could feed still need to be quantified. Here we use a unique dataset of land deals to provide a global quantitative assessment of the rates of crop and food appropriation potentially associated with LSLAs. We show how up to 300-550 million people could be fed by crops grown in the acquired land, should these investments in agriculture improve crop production and close the yield gap. In contrast, about 190-370 million people could be supported by this land without closing the yield gap. These numbers raise some concern because the food produced in the acquired land is typically exported to other regions, while the target countries exhibit high levels of malnourishment. Conversely, if used for domestic consumption, the crops harvested in the acquired land could ensure food security for the local populations.
Large-scale carbon fiber tests
NASA Technical Reports Server (NTRS)
Pride, R. A.
1980-01-01
A realistic release of carbon fibers was established by burning a minimum of 45 kg of carbon fiber composite aircraft structural components in each of five large scale, outdoor aviation jet fuel fire tests. This release was quantified by several independent assessments with various instruments developed specifically for these tests. The most likely values for the mass of single carbon fibers released ranged from 0.2 percent of the initial mass of carbon fiber for the source tests (zero wind velocity) to a maximum of 0.6 percent of the initial carbon fiber mass for dissemination tests (5 to 6 m/s wind velocity). Mean fiber lengths for fibers greater than 1 mm in length ranged from 2.5 to 3.5 mm. Mean diameters ranged from 3.6 to 5.3 micrometers which was indicative of significant oxidation. Footprints of downwind dissemination of the fire released fibers were measured to 19.1 km from the fire.
Large-scale clustering of cosmic voids
NASA Astrophysics Data System (ADS)
Chan, Kwan Chuen; Hamaus, Nico; Desjacques, Vincent
2014-11-01
We study the clustering of voids using N -body simulations and simple theoretical models. The excursion-set formalism describes fairly well the abundance of voids identified with the watershed algorithm, although the void formation threshold required is quite different from the spherical collapse value. The void cross bias bc is measured and its large-scale value is found to be consistent with the peak background split results. A simple fitting formula for bc is found. We model the void auto-power spectrum taking into account the void biasing and exclusion effect. A good fit to the simulation data is obtained for voids with radii ≳30 Mpc h-1 , especially when the void biasing model is extended to 1-loop order. However, the best-fit bias parameters do not agree well with the peak-background results. Being able to fit the void auto-power spectrum is particularly important not only because it is the direct observable in galaxy surveys, but also our method enables us to treat the bias parameters as nuisance parameters, which are sensitive to the techniques used to identify voids.
Curvature constraints from large scale structure
NASA Astrophysics Data System (ADS)
Di Dio, Enea; Montanari, Francesco; Raccanelli, Alvise; Durrer, Ruth; Kamionkowski, Marc; Lesgourgues, Julien
2016-06-01
We modified the CLASS code in order to include relativistic galaxy number counts in spatially curved geometries; we present the formalism and study the effect of relativistic corrections on spatial curvature. The new version of the code is now publicly available. Using a Fisher matrix analysis, we investigate how measurements of the spatial curvature parameter ΩK with future galaxy surveys are affected by relativistic effects, which influence observations of the large scale galaxy distribution. These effects include contributions from cosmic magnification, Doppler terms and terms involving the gravitational potential. As an application, we consider angle and redshift dependent power spectra, which are especially well suited for model independent cosmological constraints. We compute our results for a representative deep, wide and spectroscopic survey, and our results show the impact of relativistic corrections on spatial curvature parameter estimation. We show that constraints on the curvature parameter may be strongly biased if, in particular, cosmic magnification is not included in the analysis. Other relativistic effects turn out to be subdominant in the studied configuration. We analyze how the shift in the estimated best-fit value for the curvature and other cosmological parameters depends on the magnification bias parameter, and find that significant biases are to be expected if this term is not properly considered in the analysis.
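The Fisher matrix machinery behind this kind of parameter forecast can be illustrated on a toy two-parameter model: a straight line observed with Gaussian noise. The data points, error bar, and parameters below are invented for illustration; they are not the CLASS number-count spectra used in the paper.

```python
import numpy as np

# Toy Fisher forecast for the model y = A * x + B, observed at points
# x_i with independent Gaussian errors of size sigma.
x = np.linspace(0.0, 1.0, 20)
sigma = 0.1

# Derivatives of the model mean with respect to the parameters (A, B).
dmu = np.column_stack([x, np.ones_like(x)])   # shape (20, 2)

# Fisher matrix: F_ab = sum_i dmu_ia * dmu_ib / sigma^2
F = dmu.T @ dmu / sigma**2

# Forecast 1-sigma marginalized errors: sqrt of the diagonal of F^{-1}.
cov = np.linalg.inv(F)
errors = np.sqrt(np.diag(cov))
```

A bias from a neglected systematic (such as the cosmic magnification term discussed above) can be propagated through the same matrix, which is what makes the formalism convenient for survey design.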
Backscatter in Large-Scale Flows
NASA Astrophysics Data System (ADS)
Nadiga, Balu
2009-11-01
Downgradient mixing of potential vorticity and its variants is commonly employed to model the effects of unresolved geostrophic turbulence on resolved scales. This is motivated by the (inviscid and unforced) particle-wise conservation of potential vorticity and the mean forward or down-scale cascade of potential enstrophy in geostrophic turbulence. By examining the statistical distribution of the transfer of potential enstrophy from mean or filtered motions to eddy or sub-filter motions, we find that the mean forward cascade results from the forward-scatter being only slightly greater than the backscatter. Downgradient mixing ideas do not recognize such equitable mean-eddy or large scale-small scale interactions and consequently model only the mean effect of the forward cascade; the importance of capturing the effects of backscatter---the forcing of resolved scales by unresolved scales---is only beginning to be recognized. While recent attempts to model the effects of backscatter on resolved scales have taken a stochastic approach, our analysis suggests that these effects are amenable to being modeled deterministically.
Large scale molecular simulations of nanotoxicity.
Jimenez-Cruz, Camilo A; Kang, Seung-gu; Zhou, Ruhong
2014-01-01
The widespread use of nanomaterials in biomedical applications has been accompanied by an increasing interest in understanding their interactions with tissues, cells, and biomolecules, and in particular, on how they might affect the integrity of cell membranes and proteins. In this mini-review, we present a summary of some of the recent studies on this important subject, especially from the point of view of large scale molecular simulations. The carbon-based nanomaterials and noble metal nanoparticles are the main focus, with additional discussions on quantum dots and other nanoparticles as well. The driving forces for adsorption of fullerenes, carbon nanotubes, and graphene nanosheets onto proteins or cell membranes are found to be mainly hydrophobic interactions and the so-called π-π stacking (between aromatic rings), while for the noble metal nanoparticles the long-range electrostatic interactions play a bigger role. More interestingly, there is also growing evidence showing that nanotoxicity can have implications for the de novo design of nanomedicine. For example, the endohedral metallofullerenol Gd@C₈₂(OH)₂₂ is shown to inhibit tumor growth and metastasis by inhibiting the enzyme MMP-9, and graphene is shown to disrupt bacteria cell membranes by insertion/cutting as well as destructive extraction of lipid molecules. These recent findings have provided a better understanding of nanotoxicity at the molecular level and also suggested therapeutic potential by using the cytotoxicity of nanoparticles against cancer or bacteria cells.
Large-scale wind turbine structures
NASA Technical Reports Server (NTRS)
Spera, David A.
1988-01-01
The purpose of this presentation is to show how structural technology was applied in the design of modern wind turbines, which were recently brought to an advanced stage of development as sources of renewable power. Wind turbine structures present many difficult problems because they are relatively slender and flexible; subject to vibration and aeroelastic instabilities; acted upon by loads which are often nondeterministic; operated continuously with little maintenance in all weather; and dominated by life-cycle cost considerations. Progress in horizontal-axis wind turbine (HAWT) development was paced by progress in the understanding of structural loads, modeling of structural dynamic response, and designing of innovative structural response. During the past 15 years a series of large HAWTs was developed. This has culminated in the recent completion of the world's largest operating wind turbine, the 3.2 MW Mod-5B power plant installed on the island of Oahu, Hawaii. Some of the applications of structures technology to wind turbines will be illustrated by referring to the Mod-5B design. First, a video overview will be presented to provide familiarization with the Mod-5B project and the important components of the wind turbine system. Next, the structural requirements for large-scale wind turbines will be discussed, emphasizing the difficult fatigue-life requirements. Finally, the procedures used to design the structure will be presented, including the use of the fracture mechanics approach for determining allowable fatigue stresses.
NASA Astrophysics Data System (ADS)
Wang, Q.; Mu, M.; Dijkstra, H. A.
2012-04-01
We use the conditional nonlinear optimal perturbation (CNOP) approach to find the optimal precursor of the formation of the Kuroshio large meander (LM) path. Three non-large meander (NLM) states are utilized as reference states to calculate the CNOPs. The results demonstrate that the CNOPs can result in the formation of a significant LM path. Simultaneously, we calculate the first singular vector (FSV), which is the linear counterpart of the CNOP, and investigate its effects on the Kuroshio path. We found that the FSV with the same amplitude as the CNOP does not trigger a typical Kuroshio LM path. Hence, the CNOP is regarded as an optimal precursor of the formation of the LM path. Furthermore, we analyze the formation processes of the LM path and find that potential vorticity (PV) advection plays an important role in the formation process. The PV advection caused by the FSV perturbation is smaller than that caused by the CNOP perturbation, which explains why the CNOP is favored as a precursor over the FSV.
Parallel block schemes for large scale least squares computations
Golub, G.H.; Plemmons, R.J.; Sameh, A.
1986-04-01
Large scale least squares computations arise in a variety of scientific and engineering problems, including geodetic adjustments and surveys, medical image analysis, molecular structures, partial differential equations and substructuring methods in structural engineering. In each of these problems, matrices often arise which possess a block structure which reflects the local connection nature of the underlying physical problem. For example, such super-large nonlinear least squares computations arise in geodesy. Here the coordinates of positions are calculated by iteratively solving overdetermined systems of nonlinear equations by the Gauss-Newton method. The US National Geodetic Survey will complete this year (1986) the readjustment of the North American Datum, a problem which involves over 540 thousand unknowns and over 6.5 million observations (equations). The observation matrix for these least squares computations has a block angular form with 161 diagonal blocks, each containing 3 to 4 thousand unknowns. In this paper parallel schemes are suggested for the orthogonal factorization of matrices in block angular form and for the associated backsubstitution phase of the least squares computations. In addition, a parallel scheme for the calculation of certain elements of the covariance matrix for such problems is described. It is shown that these algorithms are ideally suited for multiprocessors with three levels of parallelism such as the Cedar system at the University of Illinois. 20 refs., 7 figs.
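The Gauss-Newton iteration described above, which repeatedly solves a linearized overdetermined system, can be sketched in a few lines. This is a minimal dense toy (a two-parameter exponential fit to invented data), not the block-angular geodetic solver discussed in the paper:

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, tol=1e-12, max_iter=50):
    """Minimize ||r(x)||^2 by repeatedly solving the linearized
    overdetermined system J dx = -r in the least-squares sense."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        J = jacobian(x)
        dx, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy data generated from y = 2 * exp(0.5 * t); the solver should
# recover the parameters (2, 0.5) from a nearby starting guess.
t = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * np.exp(0.5 * t)

def res(p):
    return p[0] * np.exp(p[1] * t) - y

def jac(p):
    e = np.exp(p[1] * t)
    return np.column_stack([e, p[0] * t * e])

p = gauss_newton(res, jac, [1.5, 0.4])
```

The block-angular structure exploited in the paper amounts to performing the orthogonal factorization in the `lstsq` step block by block, which is where the parallelism comes from.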
An informal paper on large-scale dynamic systems
NASA Technical Reports Server (NTRS)
Ho, Y. C.
1975-01-01
Large scale systems are defined as systems requiring more than one decision maker to control the system. Decentralized control and decomposition are discussed for large scale dynamic systems. Information and many-person decision problems are analyzed.
Maestro: An Orchestration Framework for Large-Scale WSN Simulations
Riliskis, Laurynas; Osipov, Evgeny
2014-01-01
Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation. PMID:24647123
Aircraft design for mission performance using nonlinear multiobjective optimization methods
NASA Technical Reports Server (NTRS)
Dovi, Augustine R.; Wrenn, Gregory A.
1990-01-01
A new technique which converts a constrained optimization problem to an unconstrained one where conflicting figures of merit may be simultaneously considered was combined with a complex mission analysis system. The method is compared with existing single and multiobjective optimization methods. A primary benefit from this new method for multiobjective optimization is the elimination of separate optimizations for each objective, which is required by some optimization methods. A typical wide body transport aircraft is used for the comparative studies.
International space station. Large scale integration approach
NASA Astrophysics Data System (ADS)
Cohen, Brad
The International Space Station is the most complex large scale integration program in development today. The approach developed for specification, subsystem development, and verification lays a firm basis on which future programs of this nature can be based. The International Space Station is composed of many critical items, hardware and software, built by numerous International Partners, NASA Institutions, and U.S. Contractors, and is launched over a period of five years. Each launch creates a unique configuration that must be safe, survivable, operable, and support ongoing assembly (assemblable) to arrive at the assembly complete configuration in 2003. The approach to integrating each of the modules into a viable spacecraft and continuing the assembly is a challenge in itself. Added to this challenge are the severe schedule constraints and lack of an "Iron Bird", which prevents assembly and checkout of each on-orbit configuration prior to launch. This paper will focus on the following areas: 1) Specification development process, explaining how the requirements and specifications were derived using a modular concept driven by launch vehicle capability. Each module is composed of components of subsystems versus completed subsystems. 2) Approach to stage (each stage consists of the launched module added to the current on-orbit spacecraft) specifications. Specifically, how each launched module and stage ensures support of the current and future elements of the assembly. 3) Verification approach, which, due to the schedule constraints, is primarily analysis supported by testing. Specifically, how the interfaces are ensured to mate and function on-orbit when they cannot be mated before launch. 4) Lessons learned. Where can we improve this complex system design and integration task?
Large Scale Flame Spread Environmental Characterization Testing
NASA Technical Reports Server (NTRS)
Clayman, Lauren K.; Olson, Sandra L.; Gokoghi, Suleyman A.; Brooker, John E.; Ferkul, Paul V.; Kacher, Henry F.
2013-01-01
Under the Advanced Exploration Systems (AES) Spacecraft Fire Safety Demonstration Project (SFSDP), as a risk mitigation activity in support of the development of a large-scale fire demonstration experiment in microgravity, flame-spread tests were conducted in normal gravity on thin, cellulose-based fuels in a sealed chamber. The primary objective of the tests was to measure pressure rise in the chamber as sample material, burning direction (upward/downward), total heat release, heat release rate, and heat loss mechanisms were varied between tests. A Design of Experiments (DOE) method was imposed to produce an array of tests from a fixed set of constraints, and a coupled response model was developed. Supplementary tests were run without experimental design to additionally vary select parameters such as initial chamber pressure. The starting chamber pressure for each test was set below atmospheric to prevent chamber overpressure. Bottom ignition, or upward propagating burns, produced rapid acceleratory turbulent flame spread. Pressure rise in the chamber increases as the amount of fuel burned increases, mainly because of the larger amount of heat generation and, to a much smaller extent, due to the increase in the number of moles of gas. Top ignition, or downward propagating burns, produced a steady flame spread with a very small flat flame across the burning edge. Steady-state pressure is achieved during downward flame spread as the pressure rises and plateaus. This indicates that the heat generation by the flame matches the heat loss to surroundings during the longer, slower downward burns. One heat loss mechanism included mounting a heat exchanger directly above the burning sample in the path of the plume to act as a heat sink and more efficiently dissipate the heat due to the combustion event. This proved an effective means for chamber overpressure mitigation for those tests producing the most total heat release and thus was determined to be a feasible mitigation
Li, Hancao; Haddad, Wassim M.
2012-01-01
We develop optimal respiratory airflow patterns using a nonlinear multicompartment model for a lung mechanics system. Specifically, we use classical calculus of variations minimization techniques to derive an optimal airflow pattern for inspiratory and expiratory breathing cycles. The physiological interpretation of the optimality criteria used involves the minimization of work of breathing and lung volume acceleration for the inspiratory phase, and the minimization of the elastic potential energy and rapid airflow rate changes for the expiratory phase. Finally, we numerically integrate the resulting nonlinear two-point boundary value problems to determine the optimal airflow patterns over the inspiratory and expiratory breathing cycles. PMID:22719793
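The final step described above, numerically integrating a nonlinear two-point boundary value problem, can be illustrated with a simple shooting method. The BVP below is a linear toy (u'' = u with u(0) = 0, u(1) = 1), not the lung-mechanics equations; it has the closed-form solution u = sinh(t)/sinh(1), so the recovered initial slope can be checked against 1/sinh(1):

```python
import numpy as np

def integrate(s, n=1000):
    """RK4 for the system (u, u')' = (u', u) on [0, 1] with u(0) = 0
    and unknown initial slope u'(0) = s; returns u(1)."""
    h = 1.0 / n
    y = np.array([0.0, s])
    f = lambda y: np.array([y[1], y[0]])
    for _ in range(n):
        k1 = f(y); k2 = f(y + h/2*k1); k3 = f(y + h/2*k2); k4 = f(y + h*k3)
        y = y + h/6*(k1 + 2*k2 + 2*k3 + k4)
    return y[0]

# Bisect on the unknown initial slope until the far boundary condition
# u(1) = 1 is met (u(1) grows monotonically with the slope here).
lo, hi = 0.0, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if integrate(mid) < 1.0:
        lo = mid
    else:
        hi = mid
slope = 0.5 * (lo + hi)
```

For the nonlinear problems in the paper, a general-purpose collocation solver would replace the bisection, but the structure (integrate, compare boundary residual, adjust) is the same.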
NASA Astrophysics Data System (ADS)
Lapert, M.; Tehini, R.; Turinici, G.; Sugny, D.
2008-08-01
We consider the optimal control of quantum systems interacting nonlinearly with an electromagnetic field. We propose monotonically convergent algorithms to solve the optimal equations. The monotonic behavior of the algorithm is ensured by a nonstandard choice of the cost, which is not quadratic in the field. These algorithms can be constructed for pure- and mixed-state quantum systems. The efficiency of the method is shown numerically for molecular orientation with a nonlinearity of order 3 in the field. Discretizing the amplitude and the phase of the Fourier transform of the optimal field, we show that the optimal solution can be well approximated by pulses that could be implemented experimentally.
Synchronization of coupled large-scale Boolean networks
Li, Fangfei
2014-03-15
This paper investigates the complete synchronization and partial synchronization of two large-scale Boolean networks. First, the aggregation algorithm towards large-scale Boolean network is reviewed. Second, the aggregation algorithm is applied to study the complete synchronization and partial synchronization of large-scale Boolean networks. Finally, an illustrative example is presented to show the efficiency of the proposed results.
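Complete synchronization of a drive-response pair of Boolean networks can be checked by exhaustive simulation when the networks are small. The two-node example below is hypothetical (not from the paper); the response is fully coupled to the drive, so every initial pair of states synchronizes after a short transient:

```python
from itertools import product

def f(x):
    # Drive network update x(t+1) = f(x(t)) for a toy 2-node network.
    x1, x2 = x
    return (x2, x1 and x2)

def g(y, x):
    # Response update y(t+1) = g(y(t), x(t)): fully coupled, it copies
    # the drive's update applied to the drive's current state.
    x1, x2 = x
    return (x2, x1 and x2)

def synchronizes(steps=10):
    # Complete synchronization: y(t) = x(t) for all t beyond a
    # transient, from every pair of initial conditions.
    for x0 in product([False, True], repeat=2):
        for y0 in product([False, True], repeat=2):
            x, y = x0, y0
            for _ in range(steps):
                x, y = f(x), g(y, x)  # g sees the drive state at time t
            if x != y:
                return False
    return True
```

The aggregation algorithm reviewed in the paper exists precisely because this brute-force check over all 2^n state pairs is infeasible for large n.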
Sheltering in buildings from large-scale outdoor releases
Chan, W.R.; Price, P.N.; Gadgil, A.J.
2004-06-01
Intentional or accidental large-scale airborne toxic release (e.g. terrorist attacks or industrial accidents) can cause severe harm to nearby communities. Under these circumstances, taking shelter in buildings can be an effective emergency response strategy. Some examples where shelter-in-place was successful at preventing injuries and casualties have been documented [1, 2]. As public education and preparedness are vital to ensure the success of an emergency response, many agencies have prepared documents advising the public on what to do during and after sheltering [3, 4, 5]. In this document, we will focus on the role buildings play in providing protection to occupants. The conclusions to this article are: (1) Under most circumstances, shelter-in-place is an effective response against large-scale outdoor releases. This is particularly true for release of short duration (a few hours or less) and chemicals that exhibit non-linear dose-response characteristics. (2) The building envelope not only restricts the outdoor-indoor air exchange, but can also filter some biological or even chemical agents. Once indoors, the toxic materials can deposit or sorb onto indoor surfaces. All these processes contribute to the effectiveness of shelter-in-place. (3) Tightening of building envelope and improved filtration can enhance the protection offered by buildings. Common mechanical ventilation system present in most commercial buildings, however, should be turned off and dampers closed when sheltering from an outdoor release. (4) After the passing of the outdoor plume, some residuals will remain indoors. It is therefore important to terminate shelter-in-place to minimize exposure to the toxic materials.
Large-scale quantum photonic circuits in silicon
NASA Astrophysics Data System (ADS)
Harris, Nicholas C.; Bunandar, Darius; Pant, Mihir; Steinbrecher, Greg R.; Mower, Jacob; Prabhu, Mihika; Baehr-Jones, Tom; Hochberg, Michael; Englund, Dirk
2016-08-01
Quantum information science offers inherently more powerful methods for communication, computation, and precision measurement that take advantage of quantum superposition and entanglement. In recent years, theoretical and experimental advances in quantum computing and simulation with photons have spurred great interest in developing large photonic entangled states that challenge today's classical computers. As experiments have increased in complexity, there has been an increasing need to transition bulk optics experiments to integrated photonics platforms to control more spatial modes with higher fidelity and phase stability. The silicon-on-insulator (SOI) nanophotonics platform offers new possibilities for quantum optics, including the integration of bright, nonclassical light sources, based on the large third-order nonlinearity (χ(3)) of silicon, alongside quantum state manipulation circuits with thousands of optical elements, all on a single phase-stable chip. How large do these photonic systems need to be? Recent theoretical work on Boson Sampling suggests that even the problem of sampling from ∼30 identical photons, having passed through an interferometer of hundreds of modes, becomes challenging for classical computers. While experiments of this size are still challenging, the SOI platform has the required component density to enable low-loss and programmable interferometers for manipulating hundreds of spatial modes. Here, we discuss the SOI nanophotonics platform for quantum photonic circuits with hundreds-to-thousands of optical elements and the associated challenges. We compare SOI to competing technologies in terms of requirements for quantum optical systems. We review recent results on large-scale quantum state evolution circuits and strategies for realizing high-fidelity heralded gates with imperfect, practical systems. Next, we review recent results on silicon photonics-based photon-pair sources and device architectures, and we discuss a path towards
Modelling large-scale halo bias using the bispectrum
NASA Astrophysics Data System (ADS)
Pollack, Jennifer E.; Smith, Robert E.; Porciani, Cristiano
2012-03-01
We study the relation between the density distribution of tracers for large-scale structure and the underlying matter distribution - commonly termed bias - in the Λ cold dark matter framework. In particular, we examine the validity of the local model of biasing at quadratic order in the matter density. This model is characterized by parameters b1 and b2. Using an ensemble of N-body simulations, we apply several statistical methods to estimate the parameters. We measure halo and matter fluctuations smoothed on various scales. We find that, whilst the fits are reasonably good, the parameters vary with smoothing scale. We argue that, for real-space measurements, owing to the mixing of wavemodes, no smoothing scale can be found for which the parameters are independent of smoothing. However, this is not the case in Fourier space. We measure halo and halo-mass power spectra and from these construct estimates of the effective large-scale bias as a guide for b1. We measure the configuration dependence of the halo bispectra Bhhh and reduced bispectra Qhhh for very large-scale k-space triangles. From these data, we constrain b1 and b2, taking into account the full bispectrum covariance matrix. Using the lowest order perturbation theory, we find that for Bhhh the best-fitting parameters are in reasonable agreement with one another as the triangle scale is varied, although the fits become poor as smaller scales are included. The same is true for Qhhh. The best-fitting values were found to depend on the discreteness correction. This led us to consider halo-mass cross-bispectra. The results from these statistics supported our earlier findings. We then developed a test to explore whether the inconsistency in the recovered bias parameters could be attributed to missing higher order corrections in the models. We prove that low-order expansions are not sufficiently accurate to model the data, even on scales k1 ∼ 0.04 h Mpc-1. If robust inferences concerning bias are to be drawn
Large-Scale Spacecraft Fire Safety Tests
NASA Technical Reports Server (NTRS)
Urban, David; Ruff, Gary A.; Ferkul, Paul V.; Olson, Sandra; Fernandez-Pello, A. Carlos; T'ien, James S.; Torero, Jose L.; Cowlard, Adam J.; Rouvreau, Sebastien; Minster, Olivier; Toth, Balazs; Legros, Guillaume; Eigenbrod, Christian; Smirnov, Nickolay; Fujita, Osamu; Jomaas, Grunde
2014-01-01
An international collaborative program is underway to address open issues in spacecraft fire safety. Because of limited access to long-term low-gravity conditions and the small volume generally allotted for these experiments, there have been relatively few experiments that directly study spacecraft fire safety under low-gravity conditions. Furthermore, none of these experiments have studied sample sizes and environment conditions typical of those expected in a spacecraft fire. The major constraint has been the size of the sample, with prior experiments limited to samples of the order of 10 cm in length and width or smaller. This lack of experimental data forces spacecraft designers to base their designs and safety precautions on 1-g understanding of flame spread, fire detection, and suppression. However, low-gravity combustion research has demonstrated substantial differences in flame behavior in low-gravity. This, combined with the differences caused by the confined spacecraft environment, necessitates practical scale spacecraft fire safety research to mitigate risks for future space missions. To address this issue, a large-scale spacecraft fire experiment is under development by NASA and an international team of investigators. This poster presents the objectives, status, and concept of this collaborative international project (Saffire). The project plan is to conduct fire safety experiments on three sequential flights of an unmanned ISS re-supply spacecraft (the Orbital Cygnus vehicle) after they have completed their delivery of cargo to the ISS and have begun their return journeys to earth. On two flights (Saffire-1 and Saffire-3), the experiment will consist of a flame spread test involving a meter-scale sample ignited in the pressurized volume of the spacecraft and allowed to burn to completion while measurements are made. On one of the flights (Saffire-2), 9 smaller (5 x 30 cm) samples will be tested to evaluate NASA's material flammability screening tests
Atypical Behavior Identification in Large Scale Network Traffic
Best, Daniel M.; Hafen, Ryan P.; Olsen, Bryan K.; Pike, William A.
2011-10-23
Cyber analysts are faced with the daunting challenge of identifying exploits and threats within potentially billions of daily records of network traffic. Enterprise-wide cyber traffic involves hundreds of millions of distinct IP addresses and results in data sets ranging from terabytes to petabytes of raw data. Creating behavioral models and identifying trends based on those models requires data intensive architectures and techniques that can scale as data volume increases. Analysts need scalable visualization methods that foster interactive exploration of data and enable identification of behavioral anomalies. Developers must carefully consider application design, storage, processing, and display to provide usability and interactivity with large-scale data. We present an application that highlights atypical behavior in enterprise network flow records. This is accomplished by utilizing data intensive architectures to store the data, aggregation techniques to optimize data access, statistical techniques to characterize behavior, and a visual analytic environment to render the behavioral trends, highlight atypical activity, and allow for exploration.
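As a minimal illustration of the per-entity behavioral characterization described above, the sketch below flags hosts whose current flow counts deviate sharply from their own history; the z-score model, threshold, and data are illustrative assumptions, not the application's actual statistics.

```python
import numpy as np

def atypical(history, today, z_thresh=3.0):
    """Flag entities whose current value deviates from their own
    historical mean by more than z_thresh standard deviations."""
    mu = history.mean(axis=1)
    sigma = history.std(axis=1) + 1e-9   # avoid division by zero
    z = (today - mu) / sigma
    return np.flatnonzero(np.abs(z) > z_thresh)

rng = np.random.default_rng(1)
hist = rng.normal(100, 10, size=(5, 30))   # 5 hosts, 30 days of flow counts
today = hist.mean(axis=1).copy()           # a perfectly typical day...
today[2] += 200                            # ...except host 2 suddenly spikes
print(atypical(hist, today))               # host 2 is flagged
```

A production system would replace the single z-score with per-host, per-time-of-day behavioral models over aggregated flow records, but the flag-against-own-baseline structure is the same.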
Large-scale structure of time evolving citation networks
NASA Astrophysics Data System (ADS)
Leicht, E. A.; Clarkson, G.; Shedden, K.; Newman, M. E. J.
2007-09-01
In this paper we examine a number of methods for probing and understanding the large-scale structure of networks that evolve over time. We focus in particular on citation networks, networks of references between documents such as papers, patents, or court cases. We describe three different methods of analysis, one based on an expectation-maximization algorithm, one based on modularity optimization, and one based on eigenvector centrality. Using the network of citations between opinions of the United States Supreme Court as an example, we demonstrate how each of these methods can reveal significant structural divisions in the network and how, ultimately, the combination of all three can help us develop a coherent overall picture of the network's shape.
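Of the three methods, eigenvector centrality admits the shortest sketch. The power-iteration version below runs on a toy acyclic citation graph and is illustrative only; the small uniform term added to keep the iteration from collapsing on a nilpotent adjacency matrix is our assumption, not part of the authors' analysis.

```python
import numpy as np

def eigenvector_centrality(M, iters=200, tol=1e-12):
    """Power iteration: repeatedly apply M and renormalize until the
    dominant eigenvector (the centrality scores) stops changing."""
    x = np.ones(M.shape[0]) / M.shape[0]
    for _ in range(iters):
        x_new = M @ x
        x_new /= np.linalg.norm(x_new)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Toy citation graph: A[i, j] = 1 if paper i cites paper j (paper 0 is oldest).
A = np.array([[0, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 1, 0, 0],
              [1, 0, 1, 0]], dtype=float)
# Incoming citations confer importance, so iterate on A.T; the tiny uniform
# term keeps the iteration alive on an acyclic (nilpotent) citation graph.
scores = eigenvector_centrality(A.T + 1e-6)
print(scores.argmax())  # paper 0, the most-cited, ranks highest
```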
Large-scale asymmetric synthesis of a cathepsin S inhibitor.
Lorenz, Jon C; Busacca, Carl A; Feng, XuWu; Grinberg, Nelu; Haddad, Nizar; Johnson, Joe; Kapadia, Suresh; Lee, Heewon; Saha, Anjan; Sarvestani, Max; Spinelli, Earl M; Varsolona, Rich; Wei, Xudong; Zeng, Xingzhong; Senanayake, Chris H
2010-02-19
A potent reversible inhibitor of the cysteine protease cathepsin-S was prepared on large scale using a convergent synthetic route, free of chromatography and cryogenics. Late-stage peptide coupling of a chiral urea acid fragment with a functionalized aminonitrile was employed to prepare the target, using 2-hydroxypyridine as a robust, nonexplosive replacement for HOBT. The two key intermediates were prepared using a modified Strecker reaction for the aminonitrile and a phosphonation-olefination-rhodium-catalyzed asymmetric hydrogenation sequence for the urea. A palladium-catalyzed vinyl transfer coupled with a Claisen reaction was used to produce the aldehyde required for the side chain. Key scale up issues, safety calorimetry, and optimization of all steps for multikilogram production are discussed. PMID:20102230
Zhong, Xiangnan; He, Haibo; Zhang, Huaguang; Wang, Zhanshan
2014-12-01
In this paper, we develop and analyze an optimal control method for a class of discrete-time nonlinear Markov jump systems (MJSs) with unknown system dynamics. Specifically, an identifier is established for the unknown systems to approximate system states, and an optimal control approach for nonlinear MJSs is developed to solve the Hamilton-Jacobi-Bellman equation based on the adaptive dynamic programming technique. We also develop detailed stability analysis of the control approach, including the convergence of the performance index function for nonlinear MJSs and the existence of the corresponding admissible control. Neural network techniques are used to approximate the proposed performance index function and the control law. To demonstrate the effectiveness of our approach, three simulation studies, one linear case, one nonlinear case, and one single link robot arm case, are used to validate the performance of the proposed optimal control method.
Large-scale sparse singular value computations
NASA Technical Reports Server (NTRS)
Berry, Michael W.
1992-01-01
Four numerical methods for computing the singular value decomposition (SVD) of large sparse matrices on a multiprocessor architecture are presented. Lanczos and subspace iteration-based methods for determining several of the largest singular triplets (singular values and corresponding left and right-singular vectors) for sparse matrices arising from two practical applications: information retrieval and seismic reflection tomography are emphasized. The target architectures for implementations are the CRAY-2S/4-128 and Alliant FX/80. The sparse SVD problem is well motivated by recent information-retrieval techniques in which dominant singular values and their corresponding singular vectors of large sparse term-document matrices are desired, and by nonlinear inverse problems from seismic tomography applications which require approximate pseudo-inverses of large sparse Jacobian matrices.
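In modern terms, computing a few dominant singular triplets of a large sparse matrix without ever densifying it can be sketched with SciPy's iterative svds solver; the random matrix below is purely an illustrative stand-in for a term-document or Jacobian matrix.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

# Sparse 1000 x 500 "term-document"-like matrix (random, illustrative only).
A = sparse_random(1000, 500, density=0.01, format="csr", random_state=0)

# Iterative solver for the k largest singular triplets; the sparse matrix is
# only ever touched through matrix-vector products.
U, s, Vt = svds(A, k=6)
order = np.argsort(s)[::-1]            # svds returns singular values ascending
U, s, Vt = U[:, order], s[order], Vt[order, :]

# Sanity check against the dense SVD, feasible only at this toy size.
s_dense = np.linalg.svd(A.toarray(), compute_uv=False)[:6]
print(np.allclose(s, s_dense, atol=1e-6))
```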
Model building, control and optimization of large scale systems
Basar, T.
1993-02-22
This report covers the research progress made during the calendar year 1992. The new results obtained during this period are described, keyed to the references listed on the last two pages of this report.
Optimizing Large Scale Carbon Fluxes for North America
NASA Astrophysics Data System (ADS)
Schuh, A. E.; Denning, A. S.; Corbin, K. D.; Ulliasz, M.; Parazoo, N. C.
2008-12-01
We combine the SiB3 biosphere model with the RAMS mesoscale meteorology model and associated Lagrangian particle dispersion model (LPDM) and use CO2 observations from a 8-tower network in 2004 to correct a priori ecosystem respiration (ER) and gross primary productivity (GPP) fluxes for a domain consisting of most of North America. Results are presented as weekly corrections to ER and GPP for 2004. A sink is recovered from the inversion but is smaller than expected due to the limited constraint imposed by the sampling footprint of the 8-tower observing network. The sensitivities of the inversion to independently derived boundary conditions, different fossil fuel sources, and various parameters in the inversion are analyzed and discussed.
Optimization of Large Scale HEP Data Analysis in LHCb
NASA Astrophysics Data System (ADS)
Remenska, Daniela; Aaij, Roel; Raven, Gerhard; Merk, Marcel; Templon, Jeff; Bril, Reinder J.; LHCb Collaboration
2011-12-01
Observation has led to the conclusion that the physics analysis jobs run by LHCb physicists on a local computing farm (i.e. non-grid) require more efficient access to the data, which resides on the Grid. Our experiments have shown that the I/O-bound nature of the analysis jobs, in combination with the latency of the remote access protocols (e.g. rfio, dcap), causes low CPU efficiency for these jobs. In addition to causing low CPU efficiency, the remote access protocols give rise to high overhead in terms of the amount of data transferred. This paper gives an overview of the concept of pre-fetching and caching of input files in the proximity of the processing resources, which is exploited to cope with the I/O-bound analysis jobs. The files are copied from Grid storage elements (using GridFTP) while computations are performed concurrently, inspired by a similar idea used in the ATLAS experiment. The results illustrate that this file staging approach is relatively insensitive to the original location of the data, and that a significant improvement can be achieved in terms of the CPU efficiency of an analysis job. The scalability of such a solution in the Grid environment is discussed briefly.
Geospatial optimization of siting large-scale solar projects
Macknick, Jordan; Quinby, Ted; Caulfield, Emmet; Gerritsen, Margot; Diffendorfer, James E.; Haines, Seth S.
2014-01-01
guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.
Computing the universe: how large-scale simulations illuminate galaxies and dark energy
NASA Astrophysics Data System (ADS)
O'Shea, Brian
2015-04-01
High-performance and large-scale computing is absolutely essential to understanding astronomical objects such as stars, galaxies, and the cosmic web. This is because these structures operate on physical, temporal, and energy scales that cannot be reasonably approximated in the laboratory, and their complexity and nonlinearity often defy analytic modeling. In this talk, I show how the growth of computing platforms over time has facilitated our understanding of astrophysical and cosmological phenomena, focusing primarily on galaxies and large-scale structure in the Universe.
Large-scale assembly of colloidal particles
NASA Astrophysics Data System (ADS)
Yang, Hongta
This study reports a simple, roll-to-roll compatible coating technology for producing three-dimensional highly ordered colloidal crystal-polymer composites, colloidal crystals, and macroporous polymer membranes. A vertically beveled doctor blade is utilized to shear-align silica microsphere-monomer suspensions to form large-area composites in a single step. The polymer matrix and the silica microspheres can be selectively removed to create colloidal crystals and self-standing macroporous polymer membranes. The thickness of the shear-aligned crystal is correlated with the viscosity of the colloidal suspension and the coating speed, and the correlations can be qualitatively explained by adapting the mechanisms developed for conventional doctor blade coating. Five important research topics related to the application of large-scale three-dimensional highly ordered macroporous films made by doctor blade coating are covered in this study. The first topic describes an invention in large-area, low-cost color reflective displays. This invention is inspired by heat pipe technology. The self-standing macroporous polymer films exhibit brilliant colors which originate from the Bragg diffraction of visible light from the three-dimensional highly ordered air cavities. The colors can be easily changed by tuning the size of the air cavities to cover the whole visible spectrum. When the air cavities are filled with a solvent which has the same refractive index as that of the polymer, the macroporous polymer films become completely transparent due to the index matching. When the solvent trapped in the cavities is evaporated by in-situ heating, the sample color changes back to the brilliant color. This process is highly reversible and reproducible for thousands of cycles. The second topic reports the achievement of rapid and reversible vapor detection by using 3-D macroporous photonic crystals. Capillary condensation of a condensable vapor in the interconnected macropores leads to the
Population generation for large-scale simulation
NASA Astrophysics Data System (ADS)
Hannon, Andrew C.; King, Gary; Morrison, Clayton; Galstyan, Aram; Cohen, Paul
2005-05-01
Computer simulation is used to research phenomena ranging from the structure of the space-time continuum to population genetics and future combat [1-3]. Multi-agent simulations in particular are now commonplace in many fields [4,5]. By modeling populations whose complex behavior emerges from individual interactions, these simulations help to answer questions about effects where closed-form solutions are difficult or impossible to derive [6]. To be useful, simulations must accurately model the relevant aspects of the underlying domain. In multi-agent simulation, this means that the modeling must include both the agents and their relationships. Typically, each agent can be modeled as a set of attributes drawn from various distributions (e.g., height, morale, intelligence and so forth). Though these can interact - for example, agent height is related to agent weight - they are usually independent. Modeling relations between agents, on the other hand, adds a new layer of complexity, and tools from graph theory and social network analysis are finding increasing application [7,8]. Recognizing the role and proper use of these techniques, however, remains the subject of ongoing research. We recently encountered these complexities while building large scale social simulations [9-11]. One of these, the Hats Simulator, is designed to be a lightweight proxy for intelligence analysis problems. Hats models a "society in a box" consisting of many simple agents, called hats. Hats gets its name from the classic spaghetti western, in which the heroes and villains are known by the color of the hats they wear. The Hats society also has its heroes and villains, but the challenge is to identify which color hat they should be wearing based on how they behave. There are three types of hats: benign hats, known terrorists, and covert terrorists. Covert terrorists look just like benign hats but act like terrorists. Population structure can make covert hat identification significantly more
Solving mixed integer nonlinear programming problems using spiral dynamics optimization algorithm
NASA Astrophysics Data System (ADS)
Kania, Adhe; Sidarto, Kuntjoro Adji
2016-02-01
Many engineering and practical problems can be modeled as mixed integer nonlinear programs. This paper proposes to solve such problems with a modified version of the spiral dynamics inspired optimization method of Tamura and Yasuda. Four test cases have been examined, including problems in engineering and sport. The method succeeds in obtaining the optimal result in all test cases.
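As a rough sketch of the underlying spiral dynamics idea (continuous variables only; the paper's integer handling and specific modifications are omitted), a population of search points can be rotated and contracted around the current best candidate. The test function and all parameter values below are illustrative assumptions.

```python
import numpy as np

def spiral_optimize(f, lo, hi, n_points=50, iters=300, r=0.95, theta=np.pi / 4):
    """2-D spiral dynamics sketch: every search point rotates by theta and
    contracts by factor r around the current best point each iteration."""
    rng = np.random.default_rng(0)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    X = rng.uniform(lo, hi, size=(n_points, 2))
    c, s = np.cos(theta), np.sin(theta)
    R = r * np.array([[c, -s], [s, c]])        # rotation + contraction matrix
    best = min(X, key=f)
    for _ in range(iters):
        X = np.clip(best + (X - best) @ R.T, lo, hi)  # spiral about the best
        cand = min(X, key=f)
        if f(cand) < f(best):
            best = cand
    return best

# Smooth 2-D bowl with its minimum at (1.5, 1.5), purely for illustration.
sphere = lambda x: float(np.sum((x - 1.5) ** 2))
xbest = spiral_optimize(sphere, [-5, -5], [5, 5])
print(xbest)  # close to [1.5, 1.5]
```

The geometric contraction is what gives the method its exploration-to-exploitation schedule: early iterations sweep wide arcs, later ones refine around the incumbent.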
Lossless Convexification of Control Constraints for a Class of Nonlinear Optimal Control Problems
NASA Technical Reports Server (NTRS)
Blackmore, Lars; Acikmese, Behcet; Carson, John M.,III
2012-01-01
In this paper we consider a class of optimal control problems that have continuous-time nonlinear dynamics and nonconvex control constraints. We propose a convex relaxation of the nonconvex control constraints, and prove that the optimal solution to the relaxed problem is the globally optimal solution to the original problem with nonconvex control constraints. This lossless convexification enables a computationally simpler problem to be solved instead of the original problem. We demonstrate the approach in simulation with a planetary soft landing problem involving a nonlinear gravity field.
A Large Scale Virtual Gas Sensor Array
NASA Astrophysics Data System (ADS)
Ziyatdinov, Andrey; Fernández-Diaz, Eduard; Chaudry, A.; Marco, Santiago; Persaud, Krishna; Perera, Alexandre
2011-09-01
This paper presents a virtual sensor array that allows the user to generate synthetic gas sensor data while controlling a wide variety of the characteristics of the sensor array response: an arbitrary number of sensors, support for multi-component gas mixtures, and full control of noise in the system such as sensor drift or sensor aging. The artificial sensor array response is inspired by the response of 17 polymeric sensors to three analytes over 7 months. The main trends in the synthetic gas sensor array, such as sensitivity, diversity, drift and sensor noise, are user controlled. Sensor sensitivity is modeled by an optionally linear or nonlinear (spline based) method. The toolbox for data generation is implemented in the open source R language for statistical computing and can be freely accessed as an educational resource or benchmarking reference. The software package permits the design of scenarios with a very large number of sensors (over 10000 sensels), which are employed in the test and benchmarking of neuromorphic models in the Bio-ICT European project NEUROCHEM.
Precision Measurement of Large Scale Structure
NASA Technical Reports Server (NTRS)
Hamilton, A. J. S.
2001-01-01
The purpose of this grant was to develop and to start to apply new precision methods for measuring the power spectrum and redshift distortions from the anticipated new generation of large redshift surveys. A highlight of work completed during the award period was the application of the new methods developed by the PI to measure the real space power spectrum and redshift distortions of the IRAS PSCz survey, published in January 2000. New features of the measurement include: (1) measurement of power over an unprecedentedly broad range of scales, 4.5 decades in wavenumber, from 0.01 to 300 h/Mpc; (2) at linear scales, not one but three power spectra are measured, the galaxy-galaxy, galaxy-velocity, and velocity-velocity power spectra; (3) at linear scales each of the three power spectra is decorrelated within itself, and disentangled from the other two power spectra (the situation is analogous to disentangling scalar and tensor modes in the Cosmic Microwave Background); and (4) at nonlinear scales the measurement extracts not only the real space power spectrum, but also the full line-of-sight pairwise velocity distribution in redshift space.
Multitree Algorithms for Large-Scale Astrostatistics
NASA Astrophysics Data System (ADS)
March, William B.; Ozakin, Arkadas; Lee, Dongryeol; Riegel, Ryan; Gray, Alexander G.
2012-03-01
this number every week, resulting in billions of objects. At such scales, even linear-time analysis operations present challenges, particularly since statistical analyses are inherently interactive processes, requiring that computations complete within some reasonable human attention span. The quadratic (or worse) runtimes of straightforward implementations quickly become unbearable. Examples of applications. These analysis subroutines occur ubiquitously in astrostatistical work. We list just a few examples. The need to cross-match objects across different catalogs has led to various algorithms, which at some point perform an AllNN computation. 2-point and higher-order spatial correlations form the basis of spatial statistics, and are utilized in astronomy to compare the spatial structures of two datasets, such as an observed sample and a theoretical sample, forming the basis for two-sample hypothesis testing. Friends-of-friends clustering is often used to identify halos in data from astrophysical simulations. Minimum spanning tree properties have also been proposed as statistics of large-scale structure. Comparison of the distributions of different kinds of objects requires accurate density estimation, for which KDE is the overall statistical method of choice. The prediction of redshifts from optical data requires accurate regression, for which kernel regression is a powerful method. The identification of objects of various types in astronomy, such as stars versus galaxies, requires accurate classification, for which KDA is a powerful method. Overview. In this chapter, we will briefly sketch the main ideas behind recent fast algorithms which achieve, for example, linear runtimes for pairwise-distance problems, or similarly dramatic reductions in computational growth. In some cases, the runtime orders for these algorithms are mathematically provable statements, while in others we have only conjectures backed by experimental observations for the time being.
Finite dimensional approximation of a class of constrained nonlinear optimal control problems
NASA Technical Reports Server (NTRS)
Gunzburger, Max D.; Hou, L. S.
1994-01-01
An abstract framework for the analysis and approximation of a class of nonlinear optimal control and optimization problems is constructed. Nonlinearities occur in both the objective functional and in the constraints. The framework includes an abstract nonlinear optimization problem posed on infinite dimensional spaces, and approximate problem posed on finite dimensional spaces, together with a number of hypotheses concerning the two problems. The framework is used to show that optimal solutions exist, to show that Lagrange multipliers may be used to enforce the constraints, to derive an optimality system from which optimal states and controls may be deduced, and to derive existence results and error estimates for solutions of the approximate problem. The abstract framework and the results derived from that framework are then applied to three concrete control or optimization problems and their approximation by finite element methods. The first involves the von Karman plate equations of nonlinear elasticity, the second, the Ginzburg-Landau equations of superconductivity, and the third, the Navier-Stokes equations for incompressible, viscous flows.
Large scale electromechanical transistor with application in mass sensing
Jin, Leisheng; Li, Lijie
2014-12-07
Nanomechanical transistor (NMT) has evolved from the single electron transistor, a device that operates by shuttling electrons with a self-excited central conductor. The unfavoured aspects of the NMT are the complexity of the fabrication process and its signal processing unit, which could potentially be overcome by designing much larger devices. This paper reports a new design of large scale electromechanical transistor (LSEMT), still taking advantage of the principle of shuttling electrons. However, because of the large size, nonlinear electrostatic forces induced by the transistor itself are not sufficient to drive the mechanical member into vibration—an external force has to be used. In this paper, a LSEMT device is modelled, and its new application in mass sensing is postulated using two coupled mechanical cantilevers, with one of them being embedded in the transistor. The sensor is capable of detecting added mass using the eigenstate shifts method by reading the change of electrical current from the transistor, which has much higher sensitivity than conventional eigenfrequency shift approach used in classical cantilever based mass sensors. Numerical simulations are conducted to investigate the performance of the mass sensor.
Bias in the effective field theory of large scale structures
Senatore, Leonardo
2015-11-01
We study how to describe collapsed objects, such as galaxies, in the context of the Effective Field Theory of Large Scale Structures. The overdensity of galaxies at a given location and time is determined by the initial tidal tensor, velocity gradients and spatial derivatives of the regions of dark matter that, during the evolution of the universe, ended up at that given location. Similarly to what was recently done for dark matter, we show how this Lagrangian space description can be recovered by upgrading simpler Eulerian calculations. We describe the Eulerian theory. We show that it is perturbatively local in space, but non-local in time, and we explain the observational consequences of this fact. We give an argument for why to a certain degree of accuracy the theory can be considered as quasi time-local and explain what the operator structure is in this case. We describe renormalization of the bias coefficients so that, after this and after upgrading the Eulerian calculation to a Lagrangian one, the perturbative series for galaxies correlation functions results in a manifestly convergent expansion in powers of k/k_NL and k/k_M, where k is the wavenumber of interest, k_NL is the wavenumber associated to the non-linear scale, and k_M is the comoving wavenumber enclosing the mass of a galaxy.
Large-scale recording of astrocyte activity
Nimmerjahn, Axel; Bergles, Dwight E.
2015-01-01
Astrocytes are highly ramified glial cells found throughout the central nervous system (CNS). They express a variety of neurotransmitter receptors that can induce widespread chemical excitation, placing these cells in an optimal position to exert global effects on brain physiology. However, the activity patterns of only a small fraction of astrocytes have been examined and techniques to manipulate their behavior are limited. As a result, little is known about how astrocytes modulate CNS function on synaptic, microcircuit, or systems levels. Here, we review current and emerging approaches for visualizing and manipulating astrocyte activity in vivo. Deciphering how astrocyte network activity is controlled in different physiological and pathological contexts is critical for defining their roles in the healthy and diseased CNS. PMID:25665733
Probes of large-scale structure in the universe
NASA Technical Reports Server (NTRS)
Suto, Yasushi; Gorski, Krzysztof; Juszkiewicz, Roman; Silk, Joseph
1988-01-01
A general formalism is developed which shows that the gravitational instability theory for the origin of the large-scale structure of the universe is now capable of critically confronting observational results on cosmic background radiation angular anisotropies, large-scale bulk motions, and large-scale clumpiness in the galaxy counts. The results indicate that presently advocated cosmological models will have considerable difficulty in simultaneously explaining the observational results.
NASA Astrophysics Data System (ADS)
Okou, Francis A.; Akhrif, Ouassima; Dessaint, Louis A.; Bouchard, Derrick
2013-05-01
This paper introduces a decentralized multivariable robust adaptive voltage and frequency regulator to ensure the stability of large-scale interconnected generators. Interconnection parameters (i.e. load, line and transformer parameters) are assumed to be unknown. The proposed design approach requires the reformulation of conventional power system models into a multivariable model with generator terminal voltages as state variables, and excitation and turbine valve inputs as control signals. This model, while suitable for the application of modern control methods, introduces problems with regard to current design techniques for large-scale systems. Interconnection terms, which are treated as perturbations, do not meet the common matching condition assumption. A new adaptive method for a certain class of large-scale systems is therefore introduced that does not require the matching condition. The proposed controller consists of nonlinear inputs that cancel some nonlinearities of the model. Auxiliary controls with linear and nonlinear components are used to stabilize the system. They compensate for unknown parameters of the model by updating both the nonlinear component gains and excitation parameters. The adaptation algorithms involve the sigma-modification approach for auxiliary control gains, and the projection approach for excitation parameters to prevent estimation drift. The computation of the matrix gain of the controller's linear component requires the resolution of an algebraic Riccati equation and helps to solve the perturbation-mismatching problem. A realistic power system is used to assess the proposed controller's performance. The results show that both stability and transient performance are considerably improved following a severe contingency.
Optimized split-step method for modeling nonlinear pulse propagation in fiber Bragg gratings
Toroker, Zeev; Horowitz, Moshe
2008-03-15
We present an optimized split-step method for solving nonlinear coupled-mode equations that model wave propagation in nonlinear fiber Bragg gratings. By separately controlling the spatial and the temporal step size of the solution, we could significantly decrease the run time duration without significantly affecting the result accuracy. The accuracy of the method and the dependence of the error on the algorithm parameters are studied in several examples. Physical considerations are given to determine the required resolution.
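The abstract's solver targets nonlinear coupled-mode grating equations; as a hedged stand-in, the same symmetrized split-step idea applied to the scalar nonlinear Schrödinger equation (a simpler but closely related propagation model) looks like this. All parameter values are illustrative.

```python
import numpy as np

def split_step_nlse(u0, dt, dz, n_steps, beta2=-1.0, gamma=1.0):
    """Symmetrized split-step Fourier method for the scalar NLSE
    du/dz = -i(beta2/2) u_tt + i gamma |u|^2 u: half a dispersion step in
    the frequency domain, a full nonlinear step in time, half again."""
    w = 2 * np.pi * np.fft.fftfreq(u0.size, d=dt)       # angular frequencies
    half_disp = np.exp(1j * beta2 * w**2 * dz / 4)      # exact half linear step
    u = u0.astype(complex)
    for _ in range(n_steps):
        u = np.fft.ifft(half_disp * np.fft.fft(u))
        u *= np.exp(1j * gamma * np.abs(u)**2 * dz)     # exact nonlinear step
        u = np.fft.ifft(half_disp * np.fft.fft(u))
    return u

# The fundamental soliton sech(t) is shape-preserving for beta2=-1, gamma=1,
# so |u| should be unchanged after propagation, up to the splitting error.
t = np.linspace(-20, 20, 1024, endpoint=False)
u0 = 1 / np.cosh(t)
u1 = split_step_nlse(u0, dt=t[1] - t[0], dz=0.01, n_steps=200)
print(np.max(np.abs(np.abs(u1) - np.abs(u0))))  # small splitting error
```

The paper's optimization of spatial versus temporal step sizes corresponds here to choosing dz and dt independently, trading run time against the second-order splitting error.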
Optimization of the dynamic behavior of strongly nonlinear heterogeneous materials
NASA Astrophysics Data System (ADS)
Herbold, Eric B.
New aspects of strongly nonlinear wave and structural phenomena in granular media are developed numerically, theoretically and experimentally. One-dimensional chains of particles and compressed powder composites are the two main types of materials considered here. Typical granular assemblies consist of linearly elastic spheres or layers of masses and effective nonlinear springs in one-dimensional columns for dynamic testing. These materials are highly sensitive to initial and boundary conditions, making them useful for acoustic and shock-mitigating applications. One-dimensional assemblies of spherical particles are examples of strongly nonlinear systems with unique properties. For example, if initially uncompressed, these materials have a sound speed equal to zero (sonic vacuum), supporting strongly nonlinear compression solitary waves with a finite width. Different types of assembled metamaterials will be presented with a discussion of the material's response to static compression. The acoustic diode effect will be presented, which may be useful in shock mitigation applications. Systems with controlled dissipation will also be discussed from an experimental and theoretical standpoint emphasizing the critical viscosity that defines the transition from an oscillatory to monotonous shock profile. The dynamic compression of compressed powder composites may lead to self-organizing mesoscale structures in two and three dimensions. A reactive granular material composed of a compressed mixture of polytetrafluoroethylene (PTFE), tungsten (W) and aluminum (Al) fine-grain powders exhibit this behavior. Quasistatic, Hopkinson bar, and drop-weight experiments show that composite materials with a high porosity and fine metallic particles exhibit a higher strength than less porous mixtures with larger particles, given the same mass fraction of constituents. A two-dimensional Eulerian hydrocode is implemented to investigate the mechanical deformation and failure of the compressed
NASA Technical Reports Server (NTRS)
Zaychik, Kirill B.; Cardullo, Frank M.
2012-01-01
Telban and Cardullo developed and successfully implemented the non-linear optimal motion cueing algorithm at the Visual Motion Simulator (VMS) at the NASA Langley Research Center in 2005. The latest version of the non-linear algorithm performed filtering of motion cues in all degrees-of-freedom except for pitch and roll. This manuscript describes the development and implementation of the non-linear optimal motion cueing algorithm for the pitch and roll degrees of freedom. Presented results indicate improved cues in the specified channels as compared to the original design. To further advance motion cueing in general, this manuscript describes modifications to the existing algorithm, which allow for filtering at the location of the pilot's head as opposed to the centroid of the motion platform. The rationale for such a modification to the cueing algorithm is that the location of the pilot's vestibular system must be taken into account, rather than only the offset of the centroid of the cockpit relative to the center of rotation. Results provided in this report suggest improved performance of the motion cueing algorithm.
Large Scale Turbulent Structures in Supersonic Jets
NASA Technical Reports Server (NTRS)
Rao, Ram Mohan; Lundgren, Thomas S.
1997-01-01
Jet noise is a major concern in the design of commercial aircraft. Studies by various researchers suggest that aerodynamic noise is a major contributor to jet noise. Some of these studies indicate that most of the aerodynamic jet noise due to turbulent mixing occurs when there is a rapid variation in turbulent structure, i.e. rapidly growing or decaying vortices. The objective of this research was to simulate a compressible round jet to study the non-linear evolution of vortices and the resulting acoustic radiations, and in particular to understand the effect of turbulence structure on the noise. An ideal technique to study this problem is Direct Numerical Simulation (DNS), because it provides precise control over the initial and boundary conditions that lead to the turbulent structures studied. It also provides complete 3-dimensional time dependent data. Since the dynamics of a temporally evolving jet are not greatly different from those of a spatially evolving jet, a temporal jet problem was solved, using periodicity in the direction of the jet axis. This enables the application of Fourier spectral methods in the streamwise direction. Physically this means that turbulent structures in the jet are repeated in successive downstream cells instead of being gradually modified downstream into a jet plume. The DNS jet simulation helps us understand the various turbulent scales and mechanisms of turbulence generation in the evolution of a compressible round jet. These accurate flow solutions will be used in future research to estimate near-field acoustic radiation by computing the total outward flux across a surface and determine how it is related to the evolution of the turbulent solutions. Furthermore, these simulations allow us to investigate the sensitivity of acoustic radiations to inlet/boundary conditions, with possible application to active noise suppression. In addition, the data generated can be used to compute various turbulence quantities such as mean
NASA Technical Reports Server (NTRS)
Lan, C. Edward; Ge, Fuying
1989-01-01
Control system design for general nonlinear flight dynamic models is considered through numerical simulation. The design is accomplished through a numerical optimizer coupled with analysis of flight dynamic equations. The general flight dynamic equations are numerically integrated and dynamic characteristics are then identified from the dynamic response. The design variables are determined iteratively by the optimizer to optimize a prescribed objective function which is related to desired dynamic characteristics. The generality of the method allows nonlinear aerodynamic effects and dynamic coupling to be considered in the design process. To demonstrate the method, nonlinear simulation models for F-5A and F-16 configurations are used to design dampers to satisfy specifications on flying qualities and control systems to prevent departure. The results indicate that the present method is simple in formulation and effective in satisfying the design objectives.
A Newton-CG method for large-scale three-dimensional elastic full-waveform seismic inversion
NASA Astrophysics Data System (ADS)
Epanomeritakis, I.; Akçelik, V.; Ghattas, O.; Bielak, J.
2008-06-01
We present a nonlinear optimization method for large-scale 3D elastic full-waveform seismic inversion. The method combines outer Gauss-Newton nonlinear iterations with inner conjugate gradient linear iterations, globalized by an Armijo backtracking line search, solved on a sequence of finer grids and higher frequencies to remain in the vicinity of the global optimum, inexactly terminated to prevent oversolving, preconditioned by L-BFGS/Frankel, regularized by a total variation operator to capture sharp interfaces, finely discretized by finite elements in the Lamé parameter space to provide flexibility and avoid bias, implemented in matrix-free fashion with adjoint-based computation of reduced gradient and reduced Hessian-vector products, checkpointed to avoid full spacetime waveform storage, and partitioned spatially across processors to parallelize the solutions of the forward and adjoint wave equations and the evaluation of gradient-like information. Several numerical examples demonstrate the grid independence of linear and nonlinear iterations, the effectiveness of the preconditioner, the ability to solve inverse problems with up to 17 million inversion parameters on up to 2048 processors, the effectiveness of multiscale continuation in keeping iterates in the basin of attraction of the global minimum, and the ability to fit the observational data while reconstructing the model with reasonable resolution and capturing sharp interfaces.
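The inexact Newton-CG structure described in this abstract (outer Newton steps, inner conjugate gradient iterations terminated early by a forcing sequence, globalized by an Armijo backtracking line search) can be sketched generically. The toy Python version below is an illustration of that scheme only, with function names of my own choosing; it omits the paper's L-BFGS preconditioning, multiscale continuation, and parallel adjoint-based Hessian-vector products, which on a real seismic problem would supply `hessvec`.

```python
import numpy as np

def armijo_backtrack(f, x, p, g, alpha=1.0, beta=0.5, c=1e-4):
    """Shrink alpha until the Armijo sufficient-decrease condition
    f(x + alpha*p) <= f(x) + c*alpha*<g, p> holds."""
    fx, slope = f(x), c * g.dot(p)
    while f(x + alpha * p) > fx + alpha * slope:
        alpha *= beta
    return alpha

def newton_cg(f, grad, hessvec, x, tol=1e-8, max_outer=50):
    """Outer Newton iterations with inner CG solves of H p = -g,
    terminated inexactly to prevent oversolving."""
    for _ in range(max_outer):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        # inner CG on H p = -g, stopped once the residual is small
        # relative to ||g|| (a standard forcing-sequence choice)
        p = np.zeros_like(x)
        r = d = -g
        inner_tol = min(0.5, np.sqrt(gnorm)) * gnorm
        while np.linalg.norm(r) > inner_tol:
            Hd = hessvec(x, d)
            a = r.dot(r) / d.dot(Hd)
            p = p + a * d
            r_new = r - a * Hd
            d = r_new + (r_new.dot(r_new) / r.dot(r)) * d
            r = r_new
        x = x + armijo_backtrack(f, x, p, g) * p
    return x
```

Only gradients and Hessian-vector products are required, never an assembled Hessian, which is what makes the matrix-free, adjoint-based implementation feasible at millions of parameters.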
Optimal control of nonlinear continuous-time systems in strict-feedback form.
Zargarzadeh, Hassan; Dierks, Travis; Jagannathan, Sarangapani
2015-10-01
This paper proposes a novel optimal tracking control scheme for nonlinear continuous-time systems in strict-feedback form with uncertain dynamics. The optimal tracking problem is transformed into an equivalent optimal regulation problem through a feedforward adaptive control input that is generated by modifying the standard backstepping technique. Subsequently, a neural network-based optimal control scheme is introduced to estimate the cost, or value function, over an infinite horizon for the resulting nonlinear continuous-time systems in affine form when the internal dynamics are unknown. The estimated cost function is then used to obtain the optimal feedback control input; therefore, the overall optimal control input for the nonlinear continuous-time system in strict-feedback form includes the feedforward plus the optimal feedback terms. It is shown that the estimated cost function minimizes the Hamilton-Jacobi-Bellman estimation error in a forward-in-time manner without using any value or policy iterations. Finally, optimal output feedback control is introduced through the design of a suitable observer. Lyapunov theory is utilized to show the overall stability of the proposed schemes without requiring an initial admissible controller. Simulation examples are provided to validate the theoretical results. PMID:26111400
PID controller design of nonlinear systems using an improved particle swarm optimization approach
NASA Astrophysics Data System (ADS)
Chang, Wei-Der; Shih, Shun-Peng
2010-11-01
In this paper, an improved particle swarm optimization (PSO) is presented to search for the optimal PID controller gains for a class of nonlinear systems. The proposed algorithm modifies the velocity formula of the standard PSO in order to improve search efficiency. In the improved PSO-based nonlinear PID control system design, the three PID control gains, i.e., the proportional gain Kp, integral gain Ki, and derivative gain Kd, form a parameter vector called a particle. It is the basic component of PSO systems, and many such particles further constitute a population. To derive the optimal PID gains for nonlinear systems, two principal equations, the modified velocity-updating and position-updating equations, are employed to move the positions of all particles in the population. Meanwhile, an objective function defined for the PID controller optimization problem is minimized. To validate the control performance of the proposed method, a typical nonlinear control task, inverted-pendulum tracking control, is illustrated. The results testify that the improved PSO algorithm performs well in nonlinear PID control system design.
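The particle/velocity/position machinery described above can be sketched in a few lines. This is a generic inertia-weight PSO tuning (Kp, Ki, Kd) against a toy first-order plant, not the paper's modified velocity formula or its inverted-pendulum benchmark; the plant, cost function, bounds and coefficient values are all illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def ise_cost(gains, dt=0.01, steps=500):
    """Integral of squared error for a unit-step command applied to a
    toy first-order plant y' = -y + u under PID control (Euler steps).
    Gain sets that destabilize the loop receive a large penalty cost."""
    kp, ki, kd = gains
    y = integ = prev_err = 0.0
    cost = 0.0
    for _ in range(steps):
        err = 1.0 - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += dt * (-y + u)
        prev_err = err
        cost += err * err * dt
        if not np.isfinite(y) or abs(y) > 1e6:
            return 1e9          # unstable loop: penalize and bail out
    return cost

def pso_pid(n_particles=20, iters=40, w=0.7, c1=1.5, c2=1.5):
    """Inertia-weight PSO over the particle vector (Kp, Ki, Kd)."""
    lo = np.zeros(3)
    hi = np.array([20.0, 10.0, 0.5])        # per-gain search bounds
    pos = rng.uniform(lo, hi, (n_particles, 3))
    vel = np.zeros_like(pos)
    pbest, pbest_cost = pos.copy(), np.array([ise_cost(p) for p in pos])
    gbest = pbest[pbest_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 3))
        # velocity update: the equation the paper's improvement modifies
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        cost = np.array([ise_cost(p) for p in pos])
        better = cost < pbest_cost
        pbest[better], pbest_cost[better] = pos[better], cost[better]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest, pbest_cost.min()
```

Swapping in a different velocity formula only requires changing the single marked line, which is why the velocity update is the natural site for PSO variants.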
A monotonic method for nonlinear optimal control problems with concave dependence on the state
NASA Astrophysics Data System (ADS)
Salomon, Julien; Turinici, Gabriel
2011-03-01
Initially introduced in the framework of quantum control, the so-called monotonic algorithms have demonstrated very good numerical performance when dealing with bilinear optimal control problems. This article presents a unified formulation that can be applied to more general nonlinear settings compatible with the hypothesis detailed below. In this framework, we show that the well-posedness of the general algorithm is related to a nonlinear evolution equation. We prove the existence of the solution to the evolution equation and give important properties of the optimal control functional. Finally we show how the algorithm works for selected models from the literature. We also compare the algorithm with the gradient algorithm.
Safeguards instruments for Large-Scale Reprocessing Plants
Hakkila, E.A.; Case, R.S.; Sonnier, C.
1993-06-01
Between 1987 and 1992 a multi-national forum known as LASCAR (Large Scale Reprocessing Plant Safeguards) met to assist the IAEA in development of effective and efficient safeguards for large-scale reprocessing plants. The US provided considerable input for safeguards approaches and instrumentation. This paper reviews and updates instrumentation of importance in measuring plutonium and uranium in these facilities.
The Challenge of Large-Scale Literacy Improvement
ERIC Educational Resources Information Center
Levin, Ben
2010-01-01
This paper discusses the challenge of making large-scale improvements in literacy in schools across an entire education system. Despite growing interest and rhetoric, there are very few examples of sustained, large-scale change efforts around school-age literacy. The paper reviews 2 instances of such efforts, in England and Ontario. After…
NASA Astrophysics Data System (ADS)
Hocker, David; Yan, Julia; Rabitz, Herschel
2016-05-01
Bose-Einstein condensates (BECs) offer the potential to examine quantum behavior at large length and time scales, as well as forming promising candidates for quantum technology applications. Thus, the manipulation of BECs using control fields is a topic of prime interest. We consider BECs in the mean-field model of the Gross-Pitaevskii equation (GPE), which contains linear and nonlinear features, both of which are subject to control. In this work we report successful optimal control simulations of a one-dimensional GPE by modulation of the linear and nonlinear terms to stimulate transitions into excited coherent modes. The linear and nonlinear controls are allowed to freely vary over space and time to seek their optimal forms. The determination of the excited coherent modes targeted for optimization is numerically performed through an adaptive imaginary time propagation method. Numerical simulations are performed for optimal control of mode-to-mode transitions between the ground coherent mode and the excited modes of a BEC trapped in a harmonic well. The results show greater than 99 % success for nearly all trials utilizing reasonable initial guesses for the controls, and analysis of the optimal controls reveals primarily direct transitions between initial and target modes. The success of using solely the nonlinearity term as a control opens up further research toward exploring novel control mechanisms inaccessible to linear Schrödinger-type systems.
NASA Astrophysics Data System (ADS)
Wang, Bin; Chiang, Hsiao-Dong
Many smart-grid applications can be formulated as constrained optimization problems. Because of the discrete controls involved in power systems, these problems are essentially mixed-integer nonlinear programs. In this paper, we review the Trust-Tech-based methodology for solving mixed-integer nonlinear optimization. Specifically, we have developed a two-stage Trust-Tech-based methodology to systematically compute all the local optimal solutions for constrained mixed-integer nonlinear programming (MINLP) problems. In the first stage, for a given MINLP problem this methodology starts with the construction of a new, continuous, unconstrained problem through relaxation and the penalty function method. A corresponding dynamical system is then constructed to search for a set of local optimal solutions for the unconstrained problem. In the second stage, a reduced constrained NLP is defined for each local optimal solution by determining and fixing the values of the integer variables of the MINLP problem. The Trust-Tech-based method is used to compute a set of local optimal solutions for these reduced NLP problems, from which the optimal solution of the original MINLP problem is determined. Numerical simulations of several test problems are provided to illustrate the effectiveness of the proposed method.
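The two-stage idea (relax integrality with a penalty, collect local optima of the continuous problem, then fix the integers and re-solve reduced NLPs) can be illustrated on a toy problem. The sin²-penalty, the objective, and the plain multi-start gradient descent below are my own simplifications standing in for the Trust-Tech dynamical-system search; they show the structure of the two stages, not the actual method.

```python
import numpy as np

def f(x, y):
    # toy MINLP objective: x continuous, y required to be an integer
    return (x - 0.7) ** 2 + (y - 2.3) ** 2

def relaxed(z, mu=5.0):
    # Stage 1 transformation: drop integrality and add a penalty that
    # vanishes exactly at integer y, giving a continuous problem
    x, y = z
    return f(x, y) + mu * np.sin(np.pi * y) ** 2

def grad_descent(F, z0, lr=0.005, steps=4000, h=1e-6):
    """Plain descent with a central-difference gradient (a crude
    stand-in for the Trust-Tech dynamical-system trajectories)."""
    z = np.asarray(z0, float)
    for _ in range(steps):
        g = np.array([(F(z + h * e) - F(z - h * e)) / (2 * h)
                      for e in np.eye(len(z))])
        z = z - lr * g
    return z

# Stage 1: descend from multiple starts to collect local optima
starts = [(sx, sy) for sx in (-1.0, 0.0, 1.0) for sy in (0.5, 1.5, 2.5)]
stage1 = [grad_descent(relaxed, s) for s in starts]

# Stage 2: fix y at the nearby integer, re-solve the reduced NLP in x,
# and keep the best candidate over all local solutions
candidates = []
for x1, y1 in stage1:
    y_fix = round(y1)
    x_opt = grad_descent(lambda z: f(z[0], y_fix), np.array([x1]))[0]
    candidates.append((f(x_opt, y_fix), x_opt, y_fix))
best = min(candidates)
```

Here the penalty wells trap trajectories near different integers, so the multi-start stage naturally produces several stage-2 candidates; the final minimum over candidates plays the role of selecting the MINLP optimum from the reduced NLPs.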
A new approach to the Pontryagin maximum principle for nonlinear fractional optimal control problems
NASA Astrophysics Data System (ADS)
Ali, Hegagi M.; Pereira, Fernando Lobo; Gama, Sílvio M. A.
2016-09-01
In this paper, we discuss a new general formulation of fractional optimal control problems whose performance index is in the fractional integral form and the dynamics are given by a set of fractional differential equations in the Caputo sense. We use a new approach to prove necessary conditions of optimality in the form of the Pontryagin maximum principle for fractional nonlinear optimal control problems. Moreover, a new method based on a generalization of the Mittag-Leffler function is used to solve this class of fractional optimal control problems. A simple example is provided to illustrate the effectiveness of our main result.
Weak lensing of large scale structure in the presence of screening
Tessore, Nicolas; Metcalf, R. Benton; Giocoli, Carlo
2015-10-01
A number of alternatives to general relativity exhibit gravitational screening in the non-linear regime of structure formation. We describe a set of algorithms that can produce weak lensing maps of large scale structure in such theories and can be used to generate mock surveys for cosmological analysis. By analysing a few basic statistics we indicate how these alternatives can be distinguished from general relativity with future weak lensing surveys.
Effects of Design Properties on Parameter Estimation in Large-Scale Assessments
ERIC Educational Resources Information Center
Hecht, Martin; Weirich, Sebastian; Siegle, Thilo; Frey, Andreas
2015-01-01
The selection of an appropriate booklet design is an important element of large-scale assessments of student achievement. Two design properties that are typically optimized are the "balance" with respect to the positions the items are presented and with respect to the mutual occurrence of pairs of items in the same booklet. The purpose…
NASA Technical Reports Server (NTRS)
Stahara, S. S.
1984-01-01
An investigation was carried out to complete the preliminary development of a combined perturbation/optimization procedure and associated computational code for designing optimized blade-to-blade profiles of turbomachinery blades. The overall purpose of the procedures developed is to demonstrate a rapid nonlinear perturbation method for minimizing the computational requirements associated with parametric design studies of turbomachinery flows. The method combines the multiple parameter nonlinear perturbation method, successfully developed in previous phases of this study, with the NASA TSONIC blade-to-blade turbomachinery flow solver and the COPES-CONMIN optimization procedure into a user's code for designing optimized blade-to-blade surface profiles of turbomachinery blades. Results of several design applications and a documented version of the code together with a user's manual are provided.
NASA Technical Reports Server (NTRS)
Li, Xiao-Fan; Finkbeiner, Joshua; Raman, Ganesh; Daniels, Christopher; Steinetz, Bruce M.
2003-01-01
Optimizing resonator shapes for maximizing the ratio of maximum to minimum gas pressure at an end of the resonator is investigated numerically. It is well known that the resonant frequencies and the nonlinear standing waveform in an acoustical resonator strongly depend on the resonator geometry. A quasi-Newton type scheme was used to find optimized axisymmetric resonator shapes achieving the maximum pressure compression ratio with an acceleration of constant amplitude. The acoustical field was solved using a one-dimensional model, and the resonance frequency shift and hysteresis effects were obtained through an automation scheme based on continuation method. Results are presented for optimizing three types of geometry: a cone, a horn-cone, and a half cosine-shape. For each type, different optimized shapes were found when starting with different initial guesses. Further, the one-dimensional model was modified to study the effect of an axisymmetric central blockage on the nonlinear standing wave.
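The acoustic solver is beyond the scope of a sketch, but the "quasi-Newton type scheme" the abstract mentions can be illustrated as a standard BFGS iteration (a generic sketch, not the authors' code): the shape-optimization problem only needs to supply the objective and its gradient, while the inverse-Hessian approximation is built up from gradient differences.

```python
import numpy as np

def bfgs(f, grad, x0, tol=1e-8, max_iter=200):
    """Minimal BFGS quasi-Newton iteration: the inverse-Hessian
    approximation H is updated from gradient differences, so the model
    (here, it would be the 1-D acoustic solver) supplies only f and g."""
    x = np.asarray(x0, float)
    H = np.eye(len(x))               # inverse-Hessian approximation
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                   # quasi-Newton search direction
        # backtracking line search (sufficient-decrease condition)
        t, fx = 1.0, f(x)
        while f(x + t * p) > fx + 1e-4 * t * (g @ p):
            t *= 0.5
        s = t * p
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        sy = s @ y
        if sy > 1e-12:               # curvature guard keeps H positive
            rho = 1.0 / sy
            I = np.eye(len(x))
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x
```

Because different starting shapes fall into different basins of this local method, the abstract's observation that "different optimized shapes were found when starting with different initial guesses" is exactly what one expects from such an iteration.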
Soft-Pion theorems for large scale structure
NASA Astrophysics Data System (ADS)
Horn, Bart; Hui, Lam; Xiao, Xiao
2014-09-01
Consistency relations — which relate an N-point function to a squeezed (N+1)-point function — are useful in large scale structure (LSS) because of their non-perturbative nature: they hold even if the N-point function is deep in the nonlinear regime, and even if they involve astrophysically messy galaxy observables. The non-perturbative nature of the consistency relations is guaranteed by the fact that they are symmetry statements, in which the velocity plays the role of the soft pion. In this paper, we address two issues: (1) how to derive the relations systematically using the residual coordinate freedom in the Newtonian gauge, and relate them to known results in ζ-gauge (often used in studies of inflation); (2) under what conditions the consistency relations are violated. In the non-relativistic limit, our derivation reproduces the Newtonian consistency relation discovered by Kehagias & Riotto and Peloso & Pietroni. More generally, there is an infinite set of consistency relations, as is known in ζ-gauge. There is a one-to-one correspondence between symmetries in the two gauges; in particular, the Newtonian consistency relation follows from the dilation and special conformal symmetries in ζ-gauge. We probe the robustness of the consistency relations by studying models of galaxy dynamics and biasing. We give a systematic list of conditions under which the consistency relations are violated; violations occur if the galaxy bias is non-local in an infrared divergent way. We emphasize the relevance of the adiabatic mode condition, as distinct from symmetry considerations. As a by-product of our investigation, we discuss a simple fluid Lagrangian for LSS.
Ensemble assimilation of global large-scale precipitation
NASA Astrophysics Data System (ADS)
Lien, Guo-Yuan
Many attempts to assimilate precipitation observations in numerical models have been made, but they have resulted in little or no forecast improvement at the end of the precipitation assimilation. This is due to the nonlinearity of the model precipitation parameterization, the non-Gaussianity of precipitation variables, and the large and unknown model and observation errors. In this study, we investigate the assimilation of global large-scale satellite precipitation using the local ensemble transform Kalman filter (LETKF). The LETKF does not require linearization of the model, and it can improve all model variables by giving higher weights in the analysis to ensemble members with better precipitation, so that the model will "remember" the assimilation changes during the forecasts. Gaussian transformations of precipitation are applied to both model background precipitation and observed precipitation, which not only makes the error distributions more Gaussian, but also removes the amplitude-dependent biases between the model and the observations. In addition, several quality control criteria are designed to reject precipitation observations that are not useful for the assimilation. Our ideas are tested in both an idealized system and a realistic system. In the former, observing system simulation experiments (OSSEs) are conducted with a simplified general circulation model; in the latter, the TRMM Multisatellite Precipitation Analysis (TMPA) data are assimilated into a low-resolution version of the NCEP Global Forecasting System (GFS). Positive results are obtained in both systems, showing that both the analyses and the 5-day forecasts are improved by the effective assimilation of precipitation. We also demonstrate how to use the ensemble forecast sensitivity to observations (EFSO) to analyze the effectiveness of precipitation assimilation and provide guidance for determining appropriate quality control. These results are very promising for the direct assimilation of
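The Gaussian transformation of precipitation described above is commonly implemented as an empirical-CDF mapping (Gaussian anamorphosis). The sketch below, which assumes a reference "climatology" sample and uses names of my own choosing, illustrates the idea rather than the study's exact procedure; the bias-removal property comes from passing both model and observed precipitation through the same kind of reference-based mapping.

```python
import numpy as np
from statistics import NormalDist

def gaussian_transform(values, climatology):
    """Map values to standard-normal variates through the empirical CDF
    of a reference sample ('climatology'). Applying the same style of
    mapping to model background and observed precipitation makes their
    error distributions more Gaussian and removes amplitude-dependent
    biases between the two."""
    ref = np.sort(np.asarray(climatology, float))
    n = len(ref)
    out = []
    for v in np.asarray(values, float).ravel():
        r = np.searchsorted(ref, v, side='right')   # rank in reference
        p = (r - 0.5) / n                           # plotting position
        p = min(max(p, 1e-6), 1.0 - 1e-6)           # keep inv_cdf finite
        out.append(NormalDist().inv_cdf(p))
    return np.array(out)
```

The mapping is monotone, so rank information in the precipitation field is preserved while the transformed variable becomes approximately standard normal, which is what a Kalman-type update assumes.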
Testing gravity using large-scale redshift-space distortions
NASA Astrophysics Data System (ADS)
Raccanelli, Alvise; Bertacca, Daniele; Pietrobon, Davide; Schmidt, Fabian; Samushia, Lado; Bartolo, Nicola; Doré, Olivier; Matarrese, Sabino; Percival, Will J.
2013-11-01
We use luminous red galaxies from the Sloan Digital Sky Survey (SDSS) II to test the cosmological structure growth in two alternatives to the standard Λ cold dark matter (ΛCDM)+general relativity (GR) cosmological model. We compare observed three-dimensional clustering in SDSS Data Release 7 (DR7) with theoretical predictions for the standard vanilla ΛCDM+GR model, unified dark matter (UDM) cosmologies and the normal branch Dvali-Gabadadze-Porrati (nDGP) model. In computing the expected correlations in UDM cosmologies, we derive a parametrized formula for the growth factor in these models. For our analysis we apply the methodology tested in Raccanelli et al. and use the measurements of Samushia et al. that account for survey geometry, non-linear and wide-angle effects and the distribution of pair orientation. We show that the estimate of the growth rate is potentially degenerate with wide-angle effects, meaning that extremely accurate measurements of the growth rate on large scales will need to take such effects into account. We use measurements of the zeroth and second-order moments of the correlation function from SDSS DR7 data and the Large Suite of Dark Matter Simulations (LasDamas), and perform a likelihood analysis to constrain the parameters of the models. Using information on the clustering up to r_max = 120 h⁻¹ Mpc, and after marginalizing over the bias, we find, for UDM models, a speed of sound c∞ ≤ 6.1 × 10⁻⁴, and, for the nDGP model, a cross-over scale rc ≥ 340 Mpc, at 95 per cent confidence level.
Applications of large-scale computation to particle accelerators
Herrmannsfeldt, W.B.
1991-05-01
The rapid growth in the power of large-scale computers has had a revolutionary effect on the study of charged-particle accelerators that is similar to the impact of smaller computers on everyday life. Before an accelerator is built, it is now the absolute rule to simulate every component and subsystem by computer to establish modes of operation and tolerances. We will bypass the important and fruitful areas of control and operation, and consider only applications to design and diagnostic interpretation. Applications of computers can be divided into separate categories including: component design, system design, stability studies, cost optimization, and operating condition simulation. For the purposes of this report, we will choose a few examples from the above categories to illustrate the methods used, and discuss the significance of the work to the project. We also briefly discuss the accelerator project itself. The examples that will be discussed are: the design of accelerator structures for electron-positron linear colliders and circular colliding-beam systems; simulation of the wake fields from multibunch electron beams for linear colliders; and particle-in-cell simulation of space-charge-dominated beams for an experimental linear induction accelerator for Heavy Ion Fusion.
Large-Scale NASA Science Applications on the Columbia Supercluster
NASA Technical Reports Server (NTRS)
Brooks, Walter
2005-01-01
Columbia, NASA's newest 61 teraflops supercomputer that became operational late last year, is a highly integrated Altix cluster of 10,240 processors, and was named to honor the crew of the Space Shuttle lost in early 2003. Constructed in just four months, Columbia increased NASA's computing capability ten-fold, and revitalized the Agency's high-end computing efforts. Significant cutting-edge science and engineering simulations in the areas of space and Earth sciences, as well as aeronautics and space operations, are already occurring on this largest operational Linux supercomputer, demonstrating its capacity and capability to accelerate NASA's space exploration vision. The presentation will describe how an integrated environment consisting not only of next-generation systems, but also modeling and simulation, high-speed networking, parallel performance optimization, and advanced data analysis and visualization, is being used to reduce design cycle time, accelerate scientific discovery, conduct parametric analysis of multiple scenarios, and enhance safety during the life cycle of NASA missions. The talk will conclude by discussing how NAS partnered with various NASA centers, other government agencies, computer industry, and academia, to create a national resource in large-scale modeling and simulation.
Scalable NIC-based reduction on large-scale clusters
Moody, A.; Fernández, J. C.; Petrini, F.; Panda, Dhabaleswar K.
2003-01-01
Many parallel algorithms require efficient support for reduction collectives. Over the years, researchers have developed optimal reduction algorithms by taking into account system size, data size, and the complexity of reduction operations. However, all of these algorithms have assumed that the reduction processing takes place on the host CPU. Modern Network Interface Cards (NICs) sport programmable processors with substantial memory and thus introduce a fresh variable into the equation. This raises the following interesting challenge: Can we take advantage of modern NICs to implement fast reduction operations? In this paper, we take on this challenge in the context of large-scale clusters. Through experiments on the 960-node, 1920-processor ASCI Linux Cluster (ALC) located at the Lawrence Livermore National Laboratory, we show that NIC-based reductions indeed perform with reduced latency and improved consistency over host-based algorithms for the common case, and that these benefits scale as the system grows. In the largest configuration tested, 1812 processors, our NIC-based algorithm can sum a single-element vector in 73 μs with 32-bit integers and in 118 μs with 64-bit floating-point numbers. These results represent an improvement, respectively, of 121% and 39% with respect to the production-level MPI library.
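The host-CPU vs NIC distinction above is about where each combine executes, not about the combining schedule itself. A recursive-halving tree reduction, the kind of round structure such a collective follows, can be sketched sequentially (an illustrative sketch of the schedule, not the paper's NIC firmware):

```python
def tree_reduce(values, op):
    """Recursive-halving tree reduction over a list: node i absorbs node
    i+stride in each round, finishing in ceil(log2 n) rounds regardless
    of whether each combine runs on the host CPU or the NIC processor."""
    vals = list(values)
    n, stride, rounds = len(vals), 1, 0
    while stride < n:
        # all (i, i+stride) pairs combine in parallel within one round
        for i in range(0, n - stride, 2 * stride):
            vals[i] = op(vals[i], vals[i + stride])
        stride *= 2
        rounds += 1
    return vals[0], rounds
```

The latency benefit the paper measures comes from running each round's combine on the NIC, avoiding a host-memory round trip per message, while the logarithmic round count is unchanged.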
[A nonlinear multi-compartment lung model for optimization of breathing airflow pattern].
Cai, Yongming; Gu, Lingyan; Chen, Fuhua
2015-02-01
It is difficult to select the appropriate ventilation mode in clinical mechanical ventilation. This paper presents a nonlinear multi-compartment lung model to address this difficulty. The purpose is to optimize respiratory airflow patterns: minimizing the work of breathing and lung volume acceleration in the inspiratory phase, and the elastic potential energy and rapidity of airflow rate changes in the expiratory phase. A sigmoidal function is used to smooth the nonlinear respiratory equations. The equations are formulated as a nonlinear boundary value problem (BVP), which is then solved with a gradient descent method. Experimental results showed that the optimized lung volume and airflow rate had good sensitivity and convergence speed. The results provide a theoretical basis for the development of multivariable controllers for monitoring critically ill mechanically ventilated patients. PMID:25997262
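The sigmoid-smoothing step can be illustrated with a toy version of the idea: replacing the non-differentiable inspiration/expiration switch in the lung equations with a logistic blend, so gradient-based solvers can be applied. The resistance values and steepness k below are hypothetical, not the paper's parameters:

```python
import math

def sigmoid(x, k=50.0):
    # Logistic weight; larger k gives a sharper, more step-like transition.
    return 1.0 / (1.0 + math.exp(-k * x))

def smooth_switch(flow, r_insp=2.0, r_exp=3.0):
    """Blend inspiratory/expiratory airway resistance smoothly around
    flow = 0, replacing a non-differentiable if/else with a sigmoid."""
    w = sigmoid(flow)
    return w * r_insp + (1.0 - w) * r_exp
```

Far from the switch the blend reproduces the piecewise values; near flow = 0 it interpolates smoothly, keeping the BVP differentiable.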
A hybrid symbolic/finite-element algorithm for solving nonlinear optimal control problems
NASA Technical Reports Server (NTRS)
Bless, Robert R.; Hodges, Dewey H.
1991-01-01
The general code described is capable of solving difficult nonlinear optimal control problems by using finite elements and a symbolic manipulator. Quick and accurate solutions are obtained with a minimum of user interaction. Since no user programming is required for most problems, there are tremendous savings to be gained in terms of time and money.
Application of multi-objective nonlinear optimization technique for coordinated ramp-metering
Haj Salem, Habib; Farhi, Nadir; Lebacque, Jean Patrick E-mail: nadir.frahi@ifsttar.fr
2015-03-10
This paper aims at developing a multi-objective nonlinear optimization algorithm applied to coordinated motorway ramp metering. The multi-objective function includes two components: traffic and safety. Off-line simulation studies were performed on A4 France Motorway including 4 on-ramps.
Nonlinear stability in reaction-diffusion systems via optimal Lyapunov functions
NASA Astrophysics Data System (ADS)
Lombardo, S.; Mulone, G.; Trovato, M.
2008-06-01
We define optimal Lyapunov functions to study nonlinear stability of constant solutions to reaction-diffusion systems. A computable and finite radius of attraction for the initial data is obtained. Applications are given to the well-known Brusselator model and a three-species model for the spatial spread of rabies among foxes.
Distribution probability of large-scale landslides in central Nepal
NASA Astrophysics Data System (ADS)
Timilsina, Manita; Bhandary, Netra P.; Dahal, Ranjan Kumar; Yatabe, Ryuichi
2014-12-01
Large-scale landslides in the Himalaya are defined as huge, deep-seated landslide masses that occurred in the geological past. They are widely distributed in the Nepal Himalaya. The steep topography and high local relief provide high potential for such failures, whereas the dynamic geology and adverse climatic conditions play a key role in the occurrence and reactivation of such landslides. The major geoscientific problems related to such large-scale landslides are 1) difficulties in their identification and delineation, 2) sources of small-scale failures, and 3) reactivation. Only a few scientific publications have been published concerning large-scale landslides in Nepal. In this context, the identification and quantification of large-scale landslides and their potential distribution are crucial. Therefore, this study explores the distribution of large-scale landslides in the Lesser Himalaya. It provides simple guidelines to identify large-scale landslides based on their typical characteristics and using a 3D schematic diagram. Based on the spatial distribution of landslides, geomorphological/geological parameters and logistic regression, an equation of large-scale landslide distribution is also derived. The equation is validated by applying it to another area. For the validation area, the area under the receiver operating characteristic curve of the landslide distribution probability is 0.699, and the distribution probability value could explain > 65% of existing landslides. Therefore, the regression equation can be applied to areas of the Lesser Himalaya of central Nepal with similar geological and geomorphological conditions.
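The derived distribution equation is of the standard logistic-regression form; a sketch of how such an equation turns geomorphological/geological predictors into a distribution probability (the predictors and coefficients below are hypothetical placeholders, not the fitted values from the study):

```python
import math

def landslide_probability(features, coeffs, intercept):
    """Logistic regression: P = 1 / (1 + exp(-z)), where z is a linear
    combination of predictors such as slope and local relief."""
    z = intercept + sum(c * f for c, f in zip(coeffs, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical example: two predictors (slope in degrees, relief in km).
p = landslide_probability([35.0, 1.2], coeffs=[0.08, 1.5], intercept=-4.0)
```

Validation then thresholds P over a map grid and compares the predicted cells against mapped landslides, which is what the reported area under the ROC curve summarizes.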
Large scale stochastic spatio-temporal modelling with PCRaster
NASA Astrophysics Data System (ADS)
Karssenberg, Derek; Drost, Niels; Schmitz, Oliver; de Jong, Kor; Bierkens, Marc F. P.
2013-04-01
software from the eScience Technology Platform (eSTeP), developed at the Netherlands eScience Center. This will allow us to scale up to hundreds of machines, with thousands of compute cores. A key requirement is not to change the user experience of the software. PCRaster operations and the use of the Python framework classes should work in a similar manner on machines ranging from a laptop to a supercomputer. This enables a seamless transfer of models from small machines, where model development is done, to large machines used for large-scale model runs. Domain specialists from a large range of disciplines, including hydrology, ecology, sedimentology, and land use change studies, currently use the PCRaster Python software within research projects. Applications include global scale hydrological modelling and error propagation in large-scale land use change models. The software runs on MS Windows, Linux operating systems, and OS X.
Local and Regional Impacts of Large Scale Wind Energy Deployment
NASA Astrophysics Data System (ADS)
Michalakes, J.; Hammond, S.; Lundquist, J. K.; Moriarty, P.; Robinson, M.
2010-12-01
resources and upscaling large scale wind farm impact on local and regional climate. It will bridge localized and larger scale interactions of renewable energy generation with energy resource and grid management system control. By 2030, when 20 percent wind energy penetration is planned and exascale computing resources have become commonplace, we envision such a system spanning the entire mesoscale to sub-millimeter range of scales to provide a real-time computational and systems control capability to optimize renewable based generating and grid distribution for efficiency and with minimizing environmental impact.
State of the Art in Large-Scale Soil Moisture Monitoring
NASA Technical Reports Server (NTRS)
Ochsner, Tyson E.; Cosh, Michael Harold; Cuenca, Richard H.; Dorigo, Wouter; Draper, Clara S.; Hagimoto, Yutaka; Kerr, Yan H.; Larson, Kristine M.; Njoku, Eni Gerald; Small, Eric E.; Zreda, Marek G.
2013-01-01
Soil moisture is an essential climate variable influencing land atmosphere interactions, an essential hydrologic variable impacting rainfall runoff processes, an essential ecological variable regulating net ecosystem exchange, and an essential agricultural variable constraining food security. Large-scale soil moisture monitoring has advanced in recent years creating opportunities to transform scientific understanding of soil moisture and related processes. These advances are being driven by researchers from a broad range of disciplines, but this complicates collaboration and communication. For some applications, the science required to utilize large-scale soil moisture data is poorly developed. In this review, we describe the state of the art in large-scale soil moisture monitoring and identify some critical needs for research to optimize the use of increasingly available soil moisture data. We review representative examples of 1) emerging in situ and proximal sensing techniques, 2) dedicated soil moisture remote sensing missions, 3) soil moisture monitoring networks, and 4) applications of large-scale soil moisture measurements. Significant near-term progress seems possible in the use of large-scale soil moisture data for drought monitoring. Assimilation of soil moisture data for meteorological or hydrologic forecasting also shows promise, but significant challenges related to model structures and model errors remain. Little progress has been made yet in the use of large-scale soil moisture observations within the context of ecological or agricultural modeling. Opportunities abound to advance the science and practice of large-scale soil moisture monitoring for the sake of improved Earth system monitoring, modeling, and forecasting.
NASA Astrophysics Data System (ADS)
Siade, A. J.; Prommer, H.; Welter, D.
2014-12-01
Groundwater management and remediation requires the implementation of numerical models in order to evaluate the potential anthropogenic impacts on aquifer systems. In many situations, the numerical model must be able to simulate not only groundwater flow and transport, but also geochemical and biological processes. Each process being simulated carries with it a set of parameters that must be identified, along with differing potential sources of model-structure error. Various data types are often collected in the field and then used to calibrate the numerical model; however, these data types can represent very different processes and can subsequently be sensitive to the model parameters in extremely complex ways. Therefore, developing an appropriate weighting strategy to address the contributions of each data type to the overall least-squares objective function is not straightforward. This is further compounded by the presence of potential sources of model-structure errors that manifest themselves differently for each observation data type. Finally, reactive transport models are highly nonlinear, which can lead to convergence failure for algorithms operating on the assumption of local linearity. In this study, we propose a variation of the popular particle swarm optimization algorithm to address trade-offs associated with the calibration of one data type over another. This method removes the need to specify weights between observation groups and instead produces a multi-dimensional Pareto front that illustrates the trade-offs between data types. We use the PEST++ run manager, along with the standard PEST input/output structure, to implement parallel programming across multiple desktop computers using TCP/IP communications. This allows for very large swarms of particles without the need of a supercomputing facility. The method was applied to a case study in which modeling was used to gain insight into the mobilization of arsenic at a deep-well injection site.
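The core particle swarm update behind such methods fits in a few lines. This is a minimal single-objective PSO on a toy function, not the authors' multi-objective, PEST++-parallelized variant; the inertia and acceleration coefficients are conventional defaults, not values from the study:

```python
import random

def pso(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0), seed=1):
    """Minimal particle swarm optimizer: each particle is pulled toward
    its personal best and the swarm's global best."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g_idx = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g_idx][:], pbest_f[g_idx]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            f_i = f(pos[i])
            if f_i < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f_i
                if f_i < gbest_f:
                    gbest, gbest_f = pos[i][:], f_i
    return gbest, gbest_f

# Toy usage: minimize the 3-D sphere function.
best, best_val = pso(lambda x: sum(xi * xi for xi in x), dim=3)
```

Because each particle's objective evaluation is independent within an iteration, the evaluations parallelize naturally across machines, which is what a run manager like PEST++ exploits.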
Method for nonlinear optimization for gas tagging and other systems
Chen, T.; Gross, K.C.; Wegerich, S.
1998-01-06
A method and system are disclosed for providing nuclear fuel rods with a configuration of isotopic gas tags. The method includes selecting a true location for a first gas tag node; selecting initial locations for the remaining n-1 nodes using target gas tag compositions; generating a set of random gene pools with L nodes; and applying a Hopfield network to compute an energy, or cost, for each of the L gene pools, using selected constraints to establish minimum energy states that identify optimal gas tag nodes. Each energy is compared to a convergence threshold; once a gas tag node is identified, the procedure continues with the next node until all remaining n nodes have been established. 6 figs.
Method for nonlinear optimization for gas tagging and other systems
Chen, Ting; Gross, Kenny C.; Wegerich, Stephan
1998-01-01
A method and system for providing nuclear fuel rods with a configuration of isotopic gas tags. The method includes selecting a true location for a first gas tag node; selecting initial locations for the remaining n-1 nodes using target gas tag compositions; generating a set of random gene pools with L nodes; and applying a Hopfield network to compute an energy, or cost, for each of the L gene pools, using selected constraints to establish minimum energy states that identify optimal gas tag nodes. Each energy is compared to a convergence threshold; once a gas tag node is identified, the procedure continues with the next node until all remaining n nodes have been established.
Luo, Biao; Wu, Huai-Ning; Li, Han-Xiong
2015-04-01
Highly dissipative nonlinear partial differential equations (PDEs) are widely employed to describe the system dynamics of industrial spatially distributed processes (SDPs). In this paper, we consider the optimal control problem of the general highly dissipative SDPs, and propose an adaptive optimal control approach based on neuro-dynamic programming (NDP). Initially, Karhunen-Loève decomposition is employed to compute empirical eigenfunctions (EEFs) of the SDP based on the method of snapshots. These EEFs together with singular perturbation technique are then used to obtain a finite-dimensional slow subsystem of ordinary differential equations that accurately describes the dominant dynamics of the PDE system. Subsequently, the optimal control problem is reformulated on the basis of the slow subsystem, which is further converted to solve a Hamilton-Jacobi-Bellman (HJB) equation. HJB equation is a nonlinear PDE that has proven to be impossible to solve analytically. Thus, an adaptive optimal control method is developed via NDP that solves the HJB equation online using neural network (NN) for approximating the value function; and an online NN weight tuning law is proposed without requiring an initial stabilizing control policy. Moreover, by involving the NN estimation error, we prove that the original closed-loop PDE system with the adaptive optimal control policy is semiglobally uniformly ultimately bounded. Finally, the developed method is tested on a nonlinear diffusion-convection-reaction process and applied to a temperature cooling fin of high-speed aerospace vehicle, and the achieved results show its effectiveness.
NASA Technical Reports Server (NTRS)
Carter, Richard G.
1989-01-01
For optimization problems associated with engineering design, parameter estimation, image reconstruction, and other optimization/simulation applications, low accuracy function and gradient values are frequently much less expensive to obtain than high accuracy values. Here, researchers investigate the computational performance of trust region methods for nonlinear optimization when high accuracy evaluations are unavailable or prohibitively expensive, and confirm earlier theoretical predictions that the algorithm converges even with relative gradient errors of 0.5 or more. The proper choice of the amount of accuracy to use in function and gradient evaluations can result in orders-of-magnitude savings in computational cost.
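The reason trust region methods tolerate inexact gradients is that step acceptance depends on the ratio of actual to predicted reduction, not on gradient accuracy. A one-dimensional sketch with a steepest-descent (Cauchy) step to the trust region boundary; the 0.25/0.75 update thresholds are conventional textbook choices, not the paper's:

```python
def trust_region_minimize(f, grad, x0, delta0=1.0, tol=1e-8, max_iter=200):
    """1-D trust region iteration: take a steepest-descent step of length
    delta, then grow or shrink delta based on how well the linear model
    predicted the actual decrease. grad may be an inexact approximation."""
    x, delta = x0, delta0
    for _ in range(max_iter):
        g = grad(x)
        if abs(g) < tol:
            break
        step = -delta if g > 0 else delta   # Cauchy step to the boundary
        predicted = -(g * step)             # model decrease (always > 0)
        actual = f(x) - f(x + step)
        rho = actual / predicted
        if rho > 0.75:
            delta *= 2.0                    # model trustworthy: expand
        elif rho < 0.25:
            delta *= 0.25                   # model poor: shrink
        if rho > 0:
            x = x + step                    # accept any real decrease
    return x
```

Even a gradient that is off by tens of percent still points downhill here, so the ratio test keeps the iteration convergent, which is the behavior the paper quantifies.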
NASA Astrophysics Data System (ADS)
Yang, Xiong; Liu, Derong; Wang, Ding
2014-03-01
In this paper, an adaptive reinforcement learning-based solution is developed for the infinite-horizon optimal control problem of constrained-input continuous-time nonlinear systems in the presence of nonlinearities with unknown structures. Two different types of neural networks (NNs) are employed to approximate the Hamilton-Jacobi-Bellman equation. That is, a recurrent NN is constructed to identify the unknown dynamical system, and two feedforward NNs are used as the actor and the critic to approximate the optimal control and the optimal cost, respectively. Based on this framework, the action NN and the critic NN are tuned simultaneously, without the requirement for the knowledge of system drift dynamics. Moreover, by using Lyapunov's direct method, the weights of the action NN and the critic NN are guaranteed to be uniformly ultimately bounded, while keeping the closed-loop system stable. To demonstrate the effectiveness of the present approach, simulation results are illustrated.
NASA Astrophysics Data System (ADS)
Liu, Derong; Huang, Yuzhu; Wang, Ding; Wei, Qinglai
2013-09-01
In this paper, an observer-based optimal control scheme is developed for unknown nonlinear systems using adaptive dynamic programming (ADP) algorithm. First, a neural-network (NN) observer is designed to estimate system states. Then, based on the observed states, a neuro-controller is constructed via ADP method to obtain the optimal control. In this design, two NN structures are used: a three-layer NN is used to construct the observer which can be applied to systems with higher degrees of nonlinearity and without a priori knowledge of system dynamics, and a critic NN is employed to approximate the value function. The optimal control law is computed using the critic NN and the observer NN. Uniform ultimate boundedness of the closed-loop system is guaranteed. The actor, critic, and observer structures are all implemented in real-time, continuously and simultaneously. Finally, simulation results are presented to demonstrate the effectiveness of the proposed control scheme.
Polymer Physics of the Large-Scale Structure of Chromatin.
Bianco, Simona; Chiariello, Andrea Maria; Annunziatella, Carlo; Esposito, Andrea; Nicodemi, Mario
2016-01-01
We summarize the picture emerging from recently proposed models of polymer physics describing the general features of chromatin large scale spatial architecture, as revealed by microscopy and Hi-C experiments. PMID:27659986
Large-scale anisotropy of the cosmic microwave background radiation
NASA Technical Reports Server (NTRS)
Silk, J.; Wilson, M. L.
1981-01-01
Inhomogeneities in the large-scale distribution of matter inevitably lead to the generation of large-scale anisotropy in the cosmic background radiation. The dipole, quadrupole, and higher order fluctuations expected in an Einstein-de Sitter cosmological model have been computed. The dipole and quadrupole anisotropies are comparable to the measured values, and impose important constraints on the allowable spectrum of large-scale matter density fluctuations. A significant dipole anisotropy is generated by the matter distribution on scales greater than approximately 100 Mpc. The large-scale anisotropy is insensitive to the ionization history of the universe since decoupling, and cannot easily be reconciled with a galaxy formation theory that is based on primordial adiabatic density fluctuations.
Needs, opportunities, and options for large scale systems research
Thompson, G.L.
1984-10-01
The Office of Energy Research was recently asked to perform a study of Large Scale Systems in order to facilitate the development of a true large systems theory. It was decided to ask experts in the fields of electrical engineering, chemical engineering and manufacturing/operations research for their ideas concerning large scale systems research. The author was asked to distribute a questionnaire among these experts to find out their opinions concerning recent accomplishments and future research directions in large scale systems research. He was also requested to convene a conference which included three experts in each area as panel members to discuss the general area of large scale systems research. The conference was held on March 26--27, 1984 in Pittsburgh with nine panel members, and 15 other attendees. The present report is a summary of the ideas presented and the recommendations proposed by the attendees.
Large scale anomalies in the microwave background: causation and correlation.
Aslanyan, Grigor; Easther, Richard
2013-12-27
Most treatments of large scale anomalies in the microwave sky are a posteriori, with unquantified look-elsewhere effects. We contrast these with physical models of specific inhomogeneities in the early Universe which can generate these apparent anomalies. Physical models predict correlations between candidate anomalies and the corresponding signals in polarization and large scale structure, reducing the impact of cosmic variance. We compute the apparent spatial curvature associated with large-scale inhomogeneities and show that it is typically small, allowing for a self-consistent analysis. As an illustrative example we show that a single large plane wave inhomogeneity can contribute to low-l mode alignment and odd-even asymmetry in the power spectra and the best-fit model accounts for a significant part of the claimed odd-even asymmetry. We argue that this approach can be generalized to provide a more quantitative assessment of potential large scale anomalies in the Universe.
Evolution of optimal Hill coefficients in nonlinear public goods games.
Archetti, Marco; Scheuring, István
2016-10-01
In evolutionary game theory, the effect of public goods like diffusible molecules has been modelled using linear, concave, sigmoid and step functions. The observation that biological systems often have sigmoid input-output functions, as described by the Hill equation, suggests that a sigmoid function is more realistic. The Michaelis-Menten model of enzyme kinetics, however, predicts a concave function, and while mechanistic explanations of sigmoid kinetics exist, we lack an adaptive explanation: what is the evolutionary advantage of a sigmoid benefit function? We analyse public goods games in which the shape of the benefit function can evolve, in order to determine the optimal and evolutionarily stable Hill coefficients. We find that, while the dynamics depends on whether output is controlled at the level of the individual or the population, intermediate or high Hill coefficients often evolve, leading to sigmoid input-output functions that for some parameters are so steep as to resemble a step function (an on-off switch). Our results suggest that, even when the shape of the benefit function is unknown, biological public goods should be modelled using a sigmoid or step function rather than a linear or concave function.
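The family of benefit shapes discussed, from concave Michaelis-Menten curves to near-step switches, is captured by a single Hill equation; a small sketch (the k and n values in the usage line are illustrative, not parameters from the study):

```python
def hill(x, k=1.0, n=4.0):
    """Hill input-output function with half-saturation constant k and
    Hill coefficient n. n = 1 recovers the concave Michaelis-Menten
    curve; large n approaches a step function (an on-off switch)."""
    return x ** n / (k ** n + x ** n)

# Illustrative values: concave (n = 1) vs. switch-like (n = 20) responses.
concave = hill(0.5, n=1.0)
switchy = hill(0.5, n=20.0)
```

Evolving the benefit function's shape, as in the paper, amounts to letting n change under selection and asking which coefficient is evolutionarily stable.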
Grid Based Nonlinear Filtering Revisited: Recursive Estimation & Asymptotic Optimality
NASA Astrophysics Data System (ADS)
Kalogerias, Dionysios S.; Petropulu, Athina P.
2016-08-01
We revisit the development of grid based recursive approximate filtering of general Markov processes in discrete time, partially observed in conditionally Gaussian noise. The grid based filters considered rely on two types of state quantization: the Markovian type and the marginal type. We propose a set of novel, relaxed sufficient conditions, ensuring strong and fully characterized pathwise convergence of these filters to the respective MMSE state estimator. In particular, for marginal state quantizations, we introduce the notion of conditional regularity of stochastic kernels, which, to the best of our knowledge, constitutes the most relaxed condition proposed, under which asymptotic optimality of the respective grid based filters is guaranteed. Further, we extend our convergence results, including filtering of bounded and continuous functionals of the state, as well as recursive approximate state prediction. For both Markovian and marginal quantizations, the whole development of the respective grid based filters relies more on linear-algebraic techniques and less on measure theoretic arguments, making the presentation considerably shorter and technically simpler.
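One recursion of a grid based (point-mass) filter reduces to a predict-correct update over the grid cells. This is the generic discrete filter recursion, not the paper's Markovian/marginal constructions; the transition matrix and likelihood values in the usage lines are illustrative:

```python
def grid_filter_step(prior, trans, likelihood):
    """Predict with the quantized transition kernel, correct with the
    observation likelihood evaluated at each grid point, renormalize."""
    n = len(prior)
    predicted = [sum(trans[i][j] * prior[i] for i in range(n))
                 for j in range(n)]
    posterior = [predicted[j] * likelihood[j] for j in range(n)]
    z = sum(posterior)
    return [p / z for p in posterior]

# Two-cell example: sticky dynamics, observation favouring cell 1.
post = grid_filter_step([0.5, 0.5],
                        [[0.9, 0.1], [0.1, 0.9]],
                        [0.2, 0.8])
```

The convergence questions the paper addresses concern what happens to this recursion as the grid is refined, i.e. whether the point-mass posterior approaches the true MMSE estimator.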
The three-point function as a probe of models for large-scale structure
NASA Technical Reports Server (NTRS)
Frieman, Joshua A.; Gaztanaga, Enrique
1993-01-01
The consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime are analyzed. Several variations of the standard Omega = 1 cold dark matter model with scale-invariant primordial perturbations were recently introduced to obtain more power on large scales, R(sub p) is approximately 20 h(sup -1) Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower, et al. It is shown that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale-dependence leads to a dramatic decrease of the hierarchical amplitudes Q(sub J) at large scales, r is approximately greater than R(sub p). Current observational constraints on the three-point amplitudes Q(sub 3) and S(sub 3) can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.
Disentangling the dynamic core: a research program for a neurodynamics at the large-scale.
Le Van Quyen, Michel
2003-01-01
My purpose in this paper is to sketch a research direction based on Francisco Varela's pioneering work in neurodynamics (see also Rudrauf et al. 2003, in this issue). Very early on he argued that the internal coherence of every mental-cognitive state lies in the global self-organization of the brain activities at the large-scale, constituting a fundamental pole of integration called here a "dynamic core". Recent neuroimaging evidence appears to broadly support this hypothesis and suggests that a global brain dynamics emerges at the large scale level from the cooperative interactions among widely distributed neuronal populations. Despite a growing body of evidence supporting this view, our understanding of these large-scale brain processes remains hampered by the lack of a theoretical language for expressing these complex behaviors in dynamical terms. In this paper, I propose a rough cartography of a comprehensive approach that offers a conceptual and mathematical framework to analyze spatio-temporal large-scale brain phenomena. I emphasize how these nonlinear methods can be applied, what property might be inferred from neuronal signals, and where one might productively proceed for the future. This paper is dedicated, with respect and affection, to the memory of Francisco Varela.
The three-point function as a probe of models for large-scale structure
Frieman, J.A.; Gaztanaga, E.
1993-06-19
The authors analyze the consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime. Several variations of the standard Omega = 1 cold dark matter model with scale-invariant primordial perturbations have recently been introduced to obtain more power on large scales, R_p ~ 20 h^-1 Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower, et al. The authors show that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale-dependence leads to a dramatic decrease of the hierarchical amplitudes Q_J at large scales, r > R_p. Current observational constraints on the three-point amplitudes Q_3 and S_3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.
Guevara, V R
2004-02-01
A nonlinear programming optimization model was developed to maximize margin over feed cost in broiler feed formulation and is described in this paper. The model identifies the optimal feed mix that maximizes profit margin. The optimum metabolizable energy level and performance were found by using Excel Solver nonlinear programming. Data from an energy density study with broilers were fitted to quadratic equations to express weight gain, feed consumption, and the objective function, income over feed cost, in terms of energy density. Nutrient:energy ratio constraints were transformed into equivalent linear constraints. National Research Council nutrient requirements and feeding program were used for examining changes in variables. The nonlinear programming feed formulation method was used to illustrate the effects of changes in different variables on the optimum energy density, performance, and profitability and was compared with conventional linear programming. To demonstrate the capabilities of the model, I determined the impact of variation in prices. Prices for broiler, corn, fish meal, and soybean meal were increased and decreased by 25%. Formulations were identical in all other respects. Energy density, margin, and diet cost changed compared with conventional linear programming formulation. This study suggests that nonlinear programming can be more useful than conventional linear programming to optimize performance response to energy density in broiler feed formulation because an energy level does not need to be set.
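With quadratic response fits, the optimum energy density reduces to the vertex of the fitted margin curve. A sketch under hypothetical coefficients (placeholders, not the study's fitted broiler data or Excel Solver setup):

```python
def optimal_energy_density(a, b, c):
    """Maximize a fitted quadratic margin m(E) = a + b*E + c*E**2.
    For a concave fit (c < 0) the optimum is the vertex E* = -b / (2c)."""
    assert c < 0, "a maximum requires a concave fit (c < 0)"
    e_star = -b / (2.0 * c)
    return e_star, a + b * e_star + c * e_star ** 2

# Hypothetical fit: margin ($/bird) vs. energy density E (Mcal ME/kg).
e_opt, margin = optimal_energy_density(a=-10.0, b=8.0, c=-1.25)
```

This is why the energy level need not be fixed in advance: the solver (here, the closed-form vertex) locates it from the fitted response, with the nutrient:energy ratio constraints handled separately as linear constraints.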
NASA Astrophysics Data System (ADS)
Wöhling, T.; Geiges, A.; Gosses, M.; Nowak, W.
2014-12-01
Data acquisition in complex environmental systems is typically expensive. Therefore, experimental designs should be optimized such that the most can be learned about the system at the least cost. In the past, optimal design (OD) analyses were mainly restricted to linear or linearized problems and methods. Nonlinear OD methods offer more efficient data collection strategies, because they can better handle the non-linearity exhibited by most coupled environmental systems. However, their much higher computational demand restricts their applicability to models with comparatively low run-times. Our goal is to compare the trade-off between computational efficiency and obtainable design quality between linear and nonlinear OD methods. In our study, a steady-state model for a section of the river Steinlach (South Germany) was set up and calibrated to measured groundwater head data and estimated groundwater exchange fluxes. The model involves a pilot-point parameterization scheme for hydraulic conductivity and six zones with uncertain river bed conductivities. In the linear OD approach, the initial predictive uncertainty of groundwater exchange fluxes and mean travel times is estimated using the PREDUNC utility (Moore and Doherty 2005) of PEST. The parameter calibration was performed with a non-linear global search. A discrete global search method and PREDUNC were then utilized to identify augmented monitoring strategies (n additional measurement locations and data types) that reduce the predictive uncertainty the most. For the nonlinear assessment, a conditional ensemble obtained with Markov-chain Monte Carlo represents the initial state of uncertainty and is used as input to a nonlinear OD framework called PreDIA (Leube et al. 2012). PreDIA can consider any kind of uncertainties and non-linear (statistical) dependencies in data, models, parameters and system drivers during the OD process. The linear and non-linear approaches are compared thoroughly during each step of the
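The nonlinear assessment described above can be illustrated with a minimal preposterior analysis in the spirit of PreDIA (not the actual PreDIA code): for each candidate measurement, synthetic data are drawn from the ensemble itself, ensemble members are likelihood-weighted, and the expected posterior variance of the prediction is averaged. The toy "model" and all numbers are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ensemble: each member is one uncertain parameter draw; the linear maps
# below stand in for a groundwater model producing heads at three candidate
# observation locations and one predictive quantity (an exchange flux).
N = 500
theta = rng.normal(0.0, 1.0, N)                   # uncertain log-conductivity
heads = np.stack([theta * w for w in (1.0, 0.3, 0.05)], axis=1)
prediction = 2.0 * theta                          # predictive quantity of interest
noise_sd = 0.2                                    # measurement error (assumed)

def expected_posterior_var(obs_idx, n_synth=200):
    """PreDIA-style preposterior estimate: average the likelihood-weighted
    posterior variance of the prediction over synthetic data sets drawn
    from the ensemble itself (bootstrap-filter weighting)."""
    vars_ = []
    for k in rng.integers(0, N, n_synth):
        d_synth = heads[k, obs_idx] + rng.normal(0, noise_sd)
        w = np.exp(-0.5 * ((heads[:, obs_idx] - d_synth) / noise_sd) ** 2)
        w /= w.sum()
        mean = np.sum(w * prediction)
        vars_.append(np.sum(w * (prediction - mean) ** 2))
    return float(np.mean(vars_))

scores = [expected_posterior_var(i) for i in range(3)]
best = int(np.argmin(scores))
print("expected posterior variances:", np.round(scores, 3), "-> pick location", best)
```

The design criterion correctly ranks the candidate that is most sensitive to the uncertain parameter as the most informative, without ever linearizing the model.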
Large-scale studies of marked birds in North America
Tautin, J.; Metras, L.; Smith, G.
1999-01-01
The first large-scale, co-operative, studies of marked birds in North America were attempted in the 1950s. Operation Recovery, which linked numerous ringing stations along the east coast in a study of autumn migration of passerines, and the Preseason Duck Ringing Programme in prairie states and provinces, conclusively demonstrated the feasibility of large-scale projects. The subsequent development of powerful analytical models and computing capabilities expanded the quantitative potential for further large-scale projects. Monitoring Avian Productivity and Survivorship, and Adaptive Harvest Management are current examples of truly large-scale programmes. Their exemplary success and the availability of versatile analytical tools are driving changes in the North American bird ringing programme. Both the US and Canadian ringing offices are modifying operations to collect more and better data to facilitate large-scale studies and promote a more project-oriented ringing programme. New large-scale programmes such as the Cornell Nest Box Network are on the horizon.
A study of MLFMA for large-scale scattering problems
NASA Astrophysics Data System (ADS)
Hastriter, Michael Larkin
This research is centered in computational electromagnetics with a focus on solving large-scale problems accurately in a timely fashion using first-principle physics. Error control of the translation operator in 3-D is shown. A parallel implementation of the multilevel fast multipole algorithm (MLFMA) was studied with respect to parallel efficiency and scaling. The large-scale scattering program (LSSP), based on the ScaleME library, was used to solve ultra-large-scale problems including a 200λ sphere with 20 million unknowns. As these large-scale problems were solved, techniques were developed to accurately estimate the memory requirements. Careful memory management is needed in order to solve these massive problems. The study of MLFMA in large-scale problems revealed significant errors that stemmed from inconsistencies in constants used by different parts of the algorithm. These were fixed to produce the most accurate data possible for large-scale surface scattering problems. Data was calculated on a missile-like target using both high-frequency methods and MLFMA. This data was compared and analyzed to determine possible strategies to increase data acquisition speed and accuracy through hybridization of multiple computation methods.
Large-scale motions in a plane wall jet
NASA Astrophysics Data System (ADS)
Gnanamanickam, Ebenezer; Jonathan, Latim; Shibani, Bhatt
2015-11-01
The dynamic significance of large-scale motions in turbulent boundary layers has been the focus of several recent studies, primarily concerning canonical flows: zero-pressure-gradient boundary layers and flows within pipes and channels. This work presents an investigation into the large-scale motions in a boundary layer that is used as the prototypical flow field for flows with large-scale mixing and reactions, the plane wall jet. An experimental investigation is carried out in a plane wall jet facility designed to operate at friction Reynolds numbers Reτ > 1000, which allows for the development of a significant logarithmic region. The streamwise turbulent intensity across the boundary layer is decomposed into small-scale (less than one integral length scale δ) and large-scale components. The small-scale energy has a peak in the near-wall region associated with the near-wall turbulent cycle, as in canonical boundary layers. However, the large-scale eddies dominate, carrying significantly higher energy than the small scales across almost the entire boundary layer, even at the low to moderate Reynolds numbers under consideration. The large scales also appear to amplitude- and frequency-modulate the smaller scales across the entire boundary layer.
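The scale decomposition used above, splitting the streamwise signal at roughly one integral length scale δ, can be sketched with a simple moving-average filter on a synthetic signal. The signal, the top-hat filter choice, and all amplitudes are illustrative assumptions (spectral or wavelet filters are common alternatives):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic streamwise-velocity signal: one energetic large-scale mode plus
# weaker small-scale fluctuations (amplitudes are illustrative).
n, dx = 8192, 0.01
x = np.arange(n) * dx
delta = 1.0                                    # integral length scale (assumed)
u = 1.0 * np.sin(2 * np.pi * x / (8 * delta)) \
  + 0.3 * rng.standard_normal(n)               # small-scale content

# Split at one integral scale with a moving-average (top-hat) filter.
width = int(delta / dx)
kernel = np.ones(width) / width
u_large = np.convolve(u, kernel, mode="same")  # scales larger than ~delta
u_small = u - u_large                          # scales smaller than ~delta

energy_large = np.var(u_large)
energy_small = np.var(u_small)
print(f"large-scale energy: {energy_large:.3f}, small-scale: {energy_small:.3f}")
```

In this toy signal, as in the wall-jet measurements the abstract reports, most of the streamwise energy sits in the large-scale component.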
Vrabie, Draguna; Lewis, Frank
2009-04-01
In this paper we present in a continuous-time framework an online approach to direct adaptive optimal control with infinite horizon cost for nonlinear systems. The algorithm converges online to the optimal control solution without knowledge of the internal system dynamics. Closed-loop dynamic stability is guaranteed throughout. The algorithm is based on a reinforcement learning scheme, namely Policy Iterations, and makes use of neural networks, in an Actor/Critic structure, to parametrically represent the control policy and the performance of the control system. The two neural networks are trained to express the optimal controller and optimal cost function which describes the infinite horizon control performance. Convergence of the algorithm is proven under the realistic assumption that the two neural networks do not provide perfect representations for the nonlinear control and cost functions. The result is a hybrid control structure which involves a continuous-time controller and a supervisory adaptation structure which operates based on data sampled from the plant and from the continuous-time performance dynamics. Such control structure is unlike any standard form of controllers previously seen in the literature. Simulation results, obtained considering two second-order nonlinear systems, are provided.
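The policy-iteration backbone of the scheme, alternating critic evaluation of the current policy with greedy actor improvement, can be illustrated on a simplified problem. The sketch below is a model-based, discrete-time linear-quadratic analogue (the paper's method is model-free, continuous-time, and uses neural networks); the system matrices are invented for illustration:

```python
import numpy as np

# Model-based policy iteration for a discrete-time linear-quadratic problem,
# a simplified analogue of the online actor/critic scheme described above.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.array([[1.0]])
K = np.array([[1.0, 1.0]])                  # initial stabilizing policy u = -Kx

for _ in range(30):
    # Policy evaluation ("critic"): cost matrix P of the current policy, from
    # the Lyapunov fixed point P = Q + K'RK + (A-BK)' P (A-BK).
    Acl = A - B @ K
    P = np.zeros((2, 2))
    for _ in range(2000):
        P = Q + K.T @ R @ K + Acl.T @ P @ Acl
    # Policy improvement ("actor"): greedy gain w.r.t. the evaluated cost.
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# At convergence, P solves the discrete algebraic Riccati equation.
residual = (Q + A.T @ P @ A
            - A.T @ P @ B @ np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A) - P)
print("DARE residual norm:", np.linalg.norm(residual))
```

Each outer pass mirrors one actor/critic update: the critic's evaluation is exact for the frozen policy, and the improved policy is provably no worse, so the iteration converges to the optimal gain.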
Survey Design for Large-Scale, Unstructured Resistivity Surveys
NASA Astrophysics Data System (ADS)
Labrecque, D. J.; Casale, D.
2009-12-01
In this paper, we discuss the issues in designing data collection strategies for large-scale, poorly structured resistivity surveys. Existing or proposed applications for these types of surveys include carbon sequestration, enhanced oil recovery monitoring, monitoring of leachate from working or abandoned mines, and mineral surveys. Electrode locations are generally chosen by land access, utilities, roads, existing wells, etc. Classical arrays such as the Wenner array or dipole-dipole arrays are not applicable if the electrodes cannot be placed in quasi-regular lines or grids. A new, far more generalized strategy is needed for building data collection schemes. Following the approach of earlier two-dimensional (2-D) survey designs, the proposed method begins by defining a base array. In 2-D design, this base array is often a standard dipole-dipole array. For unstructured three-dimensional (3-D) design, determining this base array is a multi-step process. The first step is to determine a set of base dipoles with similar characteristics. For example, the base dipoles may consist of electrode pairs trending within 30 degrees of north and between 100 and 250 m in length. These dipoles are then combined into a trial set of arrays. This trial set of arrays is reduced by applying a series of filters based on criteria such as the separation between the dipoles. Using the base array set, additional arrays are added and tested to determine the overall improvement in resolution and to determine an optimal set of arrays. Examples of the design process are shown for a proposed carbon sequestration monitoring system.
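The first two steps of the design process, selecting base dipoles by azimuth and length, then pairing them into trial arrays and filtering by separation, can be sketched as follows. Electrode positions and the separation criterion are hypothetical; the azimuth and length windows are the ones quoted in the text:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)

# Hypothetical electrode positions in metres (easting, northing), scattered
# by land access rather than laid out on a regular grid.
electrodes = rng.uniform(0, 1000, size=(40, 2))

def dipole_properties(i, j):
    d = electrodes[j] - electrodes[i]
    length = float(np.hypot(d[0], d[1]))
    azimuth = np.degrees(np.arctan2(d[0], d[1])) % 180.0   # 0 deg = north
    return length, azimuth

# Step 1: base dipoles with similar characteristics -- trending within
# 30 degrees of north and 100-250 m long (criteria from the text).
base = []
for i, j in combinations(range(len(electrodes)), 2):
    length, az = dipole_properties(i, j)
    if 100 <= length <= 250 and (az <= 30 or az >= 150):
        base.append((i, j))

# Step 2: combine base dipoles into trial four-electrode arrays, then thin
# them with filters; here a dipole-separation window (assumed values).
def midpoint(dipole):
    return electrodes[list(dipole)].mean(axis=0)

trial = []
for a, b in combinations(base, 2):
    if len({*a, *b}) < 4:
        continue                        # the two dipoles must not share electrodes
    sep = np.linalg.norm(midpoint(a) - midpoint(b))
    if 100 <= sep <= 500:
        trial.append((a, b))

print(f"{len(base)} base dipoles -> {len(trial)} trial arrays after filtering")
```

Further filters (geometric factor, expected signal level, resolution contribution) would be chained the same way before the final array-selection step.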
NASA Astrophysics Data System (ADS)
Shoemaker, Christine; Wan, Ying
2016-04-01
Optimization of nonlinear water resources management problems that have a mixture of fixed (e.g. construction cost for a well) and variable (e.g. cost per gallon of water pumped) costs has not been well addressed, because prior algorithms for the resulting nonlinear mixed-integer problems have required many groundwater simulations (with different configurations of the decision variables), especially when the solution space is multimodal. In particular, heuristic methods like genetic algorithms have often been used in the water resources area, but they require so many groundwater simulations that only small systems have been solved. Hence there is a need for a method that reduces the number of expensive groundwater simulations. A recently published algorithm for nonlinear mixed-integer programming using surrogates was shown in this study to greatly reduce the computational effort for obtaining accurate answers to problems involving fixed costs for well construction as well as variable costs for pumping, because of a substantial reduction in the number of groundwater simulations required. Results are presented for a US EPA hazardous waste site. The nonlinear mixed-integer surrogate algorithm is general and can be used on other problems arising in hydrology, with open-source codes in Matlab and Python ("pySOT" on Bitbucket).
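The core idea, spending cheap surrogate evaluations to decide where to run the expensive simulation, can be sketched with a toy fixed-plus-variable-cost problem. This is a bare-bones illustration, not the published algorithm or pySOT; the cost model, RBF choice, and all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

FIXED = np.array([50.0, 40.0, 60.0])   # fixed construction cost per well
VAR = np.array([1.0, 1.5, 0.8])        # variable cost per unit pumping rate

def expensive_objective(z, x):
    """Stand-in for an expensive groundwater simulation: fixed + variable
    costs plus a penalty when total plume capture misses a target."""
    capture = np.sum(z * np.sqrt(x))
    return FIXED @ z + VAR @ (z * x) + 200.0 * max(0.0, 4.0 - capture) ** 2

def encode(z, x):
    return np.concatenate([z, x])

def rbf_fit_predict(X, y, Xq):
    """Cubic RBF surrogate fitted to all evaluated points (no polynomial
    tail, for brevity; production surrogates are more elaborate)."""
    Phi = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2) ** 3
    w = np.linalg.lstsq(Phi, y, rcond=None)[0]
    Phi_q = np.linalg.norm(Xq[:, None, :] - X[None, :, :], axis=2) ** 3
    return Phi_q @ w

def random_candidate():
    z = rng.integers(0, 2, 3).astype(float)   # binary: build well i or not
    x = rng.uniform(0.0, 10.0, 3)             # continuous pumping rates
    return z, x

history = []
for _ in range(10):                           # initial random design
    z, x = random_candidate()
    history.append((encode(z, x), expensive_objective(z, x)))

for _ in range(30):                           # surrogate-guided iterations
    X = np.array([p for p, _ in history])
    y = np.array([c for _, c in history])
    cands = [random_candidate() for _ in range(500)]
    preds = rbf_fit_predict(X, y, np.array([encode(z, x) for z, x in cands]))
    z, x = cands[int(np.argmin(preds))]       # only the most promising
    history.append((encode(z, x), expensive_objective(z, x)))  # gets a true run

best = min(c for _, c in history)
print(f"best cost after {len(history)} true evaluations: {best:.1f}")
```

Each iteration screens 500 candidate well configurations on the surrogate but charges only one "simulation", which is precisely how surrogate methods cut the evaluation budget.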
NASA Astrophysics Data System (ADS)
Saviz, M. R.
2015-11-01
In this paper a nonlinear approach to studying the vibration characteristics of a laminated composite plate with a surface-bonded piezoelectric layer/patch is formulated, based on Green-Lagrange strain-displacement relations, by incorporating higher-order terms arising from nonlinear kinematic relations into the mathematical formulation. The equations of motion are obtained through the energy method, based on Lagrange equations and using higher-order shear deformation theories with von Karman-type nonlinearities, so that transverse shear strains vanish at the top and bottom surfaces of the plate. An isoparametric finite element model is provided to model the nonlinear dynamics of the smart plate with a piezoelectric layer/patch. Different boundary conditions are investigated. Optimal locations of piezoelectric patches are found using a genetic algorithm to maximize spatial controllability/observability, considering the effect of residual modes to reduce the spillover effect. Active attenuation of vibration of the laminated composite plate is achieved through an optimal control law with an inequality constraint related to the maximum and minimum allowable voltages in the piezoelectric elements. To keep the voltages of actuator pairs within an allowable limit, Pontryagin's minimum principle is implemented in a system with multiple inequality constraints on the control inputs. The results are compared with similar ones, confirming the accuracy of the model, especially for structures undergoing large deformations. The convergence is studied and nonlinear frequencies are obtained for different thickness ratios. The structural coupling between the plate and the piezoelectric actuators is analyzed. Some examples with new features are presented, indicating that the piezo-patches significantly improve the damping characteristics of the plate for suppressing geometrically nonlinear transient vibrations.
Generation and saturation of large-scale flows in flute turbulence
Sandberg, I.; Isliker, H.; Pavlenko, V. P.; Hizanidis, K.; Vlahos, L.
2005-03-01
The excitation and suppression of large-scale anisotropic modes during the temporal evolution of a magnetic-curvature-driven electrostatic flute instability are numerically investigated. The formation of streamerlike structures is attributed to the linear development of the instability while the subsequent excitation of the zonal modes is the result of the nonlinear coupling between linearly grown flute modes. When the amplitudes of the zonal modes become of the same order as that of the streamer modes, the flute instabilities get suppressed and poloidal (zonal) flows dominate. In the saturated state that follows, the dominant large-scale modes of the potential and the density are self-organized in different ways, depending on the value of the ion temperature.
Large-scale modeling of rain fields from a rain cell deterministic model
NASA Astrophysics Data System (ADS)
Féral, Laurent; Sauvageot, Henri; Castanet, Laurent; Lemorton, Joël; Cornet, Frédéric; Leconte, Katia
2006-04-01
A methodology to simulate two-dimensional rain rate fields at large scale (1000 × 1000 km², the scale of a satellite telecommunication beam or a terrestrial fixed broadband wireless access network) is proposed. It relies on a cellular decomposition of the rain rate field. At small scale (~20 × 20 km²), the rain field is split up into its macroscopic components, the rain cells, described by the Hybrid Cell (HYCELL) cellular model. At midscale (~150 × 150 km²), the rain field results from the conglomeration of rain cells modeled by HYCELL. To account for the rain cell spatial distribution at midscale, the latter is modeled by a doubly aggregative isotropic random walk, the optimal parameterization of which is derived from radar observations at midscale. The extension of the simulation area from the midscale to the large scale (1000 × 1000 km²) requires the modeling of the weather frontal area. The latter is first modeled by a Gaussian field with anisotropic covariance function. The Gaussian field is then turned into a binary field, giving the large-scale locations over which it is raining. This transformation requires the definition of the rain occupation rate over large-scale areas. Its probability distribution is determined from observations by the French operational radar network ARAMIS. The coupling with the rain field modeling at midscale is immediate whenever the large-scale field is split up into midscale subareas. The rain field thus generated accounts for the local CDF at each point, defining a structure spatially correlated at small scale, midscale, and large scale. It is then suggested that this approach be used by system designers to evaluate diversity gain, terrestrial path attenuation, or slant path attenuation for different azimuth and elevation angle directions.
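The Gaussian-to-binary step, thresholding a spatially correlated Gaussian field so that a prescribed fraction of the domain is raining, can be sketched as follows. The isotropic box smoothing stands in for the paper's anisotropic covariance model, and the grid size and occupation rate are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

# Large-scale rain/no-rain mask: smooth white noise to impose spatial
# correlation, then threshold at the quantile matching a target rain
# occupation rate (the transformation described in the text).
n = 200                                   # grid cells per side (illustrative)
occupation_rate = 0.15                    # target raining fraction of the domain

field = rng.standard_normal((n, n))
# Isotropic smoothing by repeated 3x3 box averaging (a stand-in for an
# anisotropic covariance function).
kernel = np.ones((3, 3)) / 9.0
for _ in range(10):
    padded = np.pad(field, 1, mode="wrap")
    field = sum(padded[i:i + n, j:j + n] * kernel[i, j]
                for i in range(3) for j in range(3))

threshold = np.quantile(field, 1.0 - occupation_rate)
rain_mask = field > threshold             # True where midscale rain cells go

print(f"achieved occupation rate: {rain_mask.mean():.3f}")
```

Because the threshold is the empirical quantile of the correlated field, the binary mask reproduces the prescribed occupation rate while inheriting the field's spatial correlation, exactly the property the midscale coupling relies on.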
Report of the Workshop on Petascale Systems Integration for Large-Scale Facilities
Kramer, William T.C.; Walter, Howard; New, Gary; Engle, Tom; Pennington, Rob; Comes, Brad; Bland, Buddy; Tomlison, Bob; Kasdorf, Jim; Skinner, David; Regimbal, Kevin
2007-10-01
There are significant issues regarding large-scale system integration that are not being addressed in other forums, such as current research portfolios or vendor user groups. Unfortunately, the issues in the area of large-scale system integration often fall into a netherworld: not research, not facilities, not procurement, not operations, not user services. Taken together, these issues, along with the impact of sub-optimal integration technology, mean that the time required to deploy, integrate and stabilize a large-scale system may consume up to 20 percent of the useful life of such systems. Improving the state of the art for large-scale systems integration has the potential to increase the scientific productivity of these systems. Sites have significant expertise, but there are no easy ways to leverage this expertise among them. Many issues inhibit the sharing of information, including available time and effort, as well as issues with sharing proprietary information. Vendors also benefit in the long run from the solutions to issues detected during site testing and integration. There is a great deal of enthusiasm for making large-scale system integration a full-fledged partner along with the other major thrusts supported by funding agencies in the definition, design, and use of petascale systems. Integration technology and issues should have a full 'seat at the table' as petascale and exascale initiatives and programs are planned. The workshop attendees identified a wide range of issues and suggested paths forward. Pursuing these with funding opportunities and innovation offers the opportunity to dramatically improve the state of large-scale system integration.
A variant constrained genetic algorithm for solving conditional nonlinear optimal perturbations
NASA Astrophysics Data System (ADS)
Zheng, Qin; Sha, Jianxin; Shu, Hang; Lu, Xiaoqing
2014-01-01
A variant constrained genetic algorithm (VCGA) for effective tracking of conditional nonlinear optimal perturbations (CNOPs) is presented. Compared with traditional constraint handling methods, the treatment of the constraint condition in VCGA is relatively easy to implement. Moreover, it does not require adjustments to indefinite parameters. Using a hybrid crossover operator and the newly developed multi-ply mutation operator, VCGA improves the performance of GAs. To demonstrate the capability of VCGA to catch CNOPs in non-smooth cases, a partial differential equation, which has "on-off" switches in its forcing term, is employed as the nonlinear model. To search for global CNOPs of the nonlinear model, numerical experiments using VCGA, the traditional gradient descent algorithm based on the adjoint method (ADJ), and a GA using tournament selection and the niching technique (GA-DEB) were performed. The results with various initial reference states showed that, in smooth cases, all three optimization methods are able to catch global CNOPs. Nevertheless, in non-smooth situations, a large proportion of the CNOPs captured by the ADJ are local. Compared with ADJ, the performance of GA-DEB shows considerable improvement, but it remains well below that of VCGA. Further, the impacts of population size on both VCGA and GA-DEB were investigated. The results were used to estimate the computation time of VCGA and GA-DEB in obtaining CNOPs. The computational costs for VCGA, GA-DEB and ADJ to catch CNOPs of the nonlinear model are also compared.
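The structure of a CNOP search, maximizing nonlinear perturbation growth subject to an initial-amplitude constraint, can be sketched with a small GA. Constraint handling here is plain projection onto the feasible ball, a simple stand-in for VCGA's treatment; the two-dimensional "propagator" with an on-off switch in its forcing, and all GA settings, are invented:

```python
import numpy as np

rng = np.random.default_rng(5)

DELTA = 1.0                                      # constraint radius on p
M = np.array([[1.6, 0.9], [0.0, 0.4]])           # toy non-normal "propagator"

def growth(p):                                   # nonsmooth objective with an
    q = M @ p                                    # "on-off" switch in the forcing
    if q[0] > 0.5:
        q = q + np.array([0.5, 0.0])
    return float(np.linalg.norm(q))

def project(p):                                  # keep |p| <= DELTA
    r = np.linalg.norm(p)
    return p if r <= DELTA else p * (DELTA / r)

pop = [project(rng.normal(0, 1, 2)) for _ in range(40)]
for _ in range(60):
    scores = np.array([growth(p) for p in pop])
    order = np.argsort(-scores)
    elite = [pop[i] for i in order[:10]]         # keep the 10 best
    children = []
    while len(children) < 30:
        a, b = rng.choice(10, 2, replace=False)
        alpha = rng.uniform(0, 1)
        child = alpha * elite[a] + (1 - alpha) * elite[b]   # blend crossover
        child = child + rng.normal(0, 0.05, 2)              # mutation
        children.append(project(child))          # constraint via projection
    pop = elite + children

best = max(pop, key=growth)
print(f"best growth {growth(best):.3f} at |p| = {np.linalg.norm(best):.3f}")
```

Because the objective is nonsmooth at the switch, a gradient/adjoint search can stall on the wrong side of the discontinuity, while the population-based search samples both sides, which is the behavior the abstract reports for ADJ versus the GAs.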
Optimal nonlinear excitation of decadal variability of the North Atlantic thermohaline circulation
NASA Astrophysics Data System (ADS)
Zu, Ziqing; Mu, Mu; Dijkstra, Henk A.
2013-11-01
Nonlinear development of salinity perturbations in the Atlantic thermohaline circulation (THC) is investigated with a three-dimensional ocean circulation model, using the conditional nonlinear optimal perturbation method. The results show two types of optimal initial perturbations of sea surface salinity, one associated with freshwater and the other with salinity. Both types of perturbations excite decadal variability of the THC. Under the same amplitude of initial perturbation, the decadal variation induced by the freshwater perturbation is much stronger than that by the salinity perturbation, suggesting that the THC is more sensitive to freshwater than salinity perturbation. As the amplitude of initial perturbation increases, the decadal variations become stronger for both perturbations. For salinity perturbations, recovery time of the THC to return to steady state gradually saturates with increasing amplitude, whereas this recovery time increases remarkably for freshwater perturbations. A nonlinear (advective) feedback between density and velocity anomalies is proposed to explain these characteristics of decadal variability excitation. The results are consistent with previous ones from simple box models, and highlight the importance of nonlinear feedback in decadal THC variability.
Kang, Mingon; Gao, Jean; Tang, Liping
2011-01-01
Developing rigorous mathematical equations and estimating accurate parameters within feasible computational time are two indispensable parts of building reliable system models that represent the biological properties of the system and produce reliable simulations. For a complex biological system with limited observations, one of the daunting tasks is the large number of unknown parameters in the mathematical modeling, whose values directly determine the performance of computational modeling. To tackle this problem, we have developed a data-driven global optimization method, nonlinear RANSAC, based on the RANdom SAmple Consensus (a.k.a. RANSAC) method, for parameter estimation of nonlinear system models. The conventional RANSAC method is sound and simple, but it is oriented toward linear system models. We not only adopt the strengths of RANSAC, but also extend the method to nonlinear systems with outstanding performance. As a specific application example, we have targeted understanding phagocyte transmigration, which is involved in the fibrosis process for biomedical device implantation. With well-defined mathematical nonlinear equations of the system, nonlinear RANSAC is performed for the parameter estimation. In order to evaluate the general performance of the method, we also applied the method to signalling pathways with ordinary differential equations as a general format. PMID:23227455
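The idea of RANSAC extended to a nonlinear model, fit minimal random subsets, keep the parameters with the largest consensus set, then refit on that set, can be sketched on a simple saturating-exponential model. This is a generic illustration, not the paper's phagocyte-transmigration ODE system; model, noise levels, and thresholds are assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(6)

def model(x, a, b):                      # nonlinear model: y = a * (1 - e^{-bx})
    return a * (1.0 - np.exp(-b * x))

true_a, true_b = 5.0, 0.8
x = np.linspace(0.1, 10, 80)
y = model(x, true_a, true_b) + rng.normal(0, 0.1, x.size)
y[::10] += rng.uniform(3, 6, 8)          # 10% gross outliers

def nonlinear_ransac(x, y, n_sample=5, n_iter=200, tol=0.3):
    best_params, best_inliers = None, 0
    for _ in range(n_iter):
        idx = rng.choice(x.size, n_sample, replace=False)
        try:                              # nonlinear fit to the minimal sample
            p, _ = curve_fit(model, x[idx], y[idx], p0=(1.0, 1.0), maxfev=2000)
        except RuntimeError:
            continue                      # degenerate sample: skip it
        inliers = np.abs(y - model(x, *p)) < tol
        if inliers.sum() > best_inliers:
            best_inliers = int(inliers.sum())
            # refit on the full consensus set for the final estimate
            best_params, _ = curve_fit(model, x[inliers], y[inliers], p0=p)
    return best_params, best_inliers

params, n_in = nonlinear_ransac(x, y)
print(f"a = {params[0]:.2f}, b = {params[1]:.2f}, inliers = {n_in}/80")
```

An ordinary least-squares fit of the same data would be dragged upward by the outliers; the consensus-set criterion makes the nonlinear fit robust to them.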
Solution of nonlinear optimal control problems by the interpolating scaling functions
NASA Astrophysics Data System (ADS)
Foroozandeh, Z.; Shamsi, M.
2012-03-01
This paper presents a numerical method for solving nonlinear optimal control problems including state and control inequality constraints. The method is based upon interpolating scaling functions. The differential and integral expressions which arise in the system dynamics, the performance index and the boundary conditions are converted into some algebraic equations which can be solved for the unknown coefficients. Illustrative examples are included to demonstrate the validity and applicability of the technique.
Analysis and design of robust decentralized controllers for nonlinear systems
Schoenwald, D.A.
1993-07-01
Decentralized control strategies for nonlinear systems are achieved via feedback linearization techniques. New results on optimization and parameter robustness of non-linear systems are also developed. In addition, parametric uncertainty in large-scale systems is handled by sensitivity analysis and optimal control methods in a completely decentralized framework. This idea is applied to alleviate uncertainty in friction parameters for the gimbal joints on Space Station Freedom. As an example of decentralized nonlinear control, singular perturbation methods and distributed vibration damping are merged into a control strategy for a two-link flexible manipulator.
Role of the conjugated spacer in the optimization of second-order nonlinear chromophores
NASA Astrophysics Data System (ADS)
Pérez-Moreno, Javier; Clays, Koen; Kuzyk, Mark G.
2009-08-01
We investigate the role of the conjugated spacer in the optimization of the first hyperpolarizability of organic chromophores. We propose a novel strategy for the optimization of the first hyperpolarizability that is based on the variation of the degree of conjugation for the bridge that separates the donor and acceptors at the end of push-pull type chromophores. The correlation between the type of conjugated spacer and the experimental nonlinear performance of the chromophores is investigated and interpreted in the context of the quantum limits.
On Managing the Use of Surrogates in General Nonlinear Optimization and MDO
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia M.
1998-01-01
This paper is concerned with a trust-region approximation management framework (AMF) for solving the nonlinear programming problem in general and multidisciplinary optimization problems in particular. The intent of the AMF methodology is to facilitate the solution of optimization problems with high-fidelity models. While such models are designed to approximate the physical phenomena they describe to a high degree of accuracy, their use in a repetitive procedure, for example, in the iterations of an optimization or search algorithm, makes such use prohibitively expensive. An improvement in design with lower-fidelity, cheaper models, however, does not guarantee a corresponding improvement for the higher-fidelity problem. The AMF methodology proposed here is based on a class of multilevel methods for constrained optimization and is designed to manage the use of variable-fidelity approximations or models in a systematic way that assures convergence to critical points of the original high-fidelity problem.
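The trust-region management loop can be sketched in a few lines: optimize a cheap corrected model inside the trust region, accept the step only if the expensive objective actually improves, and resize the region by the achieved-to-predicted ratio. Both models below are invented toys, not the AMF itself; the cheap model is a gradient-matched quadratic with an assumed Hessian of 4I:

```python
import numpy as np

def high_fidelity(x):                  # "expensive" truth with a nonquadratic
    return (x[0] - 1) ** 2 + 2 * (x[1] + 0.5) ** 2 + 0.1 * np.cos(3 * x[0])
                                       # wiggle the cheap model cannot see

def grad_fd(f, x, h=1e-6):             # finite-difference gradient
    e = np.eye(x.size)
    return np.array([(f(x + h * e[i]) - f(x - h * e[i])) / (2 * h)
                     for i in range(x.size)])

x, radius = np.array([-1.0, 1.0]), 0.5
for _ in range(60):
    g = grad_fd(high_fidelity, x)
    # Low-fidelity model m(s) = f(x) + g.s + 2|s|^2, first-order consistent
    # with the truth at x. Its minimizer is s = -g/4, clipped to the region.
    step = -g / 4.0
    if np.linalg.norm(step) > radius:
        step *= radius / np.linalg.norm(step)
    predicted = -(g @ step) - 2.0 * (step @ step)          # model decrease
    actual = high_fidelity(x) - high_fidelity(x + step)    # true decrease
    rho = actual / (predicted + 1e-12)
    if rho > 0.1:                      # model agreed well enough: accept
        x = x + step
    radius = 2 * radius if rho > 0.75 else (0.5 * radius if rho < 0.25 else radius)

print(f"x = {np.round(x, 3)}, f = {high_fidelity(x):.4f}")
```

First-order consistency between the cheap model and the truth at each trust-region center is the ingredient that lets such schemes guarantee convergence to critical points of the high-fidelity problem, which is the central claim of the AMF.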
NASA Astrophysics Data System (ADS)
Hocker, David Lance
The control of quantum systems occurs across a broad range of length and energy scales in modern science, and efforts have demonstrated that locating suitable controls to perform a range of objectives has been widely successful. The justification for this success arises from a favorable topology of a quantum control landscape, defined as a mapping of the controls to a cost function measuring the success of the operation. This is summarized in the landscape principle that no suboptimal extrema exist on the landscape for well-suited control problems, explaining a trend of successful optimizations in both theory and experiment. This dissertation explores what additional lessons may be gleaned from the quantum control landscape through numerical and theoretical studies. The first topic examines the experimentally relevant problem of assessing and reducing disturbances due to noise. The local curvature of the landscape is found to play an important role on noise effects in the control of targeted quantum unitary operations, and provides a conceptual framework for assessing robustness to noise. Software for assessing noise effects in quantum computing architectures was also developed and applied to survey the performance of current quantum control techniques for quantum computing. A lack of competition between robustness and perfect unitary control operation was discovered to fundamentally limit noise effects, and highlights a renewed focus upon system engineering for reducing noise. This convergent behavior generally arises for any secondary objective in the situation of high primary objective fidelity. The other dissertation topic examines the utility of quantum control for a class of nonlinear Hamiltonians not previously considered under the landscape principle. Nonlinear Schrodinger equations are commonly used to model the dynamics of Bose-Einstein condensates (BECs), one of the largest known quantum objects. Optimizations of BEC dynamics were performed in which the
Toward Improved Support for Loosely Coupled Large Scale Simulation Workflows
Boehm, Swen; Elwasif, Wael R; Naughton, III, Thomas J; Vallee, Geoffroy R
2014-01-01
High-performance computing (HPC) workloads are increasingly leveraging loosely coupled large-scale simulations. Unfortunately, most large-scale HPC platforms, including Cray/ALPS environments, are designed for the execution of long-running jobs based on coarse-grained launch capabilities (e.g., one MPI rank per core on all allocated compute nodes). This assumption limits capability-class workload campaigns that require large numbers of discrete or loosely coupled simulations, and where time-to-solution is an untenable pacing issue. This paper describes the challenges related to the support of fine-grained launch capabilities that are necessary for the execution of loosely coupled large-scale simulations on Cray/ALPS platforms. More precisely, we present the details of an enhanced runtime system to support this use case, and report on initial results from early testing on systems at Oak Ridge National Laboratory.
Do Large-Scale Topological Features Correlate with Flare Properties?
NASA Astrophysics Data System (ADS)
DeRosa, Marc L.; Barnes, Graham
2016-05-01
In this study, we aim to identify whether the presence or absence of particular topological features in the large-scale coronal magnetic field are correlated with whether a flare is confined or eruptive. To this end, we first determine the locations of null points, spine lines, and separatrix surfaces within the potential fields associated with the locations of several strong flares from the current and previous sunspot cycles. We then validate the topological skeletons against large-scale features in observations, such as the locations of streamers and pseudostreamers in coronagraph images. Finally, we characterize the topological environment in the vicinity of the flaring active regions and identify the trends involving their large-scale topologies and the properties of the associated flares.
Acoustic Studies of the Large Scale Ocean Circulation
NASA Technical Reports Server (NTRS)
Menemenlis, Dimitris
1999-01-01
Detailed knowledge of ocean circulation and its transport properties is prerequisite to an understanding of the earth's climate and of important biological and chemical cycles. Results from two recent experiments, THETIS-2 in the Western Mediterranean and ATOC in the North Pacific, illustrate the use of ocean acoustic tomography for studies of the large scale circulation. The attraction of acoustic tomography is its ability to sample and average the large-scale oceanic thermal structure, synoptically, along several sections, and at regular intervals. In both studies, the acoustic data are compared to, and then combined with, general circulation models, meteorological analyses, satellite altimetry, and direct measurements from ships. Both studies provide complete regional descriptions of the time-evolving, three-dimensional, large scale circulation, albeit with large uncertainties. The studies raise serious issues about existing ocean observing capability and provide guidelines for future efforts.
Coupling between convection and large-scale circulation
NASA Astrophysics Data System (ADS)
Becker, T.; Stevens, B. B.; Hohenegger, C.
2014-12-01
The ultimate drivers of convection - radiation, tropospheric humidity and surface fluxes - are altered both by the large-scale circulation and by convection itself. A quantity to which all drivers of convection contribute is the moist static energy and, relatedly, the gross moist stability. Therefore, a variance analysis of the moist static energy budget in radiative-convective equilibrium helps in understanding the interaction of precipitating convection and the large-scale environment. In addition, this method provides insights concerning the impact of convective aggregation on this coupling. As a starting point, the interaction is analyzed with a general circulation model, but a model intercomparison study using a hierarchy of models is planned. Effective coupling parameters will be derived from cloud-resolving models and these will in turn be related to assumptions used to parameterize convection in large-scale models.
Human pescadillo induces large-scale chromatin unfolding.
Zhang, Hao; Fang, Yan; Huang, Cuifen; Yang, Xiao; Ye, Qinong
2005-06-01
The human pescadillo gene encodes a protein with a BRCT domain. Pescadillo plays an important role in DNA synthesis, cell proliferation and transformation. Since BRCT domains have been shown to induce large-scale chromatin unfolding, we tested the role of Pescadillo in the regulation of large-scale chromatin unfolding. To this end, we isolated the coding region of Pescadillo from human mammary MCF10A cells. Compared with the reported sequence, the isolated Pescadillo contains an in-frame deletion from amino acid 580 to 582. Targeting the Pescadillo to an amplified, lac operator-containing chromosome region in the mammalian genome results in large-scale chromatin decondensation. This unfolding activity maps to the BRCT domain of Pescadillo. These data provide a new clue to understanding the vital role of Pescadillo.
A new asynchronous parallel algorithm for inferring large-scale gene regulatory networks.
Xiao, Xiangyun; Zhang, Wei; Zou, Xiufen
2015-01-01
The reconstruction of gene regulatory networks (GRNs) from high-throughput experimental data has been considered one of the most important issues in systems biology research. With the development of high-throughput technology and the complexity of biological problems, we need to reconstruct GRNs that contain thousands of genes. However, when many existing algorithms are used to handle these large-scale problems, they encounter two important issues: low accuracy and high computational cost. To overcome these difficulties, the main goal of this study is to design an effective parallel algorithm to infer large-scale GRNs based on high-performance parallel computing environments. In this study, we proposed a novel asynchronous parallel framework to improve the accuracy and lower the time complexity of large-scale GRN inference by combining splitting technology and ordinary differential equation (ODE)-based optimization. The presented algorithm uses the sparsity and modularity of GRNs to split the whole large-scale GRN into many small-scale modular subnetworks. Through the ODE-based optimization of all subnetworks in parallel and their asynchronous communications, we can easily obtain the parameters of the whole network. To test the performance of the proposed approach, we used well-known benchmark datasets from the Dialogue on Reverse Engineering Assessment and Methods (DREAM) challenge, the experimentally determined GRN of Escherichia coli, and one published dataset that contains more than 10,000 genes to compare the proposed approach with several popular algorithms on the same high-performance computing environments in terms of both accuracy and time complexity. The numerical results demonstrate that our parallel algorithm exhibits obvious superiority in inferring large-scale GRNs.
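The split-then-solve idea in the abstract can be sketched in a few lines. The sketch below is a minimal stand-in, not the authors' algorithm: it assumes a linear ODE model dx/dt = W x, estimates derivatives by finite differences, and fits each modular subnetwork's incoming weights independently in a thread pool (the paper uses asynchronous communication and a more elaborate ODE-based optimization).

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def fit_module(X, dX, targets):
    """Fit the incoming weights of one modular subnetwork by least squares,
    a stand-in for the paper's ODE-based optimization of a subnetwork."""
    W = np.zeros((len(targets), X.shape[0]))
    for row, g in enumerate(targets):
        # Solve X.T @ w ~= dX[g] for gene g's incoming weights.
        W[row], *_ = np.linalg.lstsq(X.T, dX[g], rcond=None)
    return targets, W

def infer_grn(X, dt, modules, workers=4):
    """Split-and-solve GRN inference for a linear ODE model dx/dt = W x:
    estimate derivatives by finite differences, fit each module's rows of W
    in parallel, then assemble the whole network from the pieces."""
    dX = np.gradient(X, dt, axis=1)            # per-gene derivative estimate
    W = np.zeros((X.shape[0], X.shape[0]))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for fut in [pool.submit(fit_module, X, dX, m) for m in modules]:
            targets, Wm = fut.result()
            W[targets] = Wm                     # place the module's rows
    return W
```

Because each module only fills its own rows of W, the subproblems are independent and their results can arrive in any order, which is what makes the asynchronous formulation natural.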
Magnetic Helicity and Large Scale Magnetic Fields: A Primer
NASA Astrophysics Data System (ADS)
Blackman, Eric G.
2015-05-01
Magnetic fields of laboratory, planetary, stellar, and galactic plasmas commonly exhibit significant order on large temporal or spatial scales compared to the otherwise random motions within the hosting system. Such ordered fields can be measured in the case of planets, stars, and galaxies, or inferred indirectly by the action of their dynamical influence, such as jets. Whether large scale fields are amplified in situ or a remnant from previous stages of an object's history is often debated for objects without a definitive magnetic activity cycle. Magnetic helicity, a measure of twist and linkage of magnetic field lines, is a unifying tool for understanding large scale field evolution for both mechanisms of origin. Its importance stems from its two basic properties: (1) magnetic helicity is typically better conserved than magnetic energy; and (2) the magnetic energy associated with a fixed amount of magnetic helicity is minimized when the system relaxes this helical structure to the largest scale available. Here I discuss how magnetic helicity has come to help us understand the saturation and sustenance of large-scale dynamos, the need for either local or global helicity fluxes to avoid dynamo quenching, and the associated observational consequences. I also discuss how magnetic helicity acts as a hindrance to turbulent diffusion of large scale fields, and thus a helper for fossil remnant large scale field origin models in some contexts. I briefly discuss the connection between large scale fields and accretion disk theory as well. The goal here is to provide a conceptual primer to help the reader efficiently penetrate the literature.
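Property (2) has a compact spectral statement; the rendering below uses standard notation that is an assumption here, not quoted from the primer itself:

```latex
H \;\equiv\; \int_V \mathbf{A}\cdot\mathbf{B}\,\mathrm{d}^3x,
\qquad
E(k) \;\ge\; \tfrac{1}{2}\,k\,\lvert H(k)\rvert .
```

The realizability bound on the right shows that a fixed amount of spectral helicity H(k) is carried with the least magnetic energy when it resides at the smallest available wavenumber, i.e. at the largest scale of the system.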
Clearing and Labeling Techniques for Large-Scale Biological Tissues
Seo, Jinyoung; Choe, Minjin; Kim, Sung-Yon
2016-01-01
Clearing and labeling techniques for large-scale biological tissues enable simultaneous extraction of molecular and structural information with minimal disassembly of the sample, facilitating the integration of molecular, cellular and systems biology across different scales. Recent years have witnessed an explosive increase in the number of such methods and their applications, reflecting heightened interest in organ-wide clearing and labeling across many fields of biology and medicine. In this review, we provide an overview and comparison of existing clearing and labeling techniques and discuss challenges and opportunities in the investigations of large-scale biological systems.
Survey of decentralized control methods. [for large scale dynamic systems
NASA Technical Reports Server (NTRS)
Athans, M.
1975-01-01
An overview is presented of the types of problems that are being considered by control theorists in the area of dynamic large scale systems with emphasis on decentralized control strategies. Approaches that deal directly with decentralized decision making for large scale systems are discussed. It is shown that future advances in decentralized system theory are intimately connected with advances in the stochastic control problem with nonclassical information pattern. The basic assumptions and mathematical tools associated with the latter are summarized, and recommendations concerning future research are presented.
Corridors Increase Plant Species Richness at Large Scales
Damschen, Ellen I.; Haddad, Nick M.; Orrock, John L.; Tewksbury, Joshua J.; Levey, Douglas J.
2006-09-01
Habitat fragmentation is one of the largest threats to biodiversity. Landscape corridors, which are hypothesized to reduce the negative consequences of fragmentation, have become common features of ecological management plans worldwide. Despite their popularity, there is little evidence documenting the effectiveness of corridors in preserving biodiversity at large scales. Using a large-scale replicated experiment, we showed that habitat patches connected by corridors retain more native plant species than do isolated patches, that this difference increases over time, and that corridors do not promote invasion by exotic species. Our results support the use of corridors in biodiversity conservation.
Large-scale superfluid vortex rings at nonzero temperatures
NASA Astrophysics Data System (ADS)
Wacks, D. H.; Baggaley, A. W.; Barenghi, C. F.
2014-12-01
We numerically model experiments in which large-scale vortex rings—bundles of quantized vortex loops—are created in superfluid helium by a piston-cylinder arrangement. We show that the presence of a normal-fluid vortex ring together with the quantized vortices is essential to explain the coherence of these large-scale vortex structures at nonzero temperatures, as observed experimentally. Finally we argue that the interaction of superfluid and normal-fluid vortex bundles is relevant to recent investigations of superfluid turbulence.
The Effective Field Theory of Large Scale Structures at two loops
Carrasco, John Joseph M.; Foreman, Simon; Green, Daniel; Senatore, Leonardo
2014-07-01
Large scale structure surveys promise to be the next leading probe of cosmological information. It is therefore crucial to reliably predict their observables. The Effective Field Theory of Large Scale Structures (EFTofLSS) provides a manifestly convergent perturbation theory for the weakly non-linear regime of dark matter, where correlation functions are computed in an expansion of the wavenumber k of a mode over the wavenumber associated with the non-linear scale, k_NL. Since most of the information is contained at high wavenumbers, it is necessary to compute higher order corrections to correlation functions. After the one-loop correction to the matter power spectrum, we estimate that the next leading one is the two-loop contribution, which we compute here. At this order in k/k_NL, there is only one counterterm in the EFTofLSS that must be included, though this term contributes both at tree-level and in several one-loop diagrams. We also discuss correlation functions involving the velocity and momentum fields. We find that the EFTofLSS prediction at two loops matches to percent accuracy the non-linear matter power spectrum at redshift zero up to k ∼ 0.6 h Mpc⁻¹, requiring just one unknown coefficient that needs to be fit to observations. Given that Standard Perturbation Theory stops converging at redshift zero at k ∼ 0.1 h Mpc⁻¹, our results demonstrate the possibility of accessing a factor of order 200 more dark matter quasi-linear modes than naively expected. If the remaining observational challenges to accessing these modes can be addressed with similar success, our results show that there is tremendous potential for large scale structure surveys to explore the primordial universe.
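Schematically, the structure of the calculation described in the abstract can be written as follows (the coefficients and diagram content are in the paper; the notation here is an illustrative assumption):

```latex
P(k) \;=\; P_{\mathrm{tree}}(k) \;+\; P_{\text{1-loop}}(k) \;+\; P_{\text{2-loop}}(k)
\;+\; P_{c_s}(k) \;+\; \cdots,
\qquad
P_{c_s}(k) \;\propto\; c_s^{2}\,\frac{k^{2}}{k_{\mathrm{NL}}^{2}}\,P_{\mathrm{tree}}(k),
```

where each loop order is suppressed by further powers of k/k_NL, and c_s² is the single counterterm coefficient that must be fit to observations.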
NASA Astrophysics Data System (ADS)
Cariolle, D.; Caro, D.; Paoli, R.; Hauglustaine, D. A.; CuéNot, B.; Cozic, A.; Paugam, R.
2009-10-01
A method is presented to parameterize the impact of the nonlinear chemical reactions occurring in the plume generated by concentrated NOx sources into large-scale models. The resulting plume parameterization is implemented into global models and used to evaluate the impact of aircraft emissions on the atmospheric chemistry. Compared to previous approaches that rely on corrected emissions or corrective factors to account for the nonlinear chemical effects, the present parameterization is based on the representation of the plume effects via a fuel tracer and a characteristic lifetime during which the nonlinear interactions between species are important, and operates via rates of conversion for the NOx species and effective reaction rates for O3. The implementation of this parameterization ensures mass conservation and allows the transport of emissions at high concentrations in plume form by the model dynamics. Results from the model simulations of the impact on atmospheric ozone of aircraft NOx emissions are in rather good agreement with previous work. It is found that ozone production is decreased by 10 to 25% in the Northern Hemisphere, with the largest effects in the north Atlantic flight corridor, when the plume effects on the global-scale chemistry are taken into account. These figures are consistent with evaluations made with corrected emissions, but regional differences are noticeable owing to the possibility offered by this parameterization to transport emitted species in plume form prior to their dilution at large scale. This method could be further improved by making the parameters used by the parameterization functions of the local temperature, humidity and turbulence properties diagnosed by the large-scale model. Further extensions of the method can also be considered to account for multistep dilution regimes during the plume dissipation. Furthermore, the present parameterization can be adapted to other types of point-source NOx emissions that have to be
The effect of background turbulence on the propagation of large-scale flames
NASA Astrophysics Data System (ADS)
Matalon, Moshe
2008-12-01
This paper is based on an invited presentation at the Conference on Turbulent Mixing and Beyond held in the Abdus Salam International Center for Theoretical Physics, Trieste, Italy (August 2007). It consists of a summary of recent investigations aimed at understanding the nature and consequences of the Darrieus-Landau instability that is prominent in premixed combustion. It describes rigorous asymptotic methodologies used to simplify the propagation problem of multi-dimensional and time-dependent premixed flames in order to understand the nonlinear evolution of hydrodynamically unstable flames. In particular, it addresses the effect of background turbulent noise on the structure and propagation of large-scale flames.
NASA Technical Reports Server (NTRS)
Liu, J. T. C.
1986-01-01
Advances in the mechanics of boundary layer flow are reported. The physical problem of large scale coherent structures in real, developing free turbulent shear flows is addressed from the standpoint of the nonlinear aspects of hydrodynamic stability. The problem lacks a small parameter, whether fine grained turbulence is present or absent. It is therefore formulated on the basis of conservation principles, which express the dynamics of the problem in a form directed toward extracting the most physical information, although it is emphasized that approximations must still be involved.
Actor-critic-based optimal tracking for partially unknown nonlinear discrete-time systems.
Kiumarsi, Bahare; Lewis, Frank L
2015-01-01
This paper presents a partially model-free adaptive optimal control solution to the deterministic nonlinear discrete-time (DT) tracking control problem in the presence of input constraints. The tracking error dynamics and reference trajectory dynamics are first combined to form an augmented system. Then, a new discounted performance function based on the augmented system is presented for the optimal nonlinear tracking problem. In contrast to the standard solution, which finds the feedforward and feedback terms of the control input separately, the minimization of the proposed discounted performance function gives both feedback and feedforward parts of the control input simultaneously. This enables us to encode the input constraints into the optimization problem using a nonquadratic performance function. The DT tracking Bellman equation and tracking Hamilton-Jacobi-Bellman (HJB) equation are derived. An actor-critic-based reinforcement learning algorithm is used to learn the solution to the tracking HJB equation online without requiring knowledge of the system drift dynamics. That is, two neural networks (NNs), namely, actor NN and critic NN, are tuned online and simultaneously to generate the optimal bounded control policy. A simulation example is given to show the effectiveness of the proposed method.
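One standard way to encode a symmetric input bound |u| ≤ λ in a discounted performance function of this kind is a nonquadratic integrand built from a bounded, invertible function. The form below is the usual construction from the constrained-HJB literature; the symbols λ, Q, R and γ are assumptions here, not the paper's notation:

```latex
J(e_0) \;=\; \sum_{k=0}^{\infty} \gamma^{k}\left[\, e_k^{\top} Q\, e_k \;+\; W(u_k) \,\right],
\qquad
W(u) \;=\; 2\int_{0}^{u} \big(\lambda \tanh^{-1}(v/\lambda)\big)^{\!\top} R \,\mathrm{d}v .
```

Minimizing J then yields a control of the form u = λ tanh(·), which respects the input bound by construction.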
Large-Scale Machine Learning for Classification and Search
ERIC Educational Resources Information Center
Liu, Wei
2012-01-01
With the rapid development of the Internet, nowadays tremendous amounts of data including images and videos, up to millions or billions, can be collected for training machine learning models. Inspired by this trend, this thesis is dedicated to developing large-scale machine learning techniques for the purpose of making classification and nearest…
The Large-Scale Structure of Scientific Method
ERIC Educational Resources Information Center
Kosso, Peter
2009-01-01
The standard textbook description of the nature of science describes the proposal, testing, and acceptance of a theoretical idea almost entirely in isolation from other theories. The resulting model of science is a kind of piecemeal empiricism that misses the important network structure of scientific knowledge. Only the large-scale description of…
Potential and issues in large scale flood inundation modelling
NASA Astrophysics Data System (ADS)
Di Baldassarre, Giuliano; Brandimarte, Luigia; Dottori, Francesco; Mazzoleni, Maurizio; Yan, Kun
2015-04-01
The last years have seen a growing research interest in large scale flood inundation modelling. Nowadays, modelling tools and datasets allow for analyzing flooding processes at regional, continental and even global scale with an increasing level of detail. As a result, several research works have already addressed this topic using different methodologies of varying complexity. The potential of these studies is certainly enormous. Large scale flood inundation modelling can provide valuable information in areas where little information and few studies were previously available. It can provide a consistent framework for a comprehensive assessment of flooding processes in the river basins of the world's large rivers, as well as of the impacts of future climate scenarios. To make the most of such potential, we believe it is necessary, on the one hand, to understand the strengths and limitations of the existing methodologies, and on the other hand, to discuss the possibilities and implications of using large scale flood models for operational flood risk assessment and management. Where should researchers put their effort in order to develop useful and reliable methodologies and outcomes? How can the information coming from large scale flood inundation studies be used by stakeholders? And how should we use this information where previous higher-resolution studies exist, or where official studies are available?
DESIGN OF LARGE-SCALE AIR MONITORING NETWORKS
The potential effects of air pollution on human health have received much attention in recent years. In the U.S. and other countries, there are extensive large-scale monitoring networks designed to collect data to inform the public of exposure risks to air pollution. A major crit...
International Large-Scale Assessments: What Uses, What Consequences?
ERIC Educational Resources Information Center
Johansson, Stefan
2016-01-01
Background: International large-scale assessments (ILSAs) are a much-debated phenomenon in education. Increasingly, their outcomes attract considerable media attention and influence educational policies in many jurisdictions worldwide. The relevance, uses and consequences of these assessments are often the focus of research scrutiny. Whilst some…
Large Scale Survey Data in Career Development Research
ERIC Educational Resources Information Center
Diemer, Matthew A.
2008-01-01
Large scale survey datasets have been underutilized but offer numerous advantages for career development scholars, as they contain numerous career development constructs with large and diverse samples that are followed longitudinally. Constructs such as work salience, vocational expectations, educational expectations, work satisfaction, and…
Current Scientific Issues in Large Scale Atmospheric Dynamics
NASA Technical Reports Server (NTRS)
Miller, T. L. (Compiler)
1986-01-01
Topics in large scale atmospheric dynamics are discussed. Aspects of atmospheric blocking, the influence of transient baroclinic eddies on planetary-scale waves, cyclogenesis, the effects of orography on planetary scale flow, small scale frontal structure, and simulations of gravity waves in frontal zones are discussed.
Large-scale drift and Rossby wave turbulence
NASA Astrophysics Data System (ADS)
Harper, K. L.; Nazarenko, S. V.
2016-08-01
We study drift/Rossby wave turbulence described by the large-scale limit of the Charney-Hasegawa-Mima equation. We define the zonal and meridional regions as Z := {k : |k_y| > √3 |k_x|} and M := {k : |k_y| < √3 |k_x|}, respectively, where k = (k_x, k_y) lies in a plane perpendicular to the magnetic field such that k_x is along the isopycnals and k_y is along the plasma density gradient. We prove that the only types of resonant triads allowed are M ↔ M + Z and Z ↔ Z + Z. Therefore, if the spectrum of weak large-scale drift/Rossby turbulence is initially in Z, it will remain in Z indefinitely. We present a generalised Fjørtoft argument to find transfer directions for the quadratic invariants in the two-dimensional k-space. Using direct numerical simulations, we test and confirm our theoretical predictions for weak large-scale drift/Rossby turbulence, and establish qualitative differences with cases when turbulence is strong. We demonstrate that the qualitative features of the large-scale limit survive when the typical turbulent scale is only moderately greater than the Larmor/Rossby radius.
A bibliographical survey of large-scale systems
NASA Technical Reports Server (NTRS)
Corliss, W. R.
1970-01-01
A limited, partly annotated bibliography was prepared on the subject of large-scale system control. Approximately 400 references are divided into thirteen application areas, such as large societal systems and large communication systems. A first-author index is provided.
Resilience of Florida Keys coral communities following large scale disturbances
The decline of coral reefs in the Caribbean over the last 40 years has been attributed to multiple chronic stressors and episodic large-scale disturbances. This study assessed the resilience of coral communities in two different regions of the Florida Keys reef system between 199...
Lessons from Large-Scale Renewable Energy Integration Studies: Preprint
Bird, L.; Milligan, M.
2012-06-01
In general, large-scale integration studies in Europe and the United States find that high penetrations of renewable generation are technically feasible with operational changes and increased access to transmission. This paper describes other key findings such as the need for fast markets, large balancing areas, system flexibility, and the use of advanced forecasting.
Large-Scale Networked Virtual Environments: Architecture and Applications
ERIC Educational Resources Information Center
Lamotte, Wim; Quax, Peter; Flerackers, Eddy
2008-01-01
Purpose: Scalability is an important research topic in the context of networked virtual environments (NVEs). This paper aims to describe the ALVIC (Architecture for Large-scale Virtual Interactive Communities) approach to NVE scalability. Design/methodology/approach: The setup and results from two case studies are shown: a 3-D learning environment…
Large-scale data analysis using the Wigner function
NASA Astrophysics Data System (ADS)
Earnshaw, R. A.; Lei, C.; Li, J.; Mugassabi, S.; Vourdas, A.
2012-04-01
Large-scale data are analysed using the Wigner function. It is shown that the 'frequency variable' provides important information, which is lost with other techniques. The method is applied to 'sentiment analysis' in data from social networks and also to financial data.
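The time-frequency analysis the abstract alludes to can be prototyped directly. The discrete (pseudo) Wigner-Ville distribution below is a standard construction, shown here as an illustrative sketch rather than the authors' exact pipeline:

```python
import numpy as np

def wigner_ville(x):
    """Discrete (pseudo) Wigner-Ville distribution of a signal.

    Returns a (time x frequency) array W[n, k] obtained by Fourier
    transforming the local autocorrelation x[n+m] * conj(x[n-m]) over
    the lag m, truncated so all indices stay inside the record.
    """
    x = np.asarray(x, dtype=complex)
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        half = min(n, N - 1 - n)                 # largest admissible lag
        m = np.arange(-half, half + 1)
        acf = np.zeros(N, dtype=complex)
        acf[m % N] = x[n + m] * np.conj(x[n - m])  # local autocorrelation
        W[n] = np.real(np.fft.fft(acf))            # frequency slice at time n
    return W
```

For a pure complex tone, the distribution concentrates on a single ridge; note the well-known factor-of-two rescaling of the frequency axis that the m → 2m lag structure introduces. It is this joint time-frequency view, with an explicit "frequency variable", that distinguishes the method from plain spectral estimates.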
Ecosystem resilience despite large-scale altered hydro climatic conditions
Technology Transfer Automated Retrieval System (TEKTRAN)
Climate change is predicted to increase both drought frequency and duration, and when coupled with substantial warming, will establish a new hydroclimatological paradigm for many regions. Large-scale, warm droughts have recently impacted North America, Africa, Europe, Amazonia, and Australia result...
Large-scale societal changes and intentionality - an uneasy marriage.
Bodor, Péter; Fokas, Nikos
2014-08-01
Our commentary focuses on juxtaposing the proposed science of intentional change with facts and concepts pertaining to the level of large populations or changes on a worldwide scale. Although we find a unified evolutionary theory promising, we think that long-term and large-scale, scientifically guided - that is, intentional - social change is not only impossible, but also undesirable.
Mixing Metaphors: Building Infrastructure for Large Scale School Turnaround
ERIC Educational Resources Information Center
Peurach, Donald J.; Neumerski, Christine M.
2015-01-01
The purpose of this analysis is to increase understanding of the possibilities and challenges of building educational infrastructure--the basic, foundational structures, systems, and resources--to support large-scale school turnaround. Building educational infrastructure often exceeds the capacity of schools, districts, and state education…
Simulation and Analysis of Large-Scale Compton Imaging Detectors
Manini, H A; Lange, D J; Wright, D M
2006-12-27
We perform simulations of two types of large-scale Compton imaging detectors. The first type uses silicon and germanium detector crystals, and the second type uses silicon and CdZnTe (CZT) detector crystals. The simulations use realistic detector geometry and parameters. We analyze the performance of each type of detector, and we present results using receiver operating characteristics (ROC) curves.
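An ROC curve of the kind used to compare the two detector configurations can be produced from any scalar discriminant. The sketch below is generic; the score distributions are invented for illustration and are not the simulated detector outputs from the study:

```python
import numpy as np

def roc_curve(sig_scores, bkg_scores):
    """Sweep a decision threshold over all observed scores and record the
    signal efficiency (TPR) and false-positive rate (FPR) at each setting."""
    thresholds = np.sort(np.concatenate([sig_scores, bkg_scores]))[::-1]
    tpr = np.array([(sig_scores >= t).mean() for t in thresholds])
    fpr = np.array([(bkg_scores >= t).mean() for t in thresholds])
    return fpr, tpr

def auc(fpr, tpr):
    # Trapezoidal area under the ROC curve; 0.5 = random, 1.0 = perfect.
    return 0.5 * np.sum((fpr[1:] - fpr[:-1]) * (tpr[1:] + tpr[:-1]))
```

Comparing detector types then amounts to comparing their curves (or AUC values) on the same event samples.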
Considerations for Managing Large-Scale Clinical Trials.
ERIC Educational Resources Information Center
Tuttle, Waneta C.; And Others
1989-01-01
Research management strategies used effectively in a large-scale clinical trial to determine the health effects of exposure to Agent Orange in Vietnam are discussed, including pre-project planning, organization according to strategy, attention to scheduling, a team approach, emphasis on guest relations, cross-training of personnel, and preparing…
CACHE Guidelines for Large-Scale Computer Programs.
ERIC Educational Resources Information Center
National Academy of Engineering, Washington, DC. Commission on Education.
The Computer Aids for Chemical Engineering Education (CACHE) guidelines identify desirable features of large-scale computer programs including running cost and running-time limit. Also discussed are programming standards, documentation, program installation, system requirements, program testing, and program distribution. Lists of types of…
Over-driven control for large-scale MR dampers
NASA Astrophysics Data System (ADS)
Friedman, A. J.; Dyke, S. J.; Phillips, B. M.
2013-04-01
As semi-active electro-mechanical control devices increase in scale for use in real-world civil engineering applications, their dynamics become increasingly complicated. Control designs that are able to take these characteristics into account will be more effective in achieving good performance. Large-scale magnetorheological (MR) dampers exhibit a significant time lag in their force-response to voltage inputs, reducing the efficacy of typical controllers designed for smaller scale devices where the lag is negligible. A new control algorithm is presented for large-scale MR devices that uses over-driving and back-driving of the commands to overcome the challenges associated with the dynamics of these large-scale MR dampers. An illustrative numerical example is considered to demonstrate the controller performance. Via simulations of the structure using several seismic ground motions, the merits of the proposed control strategy to achieve reductions in various response parameters are examined and compared against several accepted control algorithms. Experimental evidence is provided to validate the improved capabilities of the proposed controller in achieving the desired control force levels. Through real-time hybrid simulation (RTHS), the proposed controllers are also examined and experimentally evaluated in terms of their efficacy and robust performance. The results demonstrate that the proposed control strategy has superior performance over typical control algorithms when paired with a large-scale MR damper, and is robust for structural control applications.
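The over-driving/back-driving idea can be illustrated with a toy first-order lag between the commanded and internal voltage. The lag model and all numbers below are assumptions for illustration, not the paper's damper model or controller:

```python
import numpy as np

def overdriven_command(v_des, v_now, v_max, band=0.05):
    """Over-drive/back-drive logic: saturate the command until the internal
    voltage is within a band of the target, then hold the target value."""
    if v_des > v_now + band:
        return v_max      # over-drive: command the maximum on upward steps
    if v_des < v_now - band:
        return 0.0        # back-drive: command zero on downward steps
    return v_des

def simulate(v_des_traj, eta=2.0, dt=0.01, v_max=10.0, overdrive=True):
    # First-order lag between commanded voltage u and internal voltage v,
    # v' = eta * (u - v): a common simplifying assumption for MR dampers.
    v, hist = 0.0, []
    for v_des in v_des_traj:
        u = overdriven_command(v_des, v, v_max) if overdrive else v_des
        v += dt * eta * (u - v)   # forward-Euler step of the lag dynamics
        hist.append(v)
    return np.array(hist)
```

On a step command, the over-driven controller reaches the target voltage several times faster than passing the desired value straight through, which is the effect the paper exploits to mitigate the force-response time lag.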
The Role of Plausible Values in Large-Scale Surveys
ERIC Educational Resources Information Center
Wu, Margaret
2005-01-01
In large-scale assessment programs such as NAEP, TIMSS and PISA, students' achievement data sets provided for secondary analysts contain so-called "plausible values." Plausible values are multiple imputations of the unobservable latent achievement for each student. In this article it has been shown how plausible values are used to: (1) address…
Large-Scale Environmental Influences on Aquatic Animal Health
In the latter portion of the 20th century, North America experienced numerous large-scale mortality events affecting a broad diversity of aquatic animals. Short-term forensic investigations of these events have sometimes characterized a causative agent or condition, but have rare...
Large-Scale Innovation and Change in UK Higher Education
ERIC Educational Resources Information Center
Brown, Stephen
2013-01-01
This paper reflects on challenges universities face as they respond to change. It reviews current theories and models of change management, discusses why universities are particularly difficult environments in which to achieve large scale, lasting change and reports on a recent attempt by the UK JISC to enable a range of UK universities to employ…
Efficient On-Demand Operations in Large-Scale Infrastructures
ERIC Educational Resources Information Center
Ko, Steven Y.
2009-01-01
In large-scale distributed infrastructures such as clouds, Grids, peer-to-peer systems, and wide-area testbeds, users and administrators typically desire to perform "on-demand operations" that deal with the most up-to-date state of the infrastructure. However, the scale and dynamism present in the operating environment make it challenging to…
Assuring Quality in Large-Scale Online Course Development
ERIC Educational Resources Information Center
Parscal, Tina; Riemer, Deborah
2010-01-01
Student demand for online education requires colleges and universities to rapidly expand the number of courses and programs offered online while maintaining high quality. This paper outlines two universities respective processes to assure quality in large-scale online programs that integrate instructional design, eBook custom publishing, Quality…
Cosmic strings and the large-scale structure
NASA Technical Reports Server (NTRS)
Stebbins, Albert
1988-01-01
A possible problem for cosmic string models of galaxy formation is presented. If very large voids are common and if loop fragmentation is not much more efficient than presently believed, then it may be impossible for string scenarios to produce the observed large-scale structure with Omega sub 0 = 1 and without strong environmental biasing.
Extracting Useful Semantic Information from Large Scale Corpora of Text
ERIC Educational Resources Information Center
Mendoza, Ray Padilla, Jr.
2012-01-01
Extracting and representing semantic information from large scale corpora is at the crux of computer-assisted knowledge generation. Semantic information depends on collocation extraction methods, mathematical models used to represent distributional information, and weighting functions which transform the space. This dissertation provides a…
Improving the Utility of Large-Scale Assessments in Canada
ERIC Educational Resources Information Center
Rogers, W. Todd
2014-01-01
Principals and teachers do not use large-scale assessment results because the lack of distinct and reliable subtests prevents identifying strengths and weaknesses of students and instruction, the results arrive too late to be used, and principals and teachers need assistance to use the results to improve instruction so as to improve student…
Optimal Energy Measurement in Nonlinear Systems: An Application of Differential Geometry
NASA Technical Reports Server (NTRS)
Fixsen, Dale J.; Moseley, S. H.; Gerrits, T.; Lita, A.; Nam, S. W.
2014-01-01
Design of TES microcalorimeters requires a tradeoff between resolution and dynamic range. Often, experimenters will require linearity for the highest energy signals, which requires additional heat capacity be added to the detector. This results in a reduction of low energy resolution in the detector. We derive and demonstrate an algorithm that allows operation far into the nonlinear regime with little loss in spectral resolution. We use a least squares optimal filter that varies with photon energy to accommodate the nonlinearity of the detector and the non-stationarity of the noise. The fitting process we use can be seen as an application of differential geometry. This recognition provides a set of well-developed tools to extend our work to more complex situations. The proper calibration of a nonlinear microcalorimeter requires a source with densely spaced narrow lines. A pulsed laser multi-photon source is used here, and is seen to be a powerful tool for allowing us to develop practical systems with significant detector nonlinearity. The combination of our analysis techniques and the multi-photon laser source creates a powerful tool for increasing the performance of future TES microcalorimeters.
Optimization of Nonlinear Dose- and Concentration-Response Models Utilizing Evolutionary Computation
Beam, Andrew L.; Motsinger-Reif, Alison A.
2011-01-01
An essential part of toxicity and chemical screening is assessing the concentration-related effects of a test article. Most often this concentration-response is nonlinear, necessitating sophisticated regression methodologies. The parameters derived from curve fitting are essential in determining a test article's potency (EC50) and efficacy (Emax), and variations in model fit may lead to different conclusions about an article's performance and safety. Previous approaches have leveraged advanced statistical and mathematical techniques to implement nonlinear least squares (NLS) for obtaining the parameters defining such a curve. These approaches, while mathematically rigorous, suffer from initial-value sensitivity and computational intensity, and rely on complex and intricate numerical techniques. However, if there is a known mathematical model that can reliably predict the data, then nonlinear regression may be equally viewed as parameter optimization. In this context, one may utilize proven techniques from machine learning, such as evolutionary algorithms, which are robust, powerful, and require far less computational framework to optimize the defining parameters. In the current study we present a new method that uses such techniques, Evolutionary Algorithm Dose Response Modeling (EADRM), and demonstrate its effectiveness compared to more conventional methods on both real and simulated data. PMID:22013401
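The core idea of the abstract above — treating nonlinear dose-response fitting as parameter optimization by an evolutionary algorithm — can be sketched in a few lines. This is a minimal illustration, not the authors' EADRM implementation: the two-parameter Hill model, the truncation-selection scheme, and all population and mutation constants are assumptions chosen for brevity.

```python
import random

def hill(conc, emax, ec50):
    """Two-parameter Hill model: response = Emax * c / (EC50 + c)."""
    return emax * conc / (ec50 + conc)

def sse(params, data):
    """Sum of squared errors of the model against (conc, response) pairs."""
    emax, ec50 = params
    return sum((r - hill(c, emax, ec50)) ** 2 for c, r in data)

def evolve(data, pop_size=40, generations=200, seed=0):
    """(mu + lambda)-style evolutionary search over (Emax, EC50)."""
    rng = random.Random(seed)
    pop = [(rng.uniform(0.1, 200.0), rng.uniform(0.01, 100.0))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: sse(p, data))
        parents = pop[: pop_size // 4]          # truncation selection (elitist)
        children = []
        while len(parents) + len(children) < pop_size:
            emax, ec50 = rng.choice(parents)
            children.append((abs(emax + rng.gauss(0, 1.0)),   # Gaussian mutation
                             abs(ec50 + rng.gauss(0, 0.5))))
        pop = parents + children
    return min(pop, key=lambda p: sse(p, data))

# Synthetic concentration-response data from a known curve (Emax=100, EC50=5)
data = [(c, hill(c, 100.0, 5.0)) for c in (0.1, 0.5, 1, 2, 5, 10, 20, 50, 100)]
emax_hat, ec50_hat = evolve(data)
```

Because the best individuals always survive, the fit never degrades across generations; on noiseless data the recovered (Emax, EC50) land close to the generating values without any starting guess, which is the initial-value robustness the abstract emphasizes.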
Constraining dark energy evolution with gravitational lensing by large scale structures
Benabed, Karim; Waerbeke, Ludovic van
2004-12-15
We study the sensitivity of weak lensing by large scale structures as a probe of the evolution of dark energy. We explore a two-parameter model of dark energy evolution, inspired by tracking quintessence models. To this end, we compute the likelihood of a few fiducial models with varying and nonvarying equations of state. For the different models, we investigate the dark energy parameter degeneracies with the mass power spectrum shape Γ, normalization σ_8, and the matter mean density Ω_M. We find that degeneracies are such that weak lensing turns out to be a good probe of dark energy evolution, even with limited knowledge of Γ, σ_8, and Ω_M. This result is a strong motivation for performing large scale structure simulations beyond the simple constant dark energy models, in order to calibrate the nonlinear regime accurately. Such calibration could then be used for any large scale structure tests of dark energy evolution. Prospects for the Canada France Hawaii Telescope Legacy Survey and the SuperNova Acceleration Probe are given. These results complement nicely the cosmic microwave background and supernovae constraints.
TOPOLOGY OF A LARGE-SCALE STRUCTURE AS A TEST OF MODIFIED GRAVITY
Wang Xin; Chen Xuelei; Park, Changbom
2012-03-01
The genus of the isodensity contours is a robust measure of the topology of a large-scale structure, and it is relatively insensitive to nonlinear gravitational evolution, galaxy bias, and redshift-space distortion. We show that the growth of density fluctuations is scale dependent even in the linear regime in some modified gravity theories, which opens a new possibility of testing the theories observationally. We propose to use the genus of the isodensity contours, an intrinsic measure of the topology of the large-scale structure, as a statistic to be used in such tests. In Einstein's general theory of relativity, density fluctuations grow at the same rate on all scales in the linear regime, and the genus per comoving volume is almost conserved as structures grow homologously, so we expect that the genus-smoothing-scale relation is basically time independent. However, in some modified gravity models where structures grow with different rates on different scales, the genus-smoothing-scale relation should change over time. This can be used to test the gravity models with large-scale structure observations. We study the cases of the f(R) theory, DGP braneworld theory as well as the parameterized post-Friedmann models. We also forecast how the modified gravity models can be constrained with optical/IR or redshifted 21 cm radio surveys in the near future.
TRANSPORT OF LARGE-SCALE POLOIDAL FLUX IN BLACK HOLE ACCRETION
Beckwith, Kris; Hawley, John F.; Krolik, Julian H.
2009-12-10
We report on a global, three-dimensional GRMHD simulation of an accretion torus embedded in a large-scale vertical magnetic field orbiting a Schwarzschild black hole. This simulation investigates how a large-scale vertical field evolves within a turbulent accretion disk and whether global magnetic field configurations suitable for launching jets and winds can develop. We find that a 'coronal mechanism' of magnetic flux motion, which operates largely outside the disk body, dominates global flux evolution. In this mechanism, magnetic stresses driven by orbital shear create large-scale half-loops of magnetic field that stretch radially inward and then reconnect, leading to discontinuous jumps in the location of magnetic flux. In contrast, little or no flux is brought in directly by accretion within the disk itself. The coronal mechanism establishes a dipole magnetic field in the evacuated funnel around the orbital axis with a field intensity regulated by a combination of the magnetic and gas pressures in the inner disk. These results prompt a re-evaluation of previous descriptions of magnetic flux motion associated with accretion. Local pictures are undercut by the intrinsically global character of magnetic flux. Formulations in terms of an 'effective viscosity' competing with an 'effective resistivity' are undermined by the nonlinearity of the magnetic dynamics and the fact that the same turbulence driving mass motion (traditionally identified as 'viscosity') can alter magnetic topology.
On the renormalization of the effective field theory of large scale structures
Pajer, Enrico; Zaldarriaga, Matias E-mail: matiasz@ias.edu
2013-08-01
Standard perturbation theory (SPT) for large-scale matter inhomogeneities is unsatisfactory for at least three reasons: there is no clear expansion parameter since the density contrast is not small on all scales; it does not fully account for deviations at large scales from a perfect pressureless fluid induced by short-scale non-linearities; for generic initial conditions, loop corrections are UV-divergent, making predictions cutoff dependent and hence unphysical. The Effective Field Theory of Large Scale Structures successfully addresses all three issues. Here we focus on the third one and show explicitly that the terms induced by integrating out short scales, neglected in SPT, have exactly the right scale dependence to cancel all UV-divergences at one loop, and this should hold at all loops. A particularly clear example is an Einstein-de Sitter universe with no-scale initial conditions P_in ∼ k^n. After renormalizing the theory, we use self-similarity to derive a very simple result for the final power spectrum for any n, excluding two-loop corrections and higher. We show how the relative importance of different corrections depends on n. For n ∼ −1.5, relevant for our universe, pressure and dissipative corrections are more important than the two-loop corrections.
Ultra-large-scale Cosmology in Next-generation Experiments with Single Tracers
NASA Astrophysics Data System (ADS)
Alonso, David; Bull, Philip; Ferreira, Pedro G.; Maartens, Roy; Santos, Mário G.
2015-12-01
Future surveys of large-scale structure will be able to measure perturbations on the scale of the cosmological horizon, and so could potentially probe a number of novel relativistic effects that are negligibly small on sub-horizon scales. These effects leave distinctive signatures in the power spectra of clustering observables and, if measurable, would open a new window on relativistic cosmology. We quantify the size and detectability of the effects for the most relevant future large-scale structure experiments: spectroscopic and photometric galaxy redshift surveys, intensity mapping surveys of neutral hydrogen, and radio continuum surveys. Our forecasts show that next-generation experiments, reaching out to redshifts z ≃ 4, will not be able to detect previously undetected general-relativistic effects by using individual tracers of the density field, although the contribution of weak lensing magnification on large scales should be clearly detectable. We also perform a rigorous joint forecast for the detection of primordial non-Gaussianity through the excess power it produces in the clustering of biased tracers on large scales, finding that uncertainties of σ(f_NL) ∼ 1-2 should be achievable. We study the level of degeneracy of these large-scale effects with several tracer-dependent nuisance parameters, quantifying the minimal priors on the latter that are needed for an optimal measurement of the former. Finally, we discuss the systematic effects that must be mitigated to achieve this level of sensitivity, and some alternative approaches that should help to improve the constraints. The computational tools developed to carry out this study, which requires the full-sky computation of the theoretical angular power spectra for O(100) redshift bins, as well as realistic models of the luminosity function, are publicly available at http://intensitymapping.physics.ox.ac.uk/codes.html.
The optimal antenna for nonlinear spectroscopy of weakly and strongly scattering nanoobjects
NASA Astrophysics Data System (ADS)
Schumacher, Thorsten; Brandstetter, Matthias; Wolf, Daniela; Kratzer, Kai; Hentschel, Mario; Giessen, Harald; Lippitz, Markus
2016-04-01
Optical nanoantennas, i.e., arrangements of plasmonic nanostructures, promise to enhance the light-matter interaction on the nanoscale. In particular, nonlinear optical spectroscopy of single nanoobjects would profit from such an antenna, as nonlinear optical effects are already weak for bulk material, and become almost undetectable for single nanoobjects. We investigate the design of optical nanoantennas for transient absorption spectroscopy in two different cases: the mechanical breathing mode of a metal nanodisk and the quantum-confined carrier dynamics in a single CdSe nanowire. In the latter case, an antenna with a resonance at the desired wavelength optimally increases the light intensity at the nanoobject. In the first case, the perturbation of the antenna by the investigated nanosystem cannot be neglected and off-resonant antennas become most efficient.
Cumulus moistening, the diurnal cycle, and large-scale tropical dynamics
NASA Astrophysics Data System (ADS)
Ruppert, James H., Jr.
Weak temperature gradient (WTG) vertical motion w_wtg is diagnosed based on the internal diabatic heating in the model. w_wtg is then used to advect model temperature and humidity. w_wtg opposes domain-averaged temperature anomalies via adiabatic warming and cooling, thereby yielding a feedback between the model diabatic heating and the large-scale column moisture source associated with large-scale vertical motion. With a control simulation that successfully replicates a regime of shallow convection similar to nature, it is found through sensitivity tests that the diurnal cycle in tropospheric radiative heating is the dominant driver of both diurnal column moisture variations and nocturnal rainfall in this regime, the latter of which agrees with previous findings by Randall et al. The diurnal cycle in SST and surface fluxes, in turn, drives the daytime convective regime, which is distinct from the nocturnal regime by its rooting in the boundary layer. A simulation in which the diurnal cycle is stretched to 48 h amplifies an important nonlinear feedback at work in the diurnal cycle, which owes to the high-amplitude diurnal cycle in column relative humidity (RH). This diurnal cycle in RH limits the amount of evaporation, and hence evaporative cooling, that takes place in the cloud layer. By throttling down the diabatic cooling, the diurnal cycle throttles down the daily-mean moisture sink driven by large-scale subsidence, such that the environment drifts toward a more moist state, all else being equal. When the diurnal cycle is not present, this nonlinear moisture source is weaker, and the environment drier. This feedback rectifies diurnal moistening onto longer timescales, thereby linking the diurnal cycle to longer timescales.
These findings suggest that (1) the diurnal cycle of moist convection, as observed in DYNAMO, cannot be ruled out as a column moisture source important to MJO initiation, and (2) that proper representation of the diurnal cycle is prerequisite to accurate representation of large-scale
Robust Optimization of Fixed Points of Nonlinear Discrete Time Systems with Uncertain Parameters
NASA Astrophysics Data System (ADS)
Kastsian, Darya; Monnigmann, Martin
2010-01-01
This contribution extends the normal vector method for the optimization of parametrically uncertain dynamical systems to a general class of nonlinear discrete time systems. Essentially, normal vectors are used to state constraints on dynamical properties of fixed points in the optimization of discrete time dynamical systems. In a typical application of the method, a technical dynamical system is optimized with respect to an economic profit function, while the normal vector constraints are used to guarantee the stability of the optimal fixed point. We derive normal vector systems for flip, fold, and Neimark-Sacker bifurcation points, because these bifurcation points constitute the stability boundary of a large class of discrete time systems. In addition, we derive normal vector systems for a related type of critical point that can be used to ensure a user-specified disturbance rejection rate in the optimization of parametrically uncertain systems. We illustrate the method by applying it to the optimization of a discrete time supply chain model and a discretized fermentation process model.
An application of the square root information filter to large scale linear interconnected systems
NASA Technical Reports Server (NTRS)
Bierman, G. J.
1977-01-01
It is demonstrated that use of the square root information filter (SRIF) can reduce the storage and computation required for estimation of certain classes of large-scale interconnected systems. The SRIF uses an information array that is related to the Kalman filter covariance and estimate. The SRIF algorithm, which is optimal, is a direct application of matrix partitioning to some optimal filtering algorithms described in the literature. The SRIF algorithm is able to reduce the storage requirements of a 40-subsystem 10-state problem by a full order of magnitude.
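The heart of the SRIF — treating estimation as orthogonal triangularization of an information array (R, z) with R x = z — can be illustrated with a scalar state, where the QR step reduces to a single Givens rotation. This is a toy sketch under an assumed unit-variance measurement, not the partitioned large-scale algorithm the abstract describes.

```python
import math

def srif_update(R, z, H, y):
    """One scalar SRIF measurement update via a Givens rotation.

    The information array (R, z) encodes the estimate through R*x = z;
    stacking the new unit-variance measurement row (H, y) underneath and
    rotating it to zero yields the updated array (R', z')."""
    r = math.hypot(R, H)          # updated square-root information
    c, s = R / r, H / r
    return r, c * z + s * y

# Fuse repeated noiseless measurements y = 2 * x_true with x_true = 3,
# starting from a weak prior (tiny square-root information).
R, z = 1e-3, 0.0
for _ in range(5):
    R, z = srif_update(R, z, H=2.0, y=6.0)
x_hat = z / R
```

Because the rotation is orthogonal, each update is exactly equivalent to recursive least squares, but only the well-conditioned square-root quantity R is ever stored — the numerical property that makes the SRIF attractive at scale.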
Discrete homotopy analysis for optimal trading execution with nonlinear transient market impact
NASA Astrophysics Data System (ADS)
Curato, Gianbiagio; Gatheral, Jim; Lillo, Fabrizio
2016-10-01
Optimal execution in financial markets is the problem of how to trade a large quantity of shares incrementally in time in order to minimize the expected cost. In this paper, we study the problem of optimal execution in the presence of nonlinear transient market impact. Mathematically, such a problem is equivalent to solving a strongly nonlinear integral equation, which in our model is a weakly singular Urysohn equation of the first kind. We propose an approach based on the Homotopy Analysis Method (HAM), whereby a well-behaved initial trading strategy is continuously deformed to lower the expected execution cost. Specifically, we propose a discrete version of the HAM, i.e. the DHAM approach, in order to use the method when the integrals to compute have no closed-form solution. We find that the optimal solution is front loaded for concave instantaneous impact even when the investor is risk neutral. More importantly, we find that the expected cost of the DHAM strategy is significantly smaller than the cost of conventional strategies.
Cost Distribution of Environmental Flow Demands in a Large Scale Multi-Reservoir System
NASA Astrophysics Data System (ADS)
Marques, G.; Tilmant, A.
2014-12-01
This paper investigates the recovery of a prescribed flow regime through reservoir system reoperation, focusing on the associated costs and losses imposed on different power plants depending on flows, power plant and reservoir characteristics, and system topology. In large-scale reservoir systems such cost distribution is not trivial, and it should be properly evaluated to identify coordinated operating solutions that avoid penalizing a single reservoir. The method combines an efficient stochastic dual dynamic programming algorithm for reservoir optimization with environmental flow targets of specific magnitude, duration and return period, whose effects on fish recruitment are already known. Results indicate that the effect of meeting the environmental flow demands is distributed very unevenly across the reservoir cascade: in some reservoirs power production and revenue increase, while in others they decrease. Most importantly, for the example system modeled here (10 reservoirs in the Parana River basin, Brazil), meeting the target environmental flows was possible without reducing the total energy produced in the year, at a cost of $25 million/year in foregone hydropower revenues (a 3% reduction). Finally, the results and methods are useful in (a) quantifying the foregone hydropower and revenues resulting from meeting a specific environmental flow demand, (b) identifying the distribution and reallocation of the foregone hydropower and revenue across a large-scale system, and (c) identifying optimal reservoir operating strategies to meet environmental flow demands in a large-scale multi-reservoir system.
NASA Technical Reports Server (NTRS)
Hrinda, Glenn A.; Nguyen, Duc T.
2008-01-01
A technique for the optimization of stability constrained geometrically nonlinear shallow trusses with snap-through behavior is demonstrated using the arc length method and a strain energy density approach within a discrete finite element formulation. The optimization method uses an iterative scheme that evaluates the design variables' performance and then updates them according to a recursive formula controlled by the arc length method. A minimum weight design is achieved when a uniform nonlinear strain energy density is found in all members. This minimal condition places the design load just below the critical limit load causing snap-through of the structure. The optimization scheme is programmed into a nonlinear finite element algorithm to find the large strain energy at critical limit loads. Examples of highly nonlinear trusses found in the literature are presented to verify the method.
Analytic framework for peptidomics applied to large-scale neuropeptide identification.
Secher, Anna; Kelstrup, Christian D; Conde-Frieboes, Kilian W; Pyke, Charles; Raun, Kirsten; Wulff, Birgitte S; Olsen, Jesper V
2016-01-01
Large-scale mass spectrometry-based peptidomics for drug discovery is relatively unexplored because of challenges in peptide degradation and identification following tissue extraction. Here we present a streamlined analytical pipeline for large-scale peptidomics. We developed an optimized sample preparation protocol to achieve fast, reproducible and effective extraction of endogenous peptides from sub-dissected organs such as the brain, while diminishing unspecific protease activity. Each peptidome sample was analysed by high-resolution tandem mass spectrometry and the resulting data set was integrated with publically available databases. We developed and applied an algorithm that reduces the peptide complexity for identification of biologically relevant peptides. The developed pipeline was applied to rat hypothalamus and identifies thousands of neuropeptides and their post-translational modifications, which is combined in a resource format for visualization, qualitative and quantitative analyses. PMID:27142507
Quasi Matrix Free Preconditioners in Optimization and Nonlinear Least-Squares
NASA Astrophysics Data System (ADS)
Bellavia, Stefania; Bertaccini, Daniele; Morini, Benedetta
2010-09-01
The approximate solution of several nonlinear optimization problems requires solving sequences of symmetric linear systems. When the number of variables is large, it is advisable to use an iterative linear solver for the Newton correction step. On the other hand, the underlying linear solver can converge slowly, and the calculation of a preconditioner requires the computation of the Hessian matrix, which usually represents a major task in the implementation. We propose here a way to overcome, at least in part, these two preconditioning issues.
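In the matrix-free setting the abstract refers to, the iterative linear solver only ever needs products with the system matrix, never its entries. A minimal preconditioned conjugate gradient sketch makes this concrete; here a simple Jacobi (diagonal) preconditioner stands in for the paper's quasi-matrix-free preconditioners, which are considerably more sophisticated.

```python
def pcg(apply_A, b, apply_Minv, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradients using only matvec callbacks,
    so A is never formed explicitly (the matrix-free setting)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                   # r = b - A*0
    z = apply_Minv(r)
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = apply_Minv(r)
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        beta = rz_new / rz
        rz = rz_new
        p = [zi + beta * pi for zi, pi in zip(z, p)]
    return x

# 1-D Laplacian (tridiagonal [-1, 2, -1]) applied matrix-free,
# with a Jacobi preconditioner (its diagonal is constant, 2).
n = 50
def apply_A(v):
    return [2 * v[i] - (v[i - 1] if i > 0 else 0.0)
                     - (v[i + 1] if i < n - 1 else 0.0) for i in range(n)]
def apply_Minv(v):
    return [vi / 2.0 for vi in v]

b = [1.0] * n
x = pcg(apply_A, b, apply_Minv)
```

The Newton correction step the abstract mentions would plug a Hessian-vector product into `apply_A`; the challenge the paper addresses is building a useful `apply_Minv` without ever assembling that Hessian.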
Yang, Chao; Jiang, Wen; Chen, Dong-Hua; Adiga, Umesh; Ng, Esmond G.; Chiu, Wah
2008-07-28
The three-dimensional reconstruction of macromolecules from two-dimensional single-particle electron images requires determination and correction of the contrast transfer function (CTF) and envelope function. A computational algorithm based on constrained non-linear optimization is developed to estimate the essential parameters in the CTF and envelope function model simultaneously and automatically. The application of this estimation method is demonstrated with focal series images of amorphous carbon film as well as images of ice-embedded icosahedral virus particles suspended across holes.
Optimization of coherent optical OFDM transmitter using DP-IQ modulator with nonlinear response
NASA Astrophysics Data System (ADS)
Chang, Sun Hyok; Kang, Hun-Sik; Moon, Sang-Rok; Lee, Joon Ki
2016-07-01
In this paper, we investigate the performance of dual polarization orthogonal frequency division multiplexing (DP-OFDM) signal generation when the signal is generated by a DP-IQ optical modulator. The DP-IQ optical modulator is made of four parallel Mach-Zehnder modulators (MZMs) which have nonlinear responses and limited extinction ratios. We analyze the effects of the MZM in the DP-OFDM signal generation by numerical simulation. The operating conditions of the DP-IQ modulator are optimized to have the best performance of the DP-OFDM signal.
Real-time simulation of large-scale floods
NASA Astrophysics Data System (ADS)
Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.
2016-08-01
According to the complex real-time water situation, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional, shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance the numerical stability. An adaptive method is proposed to improve the running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.
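A Godunov-type finite volume update of the kind this model builds on can be sketched in one dimension. This is a minimal first-order scheme with a Rusanov (local Lax-Friedrichs) flux on a uniform grid, chosen for brevity; the paper's model is two-dimensional, unstructured, and adds wet/dry front handling on top of this basic update.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def flux(h, hu):
    """Physical flux of the 1-D shallow water equations for state (h, hu)."""
    u = hu / h
    return hu, hu * u + 0.5 * G * h * h

def rusanov(left, right):
    """Rusanov numerical flux between two cell states."""
    hl, hul = left
    hr, hur = right
    fl, fr = flux(hl, hul), flux(hr, hur)
    # fastest wave speed |u| + sqrt(g*h) on either side of the interface
    a = max(abs(hul / hl) + math.sqrt(G * hl),
            abs(hur / hr) + math.sqrt(G * hr))
    return (0.5 * (fl[0] + fr[0]) - 0.5 * a * (hr - hl),
            0.5 * (fl[1] + fr[1]) - 0.5 * a * (hur - hul))

def step(h, hu, dx, dt):
    """One first-order Godunov-type update with transmissive boundaries."""
    n = len(h)
    cells = list(zip(h, hu))
    iface = [rusanov(cells[max(i - 1, 0)], cells[min(i, n - 1)])
             for i in range(n + 1)]
    h2 = [h[i] - dt / dx * (iface[i + 1][0] - iface[i][0]) for i in range(n)]
    hu2 = [hu[i] - dt / dx * (iface[i + 1][1] - iface[i][1]) for i in range(n)]
    return h2, hu2

# Dam break: deep water on the left, shallow on the right, fluid at rest.
n, dx, dt = 100, 1.0, 0.05
h = [2.0 if i < n // 2 else 1.0 for i in range(n)]
hu = [0.0] * n
for _ in range(20):
    h, hu = step(h, hu, dx, dt)
```

The conservative flux-difference form is what makes the scheme robust: because each interface flux is added to one cell and subtracted from its neighbor, total water volume is preserved to rounding error, dam break or not.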
Prototype Vector Machine for Large Scale Semi-Supervised Learning
Zhang, Kai; Kwok, James T.; Parvin, Bahram
2009-04-29
Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we propose the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
The Large Scale Synthesis of Aligned Plate Nanostructures
Zhou, Yang; Nash, Philip; Liu, Tian; Zhao, Naiqin; Zhu, Shengli
2016-01-01
We propose a novel technique for the large-scale synthesis of aligned-plate nanostructures that are self-assembled and self-supporting. The synthesis technique involves developing nanoscale two-phase microstructures through discontinuous precipitation followed by selective etching to remove one of the phases. The method may be applied to any alloy system in which the discontinuous precipitation transformation goes to completion. The resulting structure may have many applications in catalysis, filtering and thermal management depending on the phase selection and added functionality through chemical reaction with the retained phase. The synthesis technique is demonstrated using the discontinuous precipitation of a γ′ phase, (Ni, Co)3Al, followed by selective dissolution of the γ matrix phase. The production of the nanostructure requires heat treatments on the order of minutes and can be performed on a large scale making this synthesis technique of great economic potential. PMID:27439672
Electron drift in a large scale solid xenon
Yoo, J.; Jaskierny, W. F.
2015-08-21
A study of charge drift in a large scale optically transparent solid xenon is reported. A pulsed high power xenon light source is used to liberate electrons from a photocathode. The drift speeds of the electrons are measured using an 8.7 cm long electrode in both the liquid and solid phases of xenon. In the liquid phase (163 K), the drift speed is 0.193 ± 0.003 cm/μs, while the drift speed in the solid phase (157 K) is 0.397 ± 0.006 cm/μs at 900 V/cm over 8.0 cm of uniform electric field. Furthermore, it is demonstrated that the electron drift speed in large scale solid phase xenon is a factor of two faster than that in the liquid.
Large scale meteorological influence during the Geysers 1979 field experiment
Barr, S.
1980-01-01
A series of meteorological field measurements conducted during July 1979 near Cobb Mountain in Northern California reveals evidence of several scales of atmospheric circulation consistent with the climatic pattern of the area. The scales of influence are reflected in the structure of wind and temperature in vertically stratified layers at a given observation site. Large scale synoptic gradient flow dominates the wind field above about twice the height of the topographic ridge. Below that there is a mixture of effects with evidence of a diurnal sea breeze influence and a sublayer of katabatic winds. The July observations demonstrate that weak migratory circulations in the large scale synoptic meteorological pattern have a significant influence on the day-to-day gradient winds and must be accounted for in planning meteorological programs including tracer experiments.
GAIA: A WINDOW TO LARGE-SCALE MOTIONS
Nusser, Adi; Branchini, Enzo; Davis, Marc E-mail: branchin@fis.uniroma3.it
2012-08-10
Using redshifts as a proxy for galaxy distances, estimates of the two-dimensional (2D) transverse peculiar velocities of distant galaxies could be obtained from future measurements of proper motions. We provide the mathematical framework for analyzing 2D transverse motions and show that they offer several advantages over traditional probes of large-scale motions. They are completely independent of any intrinsic relations between galaxy properties; hence, they are essentially free of selection biases. They are free from homogeneous and inhomogeneous Malmquist biases that typically plague distance indicator catalogs. They provide additional information to traditional probes that yield line-of-sight peculiar velocities only. Further, because of their 2D nature, fundamental questions regarding vorticity of large-scale flows can be addressed. Gaia, for example, is expected to provide proper motions of at least bright galaxies with high central surface brightness, making proper motions a likely contender for traditional probes based on current and future distance indicator measurements.
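The link between measured proper motions and the 2D transverse peculiar velocities discussed above is the standard kinematic conversion v_t = 4.74 μ d (μ in arcsec/yr, d in pc, v_t in km/s). A minimal sketch with illustrative numbers, not values taken from the paper:

```python
def transverse_velocity_km_s(mu_arcsec_per_yr: float, distance_pc: float) -> float:
    """Transverse velocity from proper motion via the standard
    conversion v_t = 4.74 * mu * d (arcsec/yr, pc -> km/s)."""
    return 4.74 * mu_arcsec_per_yr * distance_pc

# Illustrative example: a galaxy at 10 Mpc with a proper motion of
# 1 microarcsec/yr (hypothetical numbers for scale only)
v_t = transverse_velocity_km_s(1e-6, 10e6)
print(f"transverse velocity: {v_t:.1f} km/s")  # → 47.4 km/s
```

This illustrates why measuring such motions is hard: peculiar velocities of tens to hundreds of km/s at cosmological distances correspond to proper motions at the microarcsecond-per-year level.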
The Large Scale Synthesis of Aligned Plate Nanostructures
NASA Astrophysics Data System (ADS)
Zhou, Yang; Nash, Philip; Liu, Tian; Zhao, Naiqin; Zhu, Shengli
2016-07-01
We propose a novel technique for the large-scale synthesis of aligned-plate nanostructures that are self-assembled and self-supporting. The synthesis technique involves developing nanoscale two-phase microstructures through discontinuous precipitation followed by selective etching to remove one of the phases. The method may be applied to any alloy system in which the discontinuous precipitation transformation goes to completion. The resulting structure may have many applications in catalysis, filtering and thermal management depending on the phase selection and added functionality through chemical reaction with the retained phase. The synthesis technique is demonstrated using the discontinuous precipitation of a γ′ phase, (Ni, Co)3Al, followed by selective dissolution of the γ matrix phase. The production of the nanostructure requires heat treatments on the order of minutes and can be performed on a large scale making this synthesis technique of great economic potential.
Large Scale Deformation of the Western U.S. Cordillera
NASA Technical Reports Server (NTRS)
Bennett, Richard A.
2002-01-01
The overall objective of the work that was conducted was to understand the present-day large-scale deformations of the crust throughout the western United States and in so doing to improve our ability to assess the potential for seismic hazards in this region. To address this problem, we used a large collection of Global Positioning System (GPS) networks which spans the region to precisely quantify present-day large-scale crustal deformations in a single uniform reference frame. Our results can roughly be divided into an analysis of the GPS observations to infer the deformation field across and within the entire plate boundary zone and an investigation of the implications of this deformation field regarding plate boundary dynamics.
Large Scale Deformation of the Western US Cordillera
NASA Technical Reports Server (NTRS)
Bennett, Richard A.
2001-01-01
Destructive earthquakes occur throughout the western US Cordillera (WUSC), not just within the San Andreas fault zone. But because we do not understand the present-day large-scale deformations of the crust throughout the WUSC, our ability to assess the potential for seismic hazards in this region remains severely limited. To address this problem, we are using a large collection of Global Positioning System (GPS) networks which spans the WUSC to precisely quantify present-day large-scale crustal deformations in a single uniform reference frame. Our work can roughly be divided into an analysis of the GPS observations to infer the deformation field across and within the entire plate boundary zone and an investigation of the implications of this deformation field regarding plate boundary dynamics.
Startup of large-scale projects casts spotlight on IGCC
Swanekamp, R.
1996-06-01
With several large-scale plants cranking up this year, integrated coal gasification/combined cycle (IGCC) appears poised for growth. The technology may eventually help coal reclaim its former prominence in new plant construction, but developers worldwide are eyeing other feedstocks, such as petroleum coke or residual oil. Of the so-called advanced clean-coal technologies, IGCC appears to be having a defining year. Of three large-scale demonstration plants in the US, one is well into startup, a second is expected to begin operating in the fall, and a third should start up by the end of the year; worldwide, over a dozen more projects are in the works. In Italy, for example, several large projects using petroleum coke or refinery residues as feedstocks are proceeding, apparently on a project-finance basis.
Considerations of large scale impact and the early Earth
NASA Technical Reports Server (NTRS)
Grieve, R. A. F.; Parmentier, E. M.
1985-01-01
Bodies which have preserved portions of their earliest crust indicate that large scale impact cratering was an important process in early surface and upper crustal evolution. Large impact basins form the basic topographic, tectonic, and stratigraphic framework of the Moon, and impact was responsible for the characteristics of the second order gravity field and upper crustal seismic properties. The Earth's crustal evolution during the first 800 my of its history is conjectural. The lack of a very early crust may indicate that thermal and mechanical instabilities resulting from intense mantle convection and/or bombardment inhibited crustal preservation. Whatever the case, the potential effects of large scale impact have to be considered in models of early Earth evolution. Preliminary models of the evolution of a large terrestrial impact basin were derived and are discussed in detail.
Bias to CMB lensing measurements from the bispectrum of large-scale structure
NASA Astrophysics Data System (ADS)
Böhm, Vanessa; Schmittfull, Marcel; Sherwin, Blake D.
2016-08-01
The rapidly improving precision of measurements of gravitational lensing of the cosmic microwave background (CMB) also requires a corresponding increase in the precision of theoretical modeling. A commonly made approximation is to model the CMB deflection angle or lensing potential as a Gaussian random field. In this paper, however, we analytically quantify the influence of the non-Gaussianity of large-scale structure (LSS) lenses, arising from nonlinear structure formation, on CMB lensing measurements. In particular, evaluating the impact of the nonzero bispectrum of large-scale structure on the relevant CMB four-point correlation functions, we find that there is a bias to estimates of the CMB lensing power spectrum. For temperature-based lensing reconstruction with CMB stage III and stage IV experiments, we find that this lensing power spectrum bias is negative and is of order 1% of the signal. This corresponds to a shift of multiple standard deviations for these upcoming experiments. We caution, however, that our numerical calculation only evaluates two of the largest bias terms and, thus, only provides an approximate estimate of the full bias. We conclude that further investigation into lensing biases from nonlinear structure formation is required and that these biases should be accounted for in future lensing analyses.
Time-sliced perturbation theory for large scale structure I: general formalism
NASA Astrophysics Data System (ADS)
Blas, Diego; Garny, Mathias; Ivanov, Mikhail M.; Sibiryakov, Sergey
2016-07-01
We present a new analytic approach to describe large scale structure formation in the mildly non-linear regime. The central object of the method is the time-dependent probability distribution function generating correlators of the cosmological observables at a given moment of time. Expanding the distribution function around the Gaussian weight we formulate a perturbative technique to calculate non-linear corrections to cosmological correlators, similar to the diagrammatic expansion in a three-dimensional Euclidean quantum field theory, with time playing the role of an external parameter. For the physically relevant case of cold dark matter in an Einstein-de Sitter universe, the time evolution of the distribution function can be found exactly and is encapsulated by a time-dependent coupling constant controlling the perturbative expansion. We show that all building blocks of the expansion are free from spurious infrared enhanced contributions that plague the standard cosmological perturbation theory. This paves the way towards the systematic resummation of infrared effects in large scale structure formation. We also argue that the approach proposed here provides a natural framework to account for the influence of short-scale dynamics on larger scales along the lines of effective field theory.
The large-scale anisotropy with the PAMELA calorimeter
NASA Astrophysics Data System (ADS)
Karelin, A.; Adriani, O.; Barbarino, G.; Bazilevskaya, G.; Bellotti, R.; Boezio, M.; Bogomolov, E.; Bongi, M.; Bonvicini, V.; Bottai, S.; Bruno, A.; Cafagna, F.; Campana, D.; Carbone, R.; Carlson, P.; Casolino, M.; Castellini, G.; De Donato, C.; De Santis, C.; De Simone, N.; Di Felice, V.; Formato, V.; Galper, A.; Koldashov, S.; Koldobskiy, S.; Krut'kov, S.; Kvashnin, A.; Leonov, A.; Malakhov, V.; Marcelli, L.; Martucci, M.; Mayorov, A.; Menn, W.; Mergé, M.; Mikhailov, V.; Mocchiutti, E.; Monaco, A.; Mori, N.; Munini, R.; Osteria, G.; Palma, F.; Panico, B.; Papini, P.; Pearce, M.; Picozza, P.; Ricci, M.; Ricciarini, S.; Sarkar, R.; Simon, M.; Scotti, V.; Sparvoli, R.; Spillantini, P.; Stozhkov, Y.; Vacchi, A.; Vannuccini, E.; Vasilyev, G.; Voronov, S.; Yurkin, Y.; Zampa, G.; Zampa, N.
2015-10-01
The large-scale anisotropy (or the so-called star-diurnal wave) has been studied using the calorimeter of the space-borne experiment PAMELA. The cosmic ray anisotropy has been obtained for the Southern and Northern hemispheres simultaneously in the equatorial coordinate system for the time period 2006-2014. The dipole amplitude and phase have been measured for energies of 1-20 TeV n⁻¹.
Report on large scale molten core/magnesia interaction test
Chu, T.Y.; Bentz, J.H.; Arellano, F.E.; Brockmann, J.E.; Field, M.E.; Fish, J.D.
1984-08-01
A molten core/material interaction experiment was performed at the Large-Scale Melt Facility at Sandia National Laboratories. The experiment involved the release of 230 kg of core melt, heated to 2923 K, into a magnesia brick crucible. Descriptions of the facility, the melting technology, as well as results of the experiment, are presented. Preliminary evaluations of the results indicate that magnesia brick can be a suitable material for core ladle construction.
Analysis plan for 1985 large-scale tests. Technical report
McMullan, F.W.
1983-01-01
The purpose of this effort is to assist DNA in planning for large-scale (upwards of 5000 tons) detonations of conventional explosives in the 1985 and beyond time frame. Primary research objectives were to investigate potential means to increase blast duration and peak pressures. This report identifies and analyzes several candidate explosives. It examines several charge designs and identifies advantages and disadvantages of each. Other factors including terrain and multiburst techniques are addressed as are test site considerations.
Simulating Weak Lensing by Large-Scale Structure
NASA Astrophysics Data System (ADS)
Vale, Chris; White, Martin
2003-08-01
We model weak gravitational lensing of light by large-scale structure using ray tracing through N-body simulations. The method is described with particular attention paid to numerical convergence. We investigate some of the key approximations in the multiplane ray-tracing algorithm. Our simulated shear and convergence maps are used to explore how well standard assumptions about weak lensing hold, especially near large peaks in the lensing signal.
The Phoenix series large scale LNG pool fire experiments.
Simpson, Richard B.; Jensen, Richard Pearson; Demosthenous, Byron; Luketa, Anay Josephine; Ricks, Allen Joseph; Hightower, Marion Michael; Blanchat, Thomas K.; Helmick, Paul H.; Tieszen, Sheldon Robert; Deola, Regina Anne; Mercier, Jeffrey Alan; Suo-Anttila, Jill Marie; Miller, Timothy J.
2010-12-01
The increasing demand for natural gas could increase the number and frequency of Liquefied Natural Gas (LNG) tanker deliveries to ports across the United States. Because of the increasing number of shipments and the number of possible new facilities, concerns about the safety of the public and property from accidental, and even more importantly intentional, spills have increased. While improvements have been made over the past decade in assessing hazards from LNG spills, the existing experimental data are much smaller in size and scale than many postulated large accidental and intentional spills. Since the physics and hazards of a fire change with fire size, there are concerns about the adequacy of current hazard prediction techniques for large LNG spills and fires. To address these concerns, Congress funded the Department of Energy (DOE) in 2008 to conduct a series of laboratory and large-scale LNG pool fire experiments at Sandia National Laboratories (Sandia) in Albuquerque, New Mexico. This report presents the test data and results of both sets of fire experiments. A series of five reduced-scale (gas burner) tests (yielding 27 sets of data) was conducted in 2007 and 2008 at Sandia's Thermal Test Complex (TTC) to assess flame height to fire diameter ratios as a function of nondimensional heat release rate for extrapolation to large-scale LNG fires. The large-scale LNG pool fire experiments were conducted in a 120 m diameter pond specially designed and constructed in Sandia's Area III large-scale test complex. Two fire tests of LNG spills of 21 and 81 m in diameter were conducted in 2009 to improve the understanding of flame height, smoke production, and burn rate, and therefore the physics and hazards of large LNG spills and fires.
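The flame height to fire diameter ratio studied in the reduced-scale tests is commonly correlated with the nondimensional heat release rate Q*. As a hedged illustration only (a correlation of the widely used Heskestad form with textbook constants, not the fit obtained from the Phoenix experiments):

```python
def flame_height_ratio(q_star: float) -> float:
    """Flame height / pool diameter from a Heskestad-form correlation,
    L/D = -1.02 + 3.7 * Q*^(2/5). Constants are generic textbook values,
    not fitted to the Phoenix LNG data."""
    return -1.02 + 3.7 * q_star ** 0.4

# Illustrative sweep over nondimensional heat release rates
for q in (0.1, 1.0, 10.0):
    print(f"Q* = {q:>4}: L/D = {flame_height_ratio(q):.2f}")
```

The point of the reduced-scale burner tests is precisely to check how well such correlations extrapolate to the very large Q* regimes of 21 m and 81 m LNG pool fires.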
The Large-scale Structure of Scientific Method
NASA Astrophysics Data System (ADS)
Kosso, Peter
2009-01-01
The standard textbook description of the nature of science describes the proposal, testing, and acceptance of a theoretical idea almost entirely in isolation from other theories. The resulting model of science is a kind of piecemeal empiricism that misses the important network structure of scientific knowledge. Only the large-scale description of scientific method can reveal the global interconnectedness of scientific knowledge that is an essential part of what makes science scientific.
Space transportation booster engine thrust chamber technology, large scale injector
NASA Technical Reports Server (NTRS)
Schneider, J. A.
1993-01-01
The objective of the Large Scale Injector (LSI) program was to deliver a 21 inch diameter, 600,000 lbf thrust class injector to NASA/MSFC for hot fire testing. The hot fire test program would demonstrate the feasibility and integrity of the full scale injector, including combustion stability, chamber wall compatibility (thermal management), and injector performance. The 21 inch diameter injector was delivered in September of 1991.
Large-Scale Weather Disturbances in Mars’ Southern Extratropics
NASA Astrophysics Data System (ADS)
Hollingsworth, Jeffery L.; Kahre, Melinda A.
2015-11-01
Between late autumn and early spring, Mars’ middle and high latitudes within its atmosphere support strong mean thermal gradients between the tropics and poles. Observations from both the Mars Global Surveyor (MGS) and Mars Reconnaissance Orbiter (MRO) indicate that this strong baroclinicity supports intense, large-scale eastward traveling weather systems (i.e., transient synoptic-period waves). These extratropical weather disturbances are key components of the global circulation. Such wave-like disturbances act as agents in the transport of heat and momentum, and generalized scalar/tracer quantities (e.g., atmospheric dust, water-vapor and ice clouds). The character of large-scale, traveling extratropical synoptic-period disturbances in Mars' southern hemisphere during late winter through early spring is investigated using a moderately high-resolution Mars global climate model (Mars GCM). This Mars GCM imposes interactively lifted and radiatively active dust based on a threshold value of the surface stress. The model exhibits a reasonable "dust cycle" (i.e., globally averaged, a dustier atmosphere during southern spring and summer occurs). Compared to their northern-hemisphere counterparts, southern synoptic-period weather disturbances and accompanying frontal waves have smaller meridional and zonal scales, and are far less intense. Influences of the zonally asymmetric (i.e., east-west varying) topography on southern large-scale weather are examined. Simulations that adapt Mars’ full topography compared to simulations that utilize synthetic topographies emulating key large-scale features of the southern middle latitudes indicate that Mars’ transient barotropic/baroclinic eddies are highly influenced by the great impact basins of this hemisphere (e.g., Argyre and Hellas). The occurrence of a southern storm zone in late winter and early spring appears to be anchored to the western hemisphere via orographic influences from the Tharsis highlands, and the Argyre
Multivariate Clustering of Large-Scale Scientific Simulation Data
Eliassi-Rad, T; Critchlow, T
2003-06-13
Simulations of complex scientific phenomena involve the execution of massively parallel computer programs. These simulation programs generate large-scale data sets over the spatio-temporal space. Modeling such massive data sets is an essential step in helping scientists discover new information from their computer simulations. In this paper, we present a simple but effective multivariate clustering algorithm for large-scale scientific simulation data sets. Our algorithm utilizes the cosine similarity measure to cluster the field variables in a data set. Field variables include all variables except the spatial (x, y, z) and temporal (time) variables. The exclusion of the spatial dimensions is important since "similar" characteristics could be located (spatially) far from each other. To scale our multivariate clustering algorithm for large-scale data sets, we take advantage of the geometrical properties of the cosine similarity measure. This allows us to reduce the modeling time from O(n²) to O(n × g(f(u))), where n is the number of data points, f(u) is a function of the user-defined clustering threshold, and g(f(u)) is the number of data points satisfying f(u). We show that on average g(f(u)) is much less than n. Finally, even though spatial variables do not play a role in building clusters, it is desirable to associate each cluster with its correct spatial region. To achieve this, we present a linking algorithm for connecting each cluster to the appropriate nodes of the data set's topology tree (where the spatial information of the data set is stored). Our experimental evaluations on two large-scale simulation data sets illustrate the value of our multivariate clustering and linking algorithms.
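The idea of thresholded cosine-similarity clustering, where each point joins an existing cluster only if it is similar enough, can be sketched minimally. This greedy single-pass version is a simplified stand-in for the authors' algorithm (all names are mine, and it omits their geometric speed-up), but it shows why the cost scales with the number of candidate comparisons rather than with all n² pairs:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors; 0.0 for zero vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def threshold_cluster(points, threshold):
    """Greedy clustering: each point joins the first cluster whose seed
    is within the cosine-similarity threshold, else seeds a new cluster."""
    centers, labels = [], []
    for p in points:
        for i, c in enumerate(centers):
            if cosine_similarity(p, c) >= threshold:
                labels.append(i)
                break
        else:
            centers.append(p)
            labels.append(len(centers) - 1)
    return labels

# Two tight directional groups of field-variable vectors (toy data)
pts = [(1.0, 0.0), (0.9, 0.1), (0.0, 1.0), (-0.1, 0.95)]
print(threshold_cluster(pts, 0.9))  # → [0, 0, 1, 1]
```

Note that only directions matter here, not magnitudes or spatial positions, which mirrors the paper's deliberate exclusion of the spatial (x, y, z) coordinates from the similarity measure.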
Multivariate Clustering of Large-Scale Simulation Data
Eliassi-Rad, T; Critchlow, T
2003-03-04
Simulations of complex scientific phenomena involve the execution of massively parallel computer programs. These simulation programs generate large-scale data sets over the spatiotemporal space. Modeling such massive data sets is an essential step in helping scientists discover new information from their computer simulations. In this paper, we present a simple but effective multivariate clustering algorithm for large-scale scientific simulation data sets. Our algorithm utilizes the cosine similarity measure to cluster the field variables in a data set. Field variables include all variables except the spatial (x, y, z) and temporal (time) variables. The exclusion of the spatial dimensions is important since "similar" characteristics could be located (spatially) far from each other. To scale our multivariate clustering algorithm for large-scale data sets, we take advantage of the geometrical properties of the cosine similarity measure. This allows us to reduce the modeling time from O(n²) to O(n × g(f(u))), where n is the number of data points, f(u) is a function of the user-defined clustering threshold, and g(f(u)) is the number of data points satisfying the threshold f(u). We show that on average g(f(u)) is much less than n. Finally, even though spatial variables do not play a role in building a cluster, it is desirable to associate each cluster with its correct spatial region. To achieve this, we present a linking algorithm for connecting each cluster to the appropriate nodes of the data set's topology tree (where the spatial information of the data set is stored). Our experimental evaluations on two large-scale simulation data sets illustrate the value of our multivariate clustering and linking algorithms.
Onishchenko, O. G.; Horton, W.; Scullion, E.; Fedun, V.
2015-12-15
A new type of large-scale vortex structure of dispersionless Alfvén waves in collisionless plasma is investigated. It is shown that Alfvén waves can propagate in the form of Alfvén vortices of finite characteristic radius, characterised by magnetic flux ropes carrying orbital angular momentum. The structure of the toroidal and radial velocity, the fluid and magnetic field vorticity, and the longitudinal electric current in the plane orthogonal to the external magnetic field are discussed.
Relic vector field and CMB large scale anomalies
Chen, Xingang; Wang, Yi E-mail: yw366@cam.ac.uk
2014-10-01
We study the most general effects of relic vector fields on the inflationary background and density perturbations. Such effects are observable if the number of inflationary e-folds is close to the minimum requirement to solve the horizon problem. We show that this can potentially explain two CMB large scale anomalies: the quadrupole-octopole alignment and the quadrupole power suppression. We discuss its effect on the parity anomaly. We also provide an analytical template for more detailed data comparison.
NASA Astrophysics Data System (ADS)
Onishchenko, O. G.; Pokhotelov, O. A.; Horton, W.; Scullion, E.; Fedun, V.
2015-12-01
A new type of large-scale vortex structure of dispersionless Alfvén waves in collisionless plasma is investigated. It is shown that Alfvén waves can propagate in the form of Alfvén vortices of finite characteristic radius, characterised by magnetic flux ropes carrying orbital angular momentum. The structure of the toroidal and radial velocity, the fluid and magnetic field vorticity, and the longitudinal electric current in the plane orthogonal to the external magnetic field are discussed.
Turbulent large-scale structure effects on wake meandering
NASA Astrophysics Data System (ADS)
Muller, Y.-A.; Masson, C.; Aubrun, S.
2015-06-01
This work studies the effects of large-scale turbulent structures on wake meandering using Large Eddy Simulations (LES) over an actuator disk. Other potential sources of wake meandering, such as the instability mechanisms associated with tip vortices, are not treated in this study. A crucial element of the efficient, pragmatic and successful simulation of large-scale turbulent structures in the Atmospheric Boundary Layer (ABL) is the generation of the stochastic turbulent atmospheric flow. This is an essential capability since one source of wake meandering is these large (larger than the turbine diameter) turbulent structures. The unsteady wind turbine wake in the ABL is simulated using a combination of LES and actuator disk approaches. In order to dedicate the large majority of the available computing power to the wake, the ABL ground region of the flow is not part of the computational domain. Instead, mixed Dirichlet/Neumann boundary conditions are applied at all the computational surfaces except at the outlet. Prescribed values for the Dirichlet contribution of these boundary conditions are provided by a stochastic turbulent wind generator. This allows the simulation of large-scale turbulent structures (larger than the computational domain), leading to an efficient simulation technique for wake meandering. Since the stochastic wind generator includes shear, turbulence production is included in the analysis without the necessity of resolving the flow near the ground. The classical Smagorinsky sub-grid model is used. The resulting numerical methodology has been implemented in OpenFOAM. Comparisons with experimental measurements in porous-disk wakes have been undertaken, and the agreement is good. While temporal resolution in experimental measurements is high, the spatial resolution is often too low. LES numerical results provide a more complete spatial description of the flow. They tend to demonstrate that inflow low-frequency content, or large-scale turbulent structures, is
A Cloud Computing Platform for Large-Scale Forensic Computing
NASA Astrophysics Data System (ADS)
Roussev, Vassil; Wang, Liqiang; Richard, Golden; Marziale, Lodovico
The timely processing of massive digital forensic collections demands the use of large-scale distributed computing resources and the flexibility to customize the processing performed on the collections. This paper describes MPI MapReduce (MMR), an open implementation of the MapReduce processing model that outperforms traditional forensic computing techniques. MMR provides linear scaling for CPU-intensive processing and super-linear scaling for indexing-related workloads.