Nonlinear large-scale optimization with WORHP
NASA Astrophysics Data System (ADS)
Nikolayzik, Tim; Büskens, Christof; Gerdts, Matthias
Nonlinear optimization has grown into a key technology in many areas of the aerospace industry, e.g. satellite control, shape optimization, aerodynamics, trajectory planning, reentry problems, and interplanetary flights. One of the most extensive areas is the optimization of trajectories for aerospace applications. These problems typically are discretized optimal control problems, which lead to large sparse nonlinear optimization problems. In the end, all these problems from different areas can be described by the general formulation of a nonlinear optimization problem. WORHP is designed to solve nonlinear optimization problems with more than one million variables and one million constraints. WORHP uses many advanced techniques, e.g. reverse communication, to make the optimization process as efficient and as controllable by the user as possible. The solver has nine different interfaces, e.g. to MATLAB/Simulink and AMPL. Tests have shown that WORHP is a very robust and promising solver. Several examples from space applications are presented.
Large scale nonlinear programming for the optimization of spacecraft trajectories
NASA Astrophysics Data System (ADS)
Arrieta-Camacho, Juan Jose
... Future research directions are identified, involving the automatic scheduling and optimization of trajectory correction maneuvers. The sensitivity information provided by the methodology is expected to be invaluable in such research pursuits. The collocation scheme and nonlinear programming algorithm presented in this work complement other existing methodologies by providing reliable and efficient numerical methods able to handle large-scale, nonlinear dynamic models.
Developing and Understanding Methods for Large-Scale Nonlinear Optimization
2006-07-24
algorithms for large-scale unconstrained and constrained optimization problems, including limited-memory methods for problems with many thousands... Published in peer-reviewed journals: E. Eskow, B. Bader, R. Byrd, S. Crivelli, T. Head-Gordon, V. Lamberti and R. Schnabel, "An optimization approach to the...
Developing and Understanding Methods for Large Scale Nonlinear Optimization
2001-12-01
development of new algorithms for large-scale unconstrained and constrained optimization problems, including limited-memory methods for problems with... "analysis of tensor and SQP methods for singular constrained optimization", to appear in SIAM Journal on Optimization. Published in peer-reviewed... Mathematica, Vol III, Journal der Deutschen Mathematiker-Vereinigung, 1998. S. Crivelli, B. Bader, R. Byrd, E. Eskow, V. Lamberti, R. Schnabel and T...
On large-scale nonlinear programming techniques for solving optimal control problems
Faco, J.L.D.
1994-12-31
The formulation of decision problems by optimal control theory allows the consideration of their dynamic structure and parameter estimation. This paper deals with techniques for choosing search directions in the iterative solution of discrete-time optimal control problems. A unified formulation incorporates nonlinear performance criteria and dynamic equations, time delays, bounded state and control variables, a free planning horizon and a variable initial state vector. Such problems are generally characterized by a large number of variables, especially when they arise from the discretization of continuous-time optimal control or calculus of variations problems. In a GRG context, the staircase structure of the Jacobian matrix of the dynamic equations is exploited in the choice of basic and superbasic variables and when changes of basis occur along the process. The search directions of the bound-constrained nonlinear programming problem in the reduced space of the superbasic variables are computed by large-scale NLP techniques. A modified Polak-Ribière conjugate gradient method and a limited-storage quasi-Newton BFGS method are analyzed, and modifications to deal with the bounds on the variables are suggested, based on projected-gradient devices with specific line searches. Some practical models are presented for electric generation planning and fishery management, and the application of the code GRECO (Gradient REduit pour la Commande Optimale) is discussed.
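As a concrete illustration of the projected-gradient devices mentioned above, here is a minimal sketch of a bound-constrained projected-gradient iteration. The quadratic objective, the box bounds and the fixed step size are illustrative assumptions, not taken from the GRECO code.

```python
# Minimal sketch of a projected-gradient iteration for bound constraints.
# The quadratic objective and the box [-1, 1] are illustrative assumptions.
import numpy as np

def projected_gradient(grad, x0, lo, hi, step=0.1, iters=200):
    """Gradient descent with projection onto the box [lo, hi]."""
    x = x0.astype(float)
    for _ in range(iters):
        x = np.clip(x - step * grad(x), lo, hi)  # project after every step
    return x

# Example: minimize ||x - c||^2 where c lies partly outside the box.
c = np.array([2.0, -3.0, 0.5])
grad = lambda x: 2.0 * (x - c)
x_star = projected_gradient(grad, np.zeros(3), lo=-1.0, hi=1.0)
# The minimizer is c clipped onto the box: [1.0, -1.0, 0.5]
```

For this separable objective the method simply clips each component of c onto the feasible box, which makes the result easy to check by hand.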
NASA Technical Reports Server (NTRS)
Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)
2002-01-01
The purpose of this research under the NASA Small Business Innovative Research program was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s external penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem in excess of 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.
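The external penalty idea behind BIGDOT can be sketched as follows. The toy problem, the step-size rule and the penalty growth schedule are all illustrative assumptions, not the actual BIGDOT implementation.

```python
# Hedged sketch of the classical external (exterior) penalty method:
# a constrained problem becomes a sequence of unconstrained ones with a
# growing penalty weight r. All problem data below are illustrative.
import numpy as np

def exterior_penalty(grad_f, g, grad_g, x0, r0=1.0, growth=10.0, outer=6, inner=500):
    """Minimize f(x) subject to g(x) <= 0 by penalizing r * max(0, g(x))^2."""
    x, r = x0.astype(float), r0
    for _ in range(outer):
        step = 0.01 / (1.0 + r)          # shrink the step as the penalty stiffens
        for _ in range(inner):           # crude gradient descent on the penalized objective
            viol = max(0.0, g(x))
            x = x - step * (grad_f(x) + 2.0 * r * viol * grad_g(x))
        r *= growth                      # tighten the penalty each outer cycle
    return x

# Example: minimize x0^2 + x1^2 subject to x0 + x1 >= 1, i.e. g(x) = 1 - x0 - x1 <= 0.
grad_f = lambda x: 2.0 * x
g = lambda x: 1.0 - x[0] - x[1]
grad_g = lambda x: np.array([-1.0, -1.0])
x_star = exterior_penalty(grad_f, g, grad_g, np.array([0.0, 0.0]))
# Approaches the true optimum (0.5, 0.5) from the infeasible side as r grows.
```

Note how the iterates stay slightly infeasible until r is large, which is the characteristic behavior of exterior penalty methods the report alludes to.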
Large-scale structural optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.
1983-01-01
Problems encountered by aerospace designers in attempting to optimize whole aircraft are discussed, along with possible solutions. Large-scale optimization, as opposed to component-by-component optimization, is hindered by computational costs, software inflexibility, concentration on a single rather than a trade-off design methodology, and the incompatibility of large-scale optimization with single-program, single-computer methods. The software problem can be approached by placing the full analysis outside of the optimization loop; full analysis is then performed only periodically. Problem-dependent software can be removed from the generic code using a systems programming technique and made to embody the definitions of design variables, objective function and design constraints. Trade-off algorithms can be used at the design points to obtain quantitative answers. Finally, decomposing the large-scale problem into independent subproblems allows systematic optimization of the problems by an organization of people and machines.
Carlberg, Kevin Thomas; Drohmann, Martin; Tuminaro, Raymond S.; Boggs, Paul T.; Ray, Jaideep; van Bloemen Waanders, Bart Gustaaf
2014-10-01
Model reduction for dynamical systems is a promising approach for reducing the computational cost of large-scale physics-based simulations to enable high-fidelity models to be used in many-query (e.g., Bayesian inference) and near-real-time (e.g., fast-turnaround simulation) contexts. While model reduction works well for specialized problems such as linear time-invariant systems, it is much more difficult to obtain accurate, stable, and efficient reduced-order models (ROMs) for systems with general nonlinearities. This report describes several advances that enable nonlinear ROMs to be deployed in a variety of time-critical settings. First, we present an error bound for the Gauss-Newton with Approximated Tensors (GNAT) nonlinear model reduction technique. This bound allows the state-space error for the GNAT method to be quantified when applied with the backward Euler time-integration scheme. Second, we present a methodology for preserving classical Lagrangian structure in nonlinear model reduction. This technique guarantees that important properties, such as energy conservation and symplectic time-evolution maps, are preserved when performing model reduction for models described by a Lagrangian formalism (e.g., molecular dynamics, structural dynamics). Third, we present a novel technique for decreasing the temporal complexity, defined as the number of Newton-like iterations performed over the course of the simulation, by exploiting time-domain data. Fourth, we describe a novel method for refining projection-based reduced-order models a posteriori using a goal-oriented framework similar to mesh-adaptive h-refinement in finite elements. The technique allows the ROM to generate arbitrarily accurate solutions, thereby providing the ROM with a 'failsafe' mechanism in the event of insufficient training data. Finally, we present the reduced-order model error surrogate (ROMES) method for statistically quantifying reduced-order-model errors. This...
NASA Astrophysics Data System (ADS)
Mohamed, Nur Syarafina; Mamat, Mustafa; Rivaie, Mohd
2016-11-01
Conjugate gradient (CG) methods are among the main tools in optimization. Due to their low computational memory requirements, these methods are used to solve many nonlinear unconstrained optimization problems arising in design, economics, physics and engineering. In this paper, a new modification of the CG family coefficient (βk) is proposed which possesses global convergence under an exact line search. Numerical experimental results based on the number of iterations and central processing unit (CPU) time show that the new βk performs better than some other well-known CG methods on some standard test functions.
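For orientation, here is a minimal sketch of a nonlinear CG iteration with the classical Polak-Ribière-Polyak coefficient; the paper's modified βk is not reproduced, and the quadratic test problem (which admits a closed-form exact line search) is an illustrative assumption.

```python
# Sketch of a CG iteration with the Polak-Ribiere-Polyak (PRP) coefficient.
# On a quadratic with exact line search this reduces to linear CG, so the
# illustrative problem below is solved essentially exactly in n steps.
import numpy as np

def prp_cg(A, b, x0, iters=100, tol=1e-10):
    """Minimize 0.5 x^T A x - b^T x; A symmetric positive definite."""
    x = x0.astype(float)
    g = A @ x - b                             # gradient
    d = -g                                    # initial search direction
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        alpha = -(g @ d) / (d @ (A @ d))      # exact line search along d
        x = x + alpha * d
        g_new = A @ x - b
        beta = g_new @ (g_new - g) / (g @ g)  # the PRP coefficient beta_k
        d = -g_new + beta * d
        g = g_new
    return x

A = np.diag([1.0, 10.0, 100.0])               # ill-conditioned diagonal quadratic
b = np.ones(3)
x_star = prp_cg(A, b, np.zeros(3))
# Matches the linear solve: x_star ≈ A^{-1} b = [1, 0.1, 0.01]
```

Different choices of beta (Fletcher-Reeves, PRP, and the many modified coefficients in the literature) coincide on this quadratic but behave very differently on general nonlinear objectives.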
Distributed Coordinated Control of Large-Scale Nonlinear Networks
Kundu, Soumya; Anghel, Marian
2015-11-08
We provide a distributed coordinated approach to the stability analysis and control design of large-scale nonlinear dynamical systems by using a vector Lyapunov functions approach. In this formulation the large-scale system is decomposed into a network of interacting subsystems and the stability of the system is analyzed through a comparison system. However, finding such a comparison system is not trivial. In this work, we propose a sum-of-squares based, completely decentralized approach for computing the comparison systems for networks of nonlinear systems. Moreover, based on the comparison systems, we introduce a distributed optimal control strategy in which the individual subsystems (agents) coordinate with their immediate neighbors to design local control policies that can exponentially stabilize the full system under initial disturbances. We illustrate the control algorithm on a network of interacting Van der Pol systems.
Robust large-scale parallel nonlinear solvers for simulations.
Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson
2005-11-01
This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any existing linear solver, which makes it simple to write
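A dense-matrix sketch of Broyden's "good" rank-one update on a small nonlinear system is shown below; the report's limited-memory variant instead stores update vectors to avoid forming B, and the 2-D test system here is an illustrative assumption.

```python
# Hedged sketch of Broyden's method: solve F(x) = 0 without any Jacobian code,
# maintaining a rank-one-updated approximation B of the Jacobian.
import numpy as np

def broyden(F, x0, iters=50, tol=1e-10):
    x = x0.astype(float)
    B = np.eye(len(x))                          # initial Jacobian approximation
    Fx = F(x)
    for _ in range(iters):
        if np.linalg.norm(Fx) < tol:
            break
        s = np.linalg.solve(B, -Fx)             # quasi-Newton step
        x_new = x + s
        Fx_new = F(x_new)
        y = Fx_new - Fx
        B += np.outer(y - B @ s, s) / (s @ s)   # rank-one secant update
        x, Fx = x_new, Fx_new
    return x

# Example system: x0^2 + x1 = 3 and x0 + x1^2 = 5 (illustrative; root at (1, 2)).
F = lambda x: np.array([x[0] ** 2 + x[1] - 3.0, x[0] + x[1] ** 2 - 5.0])
root = broyden(F, np.array([1.0, 1.0]))
```

In a limited-memory setting, the explicit matrix B and the dense solve would be replaced by a compact representation built from the stored (s, y) pairs, which is what makes the approach viable at large scale.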
Large-scale optimization of neuron arbors
NASA Astrophysics Data System (ADS)
Cherniak, Christopher; Changizi, Mark; Won Kang, Du
1999-05-01
At the global as well as local scales, some of the geometry of types of neuron arbors (both dendrites and axons) appears to be self-organizing: their morphogenesis behaves like flowing water, that is, fluid-dynamically; water flow in branching networks in turn acts like a tree composed of cords under tension, that is, vector-mechanically. Branch diameters, angles and junction sites conform significantly to this model. The result is that such neuron tree samples globally minimize their total volume, rather than, for example, surface area or branch length. In addition, the arbors perform well at generating the cheapest topology interconnecting their terminals: their large-scale layouts are among the best of all such possible connecting patterns, coming within 5% of optimum. This model also applies comparably to arterial and river networks.
The workshop on iterative methods for large scale nonlinear problems
Walker, H.F.; Pernice, M.
1995-12-01
The aim of the workshop was to bring together researchers working on large-scale applications with numerical specialists of various kinds. Applications that were addressed included reactive flows (combustion and other chemically reacting flows, tokamak modeling), porous media flows, cardiac modeling, chemical vapor deposition, image restoration, macromolecular modeling, and population dynamics. Numerical areas included Newton iterative (truncated Newton) methods, Krylov subspace methods, domain decomposition and other preconditioning methods, large-scale optimization and optimal control, and parallel implementations and software. This report offers a brief summary of workshop activities and information about the participants. Interested readers are encouraged to look into the online proceedings available at http://www.usi.utah.edu/logan.proceedings. There, the material offered here is augmented with hypertext abstracts that include links to locations such as speakers' home pages, PostScript copies of talks and papers, cross-references to related talks, and other information about topics addressed at the workshop.
Large-Scale Optimization for Bayesian Inference in Complex Systems
Willcox, Karen; Marzouk, Youssef
2013-11-12
The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower-dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their...
Geospatial Optimization of Siting Large-Scale Solar Projects
Macknick, J.; Quinby, T.; Caulfield, E.; Gerritsen, M.; Diffendorfer, J.; Haines, S.
2014-03-01
Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.
Optimal Wind Energy Integration in Large-Scale Electric Grids
NASA Astrophysics Data System (ADS)
Albaijat, Mohammad H.
The major concern in electric grid operation is operating in the most economical and reliable fashion to ensure affordability and continuity of electricity supply. This dissertation investigates challenges that affect electric grid reliability and economic operation: 1. congestion of transmission lines, 2. transmission line expansion, 3. large-scale wind energy integration, and 4. optimal placement of Phasor Measurement Units (PMUs) for highest electric grid observability. Congestion analysis aids in evaluating the required increase of transmission line capacity in electric grids. However, any expansion of transmission line capacity must be evaluated with methods that ensure optimal electric grid operation; that is, the expansion must enable grid operators to provide low-cost electricity while maintaining reliable operation of the electric grid. Because congestion affects the reliability of delivering power and increases its cost, congestion analysis in electric grid networks is an important subject, and next-generation electric grids require novel methodologies for studying and managing congestion. We suggest a novel method of long-term congestion management in large-scale electric grids. Owing to the complexity and size of transmission line systems and the competitive nature of current grid operation, it is important for electric grid operators to determine how much transmission line capacity to add. The traditional questions requiring answers are "where" to add it, "how much" capacity to add, and "at which voltage level". Because of electric grid deregulation, transmission line expansion is more complicated, as it is now open to investors, whose main interest is to generate revenue, to build new transmission lines. Adding new transmission capacity will help the system to relieve transmission congestion, create...
Cloud-based large-scale air traffic flow optimization
NASA Astrophysics Data System (ADS)
Cao, Yi
The ever-increasing traffic demand makes the efficient use of airspace an imperative mission, and this paper presents an effort in response to this call. Firstly, a new aggregate model, called the Link Transmission Model (LTM), is proposed, which models the nationwide traffic as a network of flight routes identified by origin-destination pairs. The traversal time of a flight route is assumed to be the mode of the distribution of historical flight records, and the mode is estimated by using kernel density estimation. As this simplification abstracts away physical trajectory details, the complexity of modeling is drastically decreased, resulting in efficient traffic forecasting. The predictive capability of LTM is validated against recorded traffic data. Secondly, a nationwide traffic flow optimization problem with airport and en route capacity constraints is formulated based on LTM. The optimization problem aims at alleviating traffic congestion with minimal global delays. This problem is intractable due to millions of variables. A dual decomposition method is applied to decompose the large-scale problem so that the subproblems are solvable. However, the whole problem is still computationally expensive to solve, since each subproblem is a smaller integer programming problem that pursues integer solutions. Solving an integer programming problem is known to be far more time-consuming than solving its linear relaxation. In addition, sequential execution on a standalone computer leads to a linear runtime increase when the problem size increases. To address the computational efficiency problem, a parallel computing framework is designed which accommodates concurrent executions via multithreaded programming. The multithreaded version is compared with its monolithic version to show decreased runtime. Finally, an open-source cloud computing framework, Hadoop MapReduce, is employed for better scalability and reliability. This framework is an "off-the-shelf" parallel computing model.
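The dual decomposition step can be illustrated on a toy separable problem: subproblems are solved independently and coupled only through a price on the shared capacity. The quadratic subproblem costs and the capacity value are illustrative assumptions, not the thesis's traffic model.

```python
# Toy sketch of dual decomposition: minimize sum_i (x_i - a_i)^2 subject to
# sum_i x_i <= C. Dualizing the coupling constraint with multiplier lam makes
# the problem separable; lam is driven by dual (price) subgradient ascent.
import numpy as np

a = np.array([4.0, 3.0, 2.0])   # each subproblem "wants" x_i = a_i
C = 6.0                         # shared capacity (binding, since sum(a) = 9 > 6)

lam, step = 0.0, 0.1
for _ in range(500):
    x = a - lam / 2.0                           # each subproblem solved independently:
                                                # argmin_x (x_i - a_i)^2 + lam * x_i
    lam = max(0.0, lam + step * (x.sum() - C))  # raise the price while over capacity
# Optimum: a projected onto the capacity constraint -> x = [3, 2, 1], lam = 2
```

In the thesis's setting each subproblem is itself an integer program, which is why the decomposed formulation is then parallelized rather than solved sequentially.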
Nonlinear Generation of shear flows and large scale magnetic fields by small scale turbulence in the ionosphere
NASA Astrophysics Data System (ADS)
Aburjania, G.
2009-04-01
EGU2009-233. Contact: George Aburjania, g.aburjania@gmail.com, aburj@mymail.ge
Wang, Xiaolong; Jiang, Aipeng; Jiangzhou, Shu; Li, Ping
2014-01-01
A large-scale parallel-unit seawater reverse osmosis desalination plant contains many reverse osmosis (RO) units. If the operating conditions change, these RO units will no longer work at the optimal design points that were computed before the plant was built. The operational optimization problem (OOP) of the plant is to find a schedule of operation that minimizes the total running cost when such a change happens. In this paper, the OOP is modelled as a mixed-integer nonlinear programming problem. A two-stage differential evolution algorithm is proposed to solve this OOP. Experimental results show that the proposed method is satisfactory in solution quality. PMID:24701180
Non-linear shrinkage estimation of large-scale structure covariance
NASA Astrophysics Data System (ADS)
Joachimi, Benjamin
2017-03-01
In many astrophysical settings, covariance matrices of large data sets have to be determined empirically from a finite number of mock realizations. The resulting noise degrades inference and precludes it completely if there are fewer realizations than data points. This work applies a recently proposed non-linear shrinkage estimator of covariance to a realistic example from large-scale structure cosmology. After optimizing its performance for the usage in likelihood expressions, the shrinkage estimator yields subdominant bias and variance comparable to that of the standard estimator with a factor of ∼50 less realizations. This is achieved without any prior information on the properties of the data or the structure of the covariance matrix, at a negligible computational cost.
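The estimator used in the paper is non-linear; as a simpler illustration of the same idea, the sketch below applies linear shrinkage toward a diagonal target. The fixed shrinkage intensity is an illustrative assumption (Ledoit-Wolf style rules would estimate it from the data).

```python
# Hedged sketch of covariance shrinkage: blend the noisy sample covariance
# with a structured target to trade a little bias for much less variance.
# Dimensions, sample size and the diagonal truth are illustrative.
import numpy as np

rng = np.random.default_rng(0)
p, n = 20, 50                             # p data points, n mock realizations
true_cov = np.diag(np.linspace(1.0, 5.0, p))
X = rng.multivariate_normal(np.zeros(p), true_cov, size=n)

S = np.cov(X, rowvar=False)               # noisy sample covariance (p x p)
target = np.diag(np.diag(S))              # structured target: keep diagonal only
alpha = 0.5                               # shrinkage intensity (fixed here)
S_shrunk = alpha * target + (1 - alpha) * S

# Off-diagonal noise is damped by the factor (1 - alpha); with a diagonal
# truth, the shrunk estimate is strictly closer in Frobenius norm.
err_raw = np.linalg.norm(S - true_cov)
err_shrunk = np.linalg.norm(S_shrunk - true_cov)
```

The non-linear estimator in the paper goes further by shrinking individual sample eigenvalues by different amounts, rather than applying one global intensity.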
Nonlinear modulation of the HI power spectrum on ultra-large scales. I
Umeh, Obinna; Maartens, Roy; Santos, Mario
2016-03-01
Intensity mapping of the neutral hydrogen brightness temperature promises to provide a three-dimensional view of the universe on very large scales. Nonlinear effects are typically thought to alter only the small-scale power, but we show how they may bias the extraction of cosmological information contained in the power spectrum on ultra-large scales. For linear perturbations to remain valid on large scales, we need to renormalize perturbations at higher order. In the case of intensity mapping, the second-order contribution to clustering from weak lensing dominates the nonlinear contribution at high redshift. Renormalization modifies the mean brightness temperature and therefore the evolution bias. It also introduces a term that mimics white noise. These effects may influence forecasting analysis on ultra-large scales.
Adaptive Fault-Tolerant Control of Uncertain Nonlinear Large-Scale Systems With Unknown Dead Zone.
Chen, Mou; Tao, Gang
2016-08-01
In this paper, an adaptive neural fault-tolerant control scheme is proposed and analyzed for a class of uncertain nonlinear large-scale systems with unknown dead zone and external disturbances. To tackle the unknown nonlinear interaction functions in the large-scale system, the radial basis function neural network (RBFNN) is employed to approximate them. To further handle the unknown approximation errors and the effects of the unknown dead zone and external disturbances, integrated as the compounded disturbances, the corresponding disturbance observers are developed for their estimations. Based on the outputs of the RBFNN and the disturbance observer, the adaptive neural fault-tolerant control scheme is designed for uncertain nonlinear large-scale systems by using a decentralized backstepping technique. The closed-loop stability of the adaptive control system is rigorously proved via Lyapunov analysis and the satisfactory tracking performance is achieved under the integrated effects of unknown dead zone, actuator fault, and unknown external disturbances. Simulation results of a mass-spring-damper system are given to illustrate the effectiveness of the proposed adaptive neural fault-tolerant control scheme for uncertain nonlinear large-scale systems.
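The RBFNN approximation step can be sketched in isolation: fit Gaussian radial-basis weights to a scalar nonlinearity by least squares. The offline least-squares fit and the sine target are illustrative assumptions; the paper instead adapts the weights online inside the control loop.

```python
# Minimal sketch of radial basis function network (RBFNN) approximation of an
# unknown scalar nonlinearity. Centers, width and target are illustrative.
import numpy as np

def rbf_features(x, centers, width=0.5):
    """Gaussian radial basis functions evaluated at scalar inputs x."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))

centers = np.linspace(-3.0, 3.0, 15)
x_train = np.linspace(-3.0, 3.0, 100)
y_train = np.sin(x_train)                 # the "unknown" nonlinearity to approximate

Phi = rbf_features(x_train, centers)                # 100 x 15 design matrix
w, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)   # least-squares weights
y_hat = Phi @ w
max_err = np.max(np.abs(y_hat - y_train))           # small on the training grid
```

In the adaptive-control setting, the same basis is kept but the weight vector w is updated by an adaptation law driven by the tracking error, with the approximation error folded into the compounded disturbance handled by the observer.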
NASA Astrophysics Data System (ADS)
Cheng, Wanyou; Xiao, Yunhai; Hu, Qing-Jie
2009-02-01
In this paper, we propose a family of derivative-free conjugate gradient methods for large-scale nonlinear systems of equations. They come from two modified conjugate gradient methods [W.Y. Cheng, A two-term PRP-based descent method, Numer. Funct. Anal. Optim. 28 (2007) 1217-1230; L. Zhang, W.J. Zhou, D.H. Li, A descent modified Polak-Ribière-Polyak conjugate gradient method and its global convergence, IMA J. Numer. Anal. 26 (2006) 629-640] recently proposed for unconstrained optimization problems. Under appropriate conditions, the global convergence of the proposed method is established. Preliminary numerical results show that the proposed method is promising.
Large-scale spherical fixed bed reactors: Modeling and optimization
Hartig, F.; Keil, F.J.
1993-03-01
Iterative dynamic programming (IDP) according to Luus was used for the optimization of the methanol production in a cascade of spherical reactors. The system of three spherical reactors was compared to an externally cooled tubular reactor and a quench reactor. The reactors were modeled by the pseudohomogeneous and heterogeneous approach. The effectiveness factors of the heterogeneous model were calculated by the dusty gas model. The IDP method was compared with sequential quadratic programming (SQP) and the Box complex method. The optimized distributions of catalyst volume with the pseudohomogeneous and heterogeneous model lead to different results. The IDP method finds the global optimum with high probability. A combination of IDP and SQP provides a reliable optimization procedure that needs minimum computing time.
Solving Large Scale Nonlinear Eigenvalue Problem in Next-Generation Accelerator Design
Liao, Ben-Shan; Bai, Zhaojun; Lee, Lie-Quan; Ko, Kwok (SLAC)
2006-09-28
A number of numerical methods, including inverse iteration, the method of successive linear problems and the nonlinear Arnoldi algorithm, are studied in this paper to solve a large-scale nonlinear eigenvalue problem arising from the finite element analysis of resonant frequencies and external Q_e values of a waveguide-loaded cavity in the next-generation accelerator design. The authors present a nonlinear Rayleigh-Ritz iterative projection algorithm, NRRIT for short, and demonstrate that it is the most promising approach for a model-scale cavity design. The NRRIT algorithm is an extension of the nonlinear Arnoldi algorithm due to Voss. Computational challenges of solving such a nonlinear eigenvalue problem for a full-scale cavity design are outlined.
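Of the methods listed, inverse iteration is the simplest to sketch. The version below targets a small linear symmetric eigenproblem; the cavity problem itself is nonlinear, so this only illustrates the basic building block.

```python
# Hedged sketch of shifted inverse iteration: repeatedly solving with
# (A - shift*I) amplifies the eigenvector whose eigenvalue is nearest the shift.
# The 3x3 diagonal matrix is an illustrative assumption.
import numpy as np

def inverse_iteration(A, shift, iters=50):
    """Return the eigenpair of symmetric A with eigenvalue closest to `shift`."""
    n = A.shape[0]
    v = np.ones(n) / np.sqrt(n)            # generic starting vector
    M = A - shift * np.eye(n)
    for _ in range(iters):
        v = np.linalg.solve(M, v)          # one shifted solve per iteration
        v = v / np.linalg.norm(v)          # renormalize to avoid over/underflow
    lam = v @ A @ v                        # Rayleigh quotient estimate
    return lam, v

A = np.diag([1.0, 4.0, 10.0])
lam, v = inverse_iteration(A, shift=3.5)
# Converges to the eigenvalue nearest 3.5, namely 4.0
```

For a nonlinear eigenproblem T(λ)x = 0, the shifted solve is replaced by solves with T evaluated at the current eigenvalue estimate, which is where methods such as nonlinear Arnoldi and NRRIT come in.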
LM-CMA: An Alternative to L-BFGS for Large-Scale Black Box Optimization.
Loshchilov, Ilya
2017-01-01
Limited-memory BFGS (L-BFGS; Liu and Nocedal, 1989) is often considered to be the method of choice for continuous optimization when first- or second-order information is available. However, the use of L-BFGS can be complicated in a black-box scenario where gradient information is not available and therefore should be numerically estimated. The accuracy of this estimation, obtained by finite difference methods, is often problem-dependent and may lead to premature convergence of the algorithm. This article demonstrates an alternative to L-BFGS, the limited-memory covariance matrix adaptation evolution strategy (LM-CMA) proposed by Loshchilov (2014). LM-CMA is a stochastic derivative-free algorithm for numerical optimization of nonlinear, nonconvex optimization problems. Inspired by L-BFGS, LM-CMA samples candidate solutions according to a covariance matrix reproduced from m direction vectors selected during the optimization process. The decomposition of the covariance matrix into Cholesky factors allows reducing the memory complexity to [Formula: see text], where n is the number of decision variables. The time complexity of sampling one candidate solution is also [Formula: see text], but scales as only about 25 scalar-vector multiplications in practice. The algorithm has an important property of invariance with respect to strictly increasing transformations of the objective function; such transformations do not compromise its ability to approach the optimum. LM-CMA outperforms the original CMA-ES and its large-scale versions on nonseparable ill-conditioned problems with a factor increasing with problem dimension. Invariance properties of the algorithm do not prevent it from demonstrating a comparable performance to L-BFGS on nontrivial large-scale smooth and nonsmooth optimization problems.
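LM-CMA itself is too involved to reproduce here; as a hedged stand-in from the same derivative-free family, below is the simplest evolution strategy, a (1+1)-ES with a 1/5th-success-rule style step-size adaptation, run on a black-box sphere function. All parameter values are illustrative assumptions.

```python
# Minimal (1+1) evolution strategy: one parent, one sampled offspring per
# iteration, with the step size widened on success and contracted on failure
# (the multipliers below roughly implement the classical 1/5th success rule).
import numpy as np

rng = np.random.default_rng(42)

def one_plus_one_es(f, x0, sigma=1.0, iters=2000):
    x = x0.astype(float)
    fx = f(x)
    for _ in range(iters):
        y = x + sigma * rng.standard_normal(len(x))  # sample one candidate
        fy = f(y)
        if fy < fx:
            x, fx = y, fy
            sigma *= 1.22                            # success: widen the search
        else:
            sigma *= 0.95                            # failure: contract it
    return x, fx

sphere = lambda x: float(np.sum(x ** 2))             # black box: no gradients used
x_best, f_best = one_plus_one_es(sphere, np.full(5, 3.0))
```

Note the objective is only ever compared, never differentiated, which is the invariance-to-monotone-transformations property the abstract highlights; CMA-style methods additionally adapt a full sampling covariance rather than a single scalar sigma.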
Segment-Based Predominant Learning Swarm Optimizer for Large-Scale Optimization.
Yang, Qiang; Chen, Wei-Neng; Gu, Tianlong; Zhang, Huaxiang; Deng, Jeremiah D; Li, Yun; Zhang, Jun
2016-10-24
Large-scale optimization has become a significant yet challenging area in evolutionary computation. To address this challenge, this paper proposes a novel segment-based predominant learning swarm optimizer (SPLSO), in which several predominant particles guide the learning of a particle. First, a segment-based learning strategy is proposed to randomly divide the whole set of dimensions into segments. During the update, variables in different segments are evolved by learning from different exemplars, while variables in the same segment are evolved by the same exemplar. Second, to accelerate search speed and enhance search diversity, a predominant learning strategy is also proposed, which lets several predominant particles guide the update of a particle, with each predominant particle responsible for one segment of dimensions. By combining these two learning strategies, SPLSO evolves all dimensions simultaneously and possesses competitive exploration and exploitation abilities. Extensive experiments are conducted on two large-scale benchmark function sets to investigate the influence of each algorithmic component, and comparisons with several state-of-the-art meta-heuristic algorithms for large-scale problems demonstrate the competitive efficiency and effectiveness of the proposed optimizer. Furthermore, the scalability of the optimizer to problems with dimensionality up to 2000 is also verified.
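The segment-based learning strategy described above can be sketched as follows. This is an illustration, not the published update rule: the learning coefficient of 0.5, the exemplar-selection scheme, and the function names are assumptions; only the core mechanism, random partition of dimensions into segments with one exemplar per segment, follows the abstract.

```python
import random

def segment_update(particle, exemplars, num_segments, rng):
    """Segment-based learning sketch: shuffle the dimension indices,
    split them into num_segments segments, and let each segment learn
    from one randomly chosen (predominant) exemplar."""
    n = len(particle)
    idx = list(range(n))
    rng.shuffle(idx)
    # round-robin split of the shuffled indices into segments
    segments = [idx[i::num_segments] for i in range(num_segments)]
    child = list(particle)
    for seg in segments:
        ex = rng.choice(exemplars)  # one exemplar guides this whole segment
        for d in seg:
            # pull dimension d toward the exemplar (coefficient assumed)
            child[d] += 0.5 * rng.random() * (ex[d] - particle[d])
    return child
```

Note that if the only exemplar equals the particle itself, the update leaves it unchanged, which is a quick sanity check on the mechanism.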
Adaptive Optimization Techniques for Large-Scale Stochastic Planning
2011-06-28
cannot be kept longer than a few weeks. The decision maker must decide on blood-type substitutions that minimize the chance of future shortage. Because...optimal blood-type substitution is a large stochastic problem. Another application is managing water reservoirs. In this domain, an operator needs to decide...compatibility constraints among blood types, blood inventory management does not fit well the standard inventory control framework. In reservoir management
Small parametric model for nonlinear dynamics of large scale cyclogenesis with wind speed variations
NASA Astrophysics Data System (ADS)
Erokhin, Nikolay; Shkevov, Rumen; Zolnikova, Nadezhda; Mikhailovskaya, Ludmila
2016-07-01
A numerical investigation is performed of a self-consistent small parametric model (SPM) for regional large-scale cyclogenesis (RLSC), using coupled nonlinear equations for the mean wind speed and the ocean surface temperature in a tropical cyclone (TC). These equations can describe different scenarios of the temporal dynamics of a powerful atmospheric vortex over its full life cycle. The numerical calculations show that a suitable choice of the SPM's input parameters makes it possible to describe the seasonal behavior of regional large-scale cyclogenesis for a given number of TCs during the active season. It is also shown that the SPM can describe the wind speed variations inside the TC. Thus, using the nonlinear small parametric model, it is possible to study the features of the temporal dynamics of RLSC during the active season in a given region and to analyze the relationship between regional cyclogenesis parameters and external factors such as space weather, including the solar activity level and cosmic ray variations.
Optimization algorithms for large-scale multireservoir hydropower systems
Hiew, K.L.
1987-01-01
Five optimization algorithms were rigorously evaluated through application to a hypothetical five-reservoir hydropower system. These algorithms are incremental dynamic programming (IDP), successive linear programming (SLP), the feasible direction method (FDM), optimal control theory (OCT) and objective-space dynamic programming (OSDP). The performance of these algorithms was comparatively evaluated using unbiased, objective criteria, including accuracy of results, rate of convergence, smoothness of the resulting storage and release trajectories, computer time and memory requirements, robustness, and other pertinent secondary considerations. Results show that all the algorithms, with the exception of OSDP, converge to optimum objective values within 1.0% of one another. The highest objective value is obtained by IDP, followed closely by OCT. The computer time required by these algorithms, however, differs by more than two orders of magnitude, ranging from 10 seconds in the case of OCT to a maximum of about 2000 seconds for IDP. With a well-designed penalty scheme to deal with state-space constraints, OCT proves to be the most efficient algorithm based on its overall performance. SLP, FDM, and OCT were applied to a case study of the Mahaweli project, a ten-powerplant system in Sri Lanka.
On the importance of nonlinear couplings in large-scale neutrino streams
Dupuy, Hélène; Bernardeau, Francis E-mail: francis.bernardeau@iap.fr
2015-08-01
We propose a procedure to evaluate the impact of nonlinear couplings on the evolution of massive neutrino streams in the context of large-scale structure growth. Such streams can be described by general nonlinear conservation equations, derived from a multiple-flow perspective, which generalize the conservation equations of non-relativistic pressureless fluids. The relevance of the nonlinear couplings is quantified with the help of the eikonal approximation applied to the subhorizon limit of this system. It highlights the role played by the relative displacements of different cosmic streams and it specifies, for each flow, the spatial scales at which the growth of structure is affected by nonlinear couplings. We found that, at redshift zero, such couplings can be significant for wavenumbers as small as k=0.2 h/Mpc for most of the neutrino streams.
Wildfire Emission, injection height: Development, Optimization, and Large Scale Impact
NASA Astrophysics Data System (ADS)
Paugam, R.; Wooster, M.; Atherton, J.; Beevers, S.; Kitwiroon, N.; Kaiser, J. W.; Remy, S.; Freitas, S. R.
2013-12-01
The evaluation of wildfire emissions in global chemistry transport models is still a subject of debate in the atmospheric community, though some inventories, such as GFAS and GFED, are already available. In particular, none of these approaches currently deals with the injection height induced by buoyant plumes. In this work we aim to set up a 3-dimensional wildfire emission inventory. Our approach is based on the Fire Radiative Power (FRP) product evaluated at a cluster level, coupled with the plume rise model (PRM) originally developed by Saulo Freitas. PRM was developed to take into account the effects of atmospheric stability and latent heat on the plume updraft. Here, the original version is modified: (i) the input convective heat flux and active fire area are directly forced from FRP data derived from a modified version of the Dozier algorithm applied to the MOD12 product, and (ii) the dynamical core of the plume model is modified with a new entrainment scheme inspired by the latest results in shallow convection parametrization. The new parameters introduced are then defined via an optimization procedure based on (i) fire plume characteristics of single fire events extracted from the official MISR plume height project and (ii) atmospheric profiles derived from the ECMWF analysis. Calibration of the new version of PRM is made for Europe and North America. For each geographic zone, fire events are selected from the MISR data set. In particular, it is shown that information extracted from the Terra overpass alone is not enough to guarantee that the injection height of the plume is linked to the FRP measured at the same time. The plume is a dynamical system, and a time delay (related to the atmospheric state) is necessary for the plume behaviour to adjust to changes in FRP. Therefore, multiple overpasses of the same fire from Terra and Aqua are used here to determine fire and plume behaviours, and systems in a steady state at the time of the MISR (central scan of Terra) overpass are selected for the
NASA Astrophysics Data System (ADS)
Li, Judith Yue; Kokkinaki, Amalia; Ghorbanidehno, Hojat; Darve, Eric F.; Kitanidis, Peter K.
2015-12-01
Reservoir monitoring aims to provide snapshots of reservoir conditions and their uncertainties to assist operation management and risk analysis. These snapshots may contain millions of state variables, e.g., pressures and saturations, which can be estimated by assimilating data in real time using the Kalman filter (KF). However, the KF has a computational cost that scales quadratically with the number of unknowns, m, due to the cost of computing and storing the covariance and Jacobian matrices, along with their products. The compressed state Kalman filter (CSKF) adapts the KF for solving large-scale monitoring problems. The CSKF uses N preselected orthogonal bases to compute an accurate rank-N approximation of the covariance that is close to the optimal spectral approximation given by SVD. The CSKF has a computational cost that scales linearly in m and uses an efficient matrix-free approach that propagates uncertainties using N + 1 forward model evaluations, where N≪m. Here we present a generalized CSKF algorithm for nonlinear state estimation problems such as CO2 monitoring. For simultaneous estimation of multiple types of state variables, the algorithm allows selecting bases that represent the variability of each state type. Through synthetic numerical experiments of CO2 monitoring, we show that the CSKF can reproduce the Kalman gain accurately even for large compression ratios (m/N). For a given computational cost, the CSKF uses a robust and flexible compression scheme that gives more reliable uncertainty estimates than the ensemble Kalman filter, which may display loss of ensemble variability leading to suboptimal uncertainty estimates.
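The matrix-free idea behind the CSKF, propagating uncertainty with N + 1 forward model evaluations instead of forming Jacobians, can be sketched with a directional finite-difference loop. This is a generic illustration under stated assumptions (the function names and the simple forward-difference scheme are mine), not the CSKF implementation itself.

```python
def propagate_basis(model, x, basis, eps=1e-6):
    """Matrix-free propagation sketch: one nominal model run plus one
    perturbed run per basis vector (N + 1 evaluations for N bases).
    Each column approximates the directional derivative of the forward
    model along one preselected orthogonal basis vector."""
    fx = model(x)
    cols = []
    for b in basis:  # N preselected orthogonal basis vectors
        xp = [xi + eps * bi for xi, bi in zip(x, b)]
        fp = model(xp)
        cols.append([(a - c) / eps for a, c in zip(fp, fx)])
    return fx, cols
```

For a linear forward model the finite-difference columns recover the model's action on each basis vector essentially exactly, which is why a rank-N covariance approximation can be updated without ever storing an m-by-m matrix.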
NASA Technical Reports Server (NTRS)
Dahl, Milo D.; Hixon, Ray; Mankbadi, Reda R.
2003-01-01
An approximate technique is presented for the prediction of the large-scale turbulent structure sound source in a supersonic jet. A linearized Euler equations code is used to solve for the flow disturbances within and near a jet with a given mean flow. Assuming a normal mode composition for the wave-like disturbances, the linear radial profiles are used in an integration of the Navier-Stokes equations. This results in a set of ordinary differential equations representing the weakly nonlinear self-interactions of the modes along with their interaction with the mean flow. Solutions are then used to correct the amplitude of the disturbances that represent the source of large-scale turbulent structure sound in the jet.
An inertia-free filter line-search algorithm for large-scale nonlinear programming
Chiang, Nai-Yuan; Zavala, Victor M.
2016-02-15
We present a filter line-search algorithm that does not require inertia information of the linear system. This feature enables the use of a wide range of linear algebra strategies and libraries, which is essential to tackle large-scale problems on modern computing architectures. The proposed approach performs curvature tests along the search step to detect negative curvature and to trigger convexification. We prove that the approach is globally convergent and we implement the approach within a parallel interior-point framework to solve large-scale and highly nonlinear problems. Our numerical tests demonstrate that the inertia-free approach is as efficient as inertia detection via symmetric indefinite factorizations. We also demonstrate that the inertia-free approach can lead to reductions in solution time because it reduces the amount of convexification needed.
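The curvature test mentioned above can be sketched as a simple check of the quadratic form along the search direction. The exact test and tolerance used by the authors differ in detail; here `alpha` and the comparison against the step norm are assumptions, and `Wd` stands for the product of the (never-factorized) KKT-system Hessian block with the step.

```python
def needs_convexification(d, Wd, alpha=1e-8):
    """Inertia-free curvature test sketch: flag a search step d whose
    curvature d^T W d is not sufficiently positive relative to ||d||^2,
    signaling that regularization (convexification) should be triggered
    instead of relying on inertia information from a factorization."""
    dWd = sum(di * wi for di, wi in zip(d, Wd))
    dd = sum(di * di for di in d)
    return dWd < alpha * dd
```

The appeal, as the abstract notes, is that this needs only a matrix-vector product, so any linear solver can be used, including ones that never reveal inertia.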
NASA Astrophysics Data System (ADS)
Tong, Shaocheng; Xu, Yinyin; Li, Yongming
2015-06-01
This paper is concerned with the problem of adaptive fuzzy decentralised output-feedback control for a class of uncertain stochastic nonlinear pure-feedback large-scale systems with completely unknown functions and mismatched interconnections, without requiring the states to be available for controller design. With fuzzy logic systems approximating the unknown nonlinear functions, a fuzzy state observer is designed to estimate the unmeasured states. The nonlinear filtered signals are then incorporated into the backstepping recursive design, and an adaptive fuzzy decentralised output-feedback control scheme is developed. It is proved that the filter system converges to a small neighbourhood of the origin under an appropriate choice of the design parameters. Simulation studies are included to illustrate the effectiveness of the proposed approach.
Real-time, large scale optimization of water network systems using a subdomain approach.
van Bloemen Waanders, Bart Gustaaf; Biegler, Lorenz T.; Laird, Carl Damon
2005-03-01
Certain classes of dynamic network problems can be modeled by a set of hyperbolic partial differential equations describing behavior along network edges and a set of differential and algebraic equations describing behavior at network nodes. In this paper, we demonstrate real-time performance for optimization problems in drinking water networks. While optimization problems subject to partial differential, differential, and algebraic equations can be solved with a variety of techniques, efficient solutions are difficult for large network problems with many degrees of freedom and variable bounds. Sequential optimization strategies can be inefficient for this problem due to the high cost of computing derivatives with respect to many degrees of freedom. Simultaneous techniques can be more efficient, but are difficult because of the need to solve a large nonlinear program, one that may be too large for current solvers. This study describes a dynamic optimization formulation for estimating contaminant sources in drinking water networks, given concentration measurements at various network nodes. We achieve real-time performance by combining an efficient large-scale nonlinear programming algorithm with two problem reduction techniques. D'Alembert's principle can be applied to the partial differential equations governing behavior along the network edges (distribution pipes). This allows us to approximate the time-delay relationships between network nodes, removing the need to discretize along the length of the pipes. The efficiency of this approach alone, however, still depends on the size of the network and does not scale indefinitely to larger network models. We therefore further reduce the problem size with a subdomain approach, solving smaller inversion problems on a geographic window around the area of contamination. We illustrate the effectiveness of this overall approach and these reduction techniques on an actual metropolitan water network model.
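The pipe-reduction idea, replacing the discretized PDE along each pipe with a time-delay relationship between its end nodes, can be sketched in a few lines. The function name, the zero initial condition, and the integer-step delay are illustrative assumptions; the actual formulation works with the network's flow-dependent travel times.

```python
def node_concentration(upstream_history, delay_steps, t):
    """Time-delay sketch of the pipe reduction: the downstream
    concentration at time step t equals the upstream concentration
    delay_steps earlier (the pipe's travel time), so no spatial
    discretization along the pipe length is needed."""
    if t < delay_steps:
        return 0.0  # before the front arrives, assume clean water
    return upstream_history[t - delay_steps]
```

In the optimization model, each such relation becomes one algebraic constraint per pipe per time step, which is what collapses the problem size relative to a fully discretized pipe.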
Carey, G.F.; Young, D.M.
1993-12-31
The program outlined here is directed to research on methods, algorithms, and software for distributed parallel supercomputers. Of particular interest are finite element methods and finite difference methods together with sparse iterative solution schemes for scientific and engineering computations of very large-scale systems. Both linear and nonlinear problems will be investigated. In the nonlinear case, applications with bifurcation to multiple solutions will be considered using continuation strategies. The parallelizable numerical methods of particular interest are a family of partitioning schemes embracing domain decomposition, element-by-element strategies, and multi-level techniques. The methods will be further developed incorporating parallel iterative solution algorithms with associated preconditioners in parallel computer software. The schemes will be implemented on distributed memory parallel architectures such as the CRAY MPP, Intel Paragon, the NCUBE3, and the Connection Machine. We will also consider other new architectures such as the Kendall-Square (KSQ) and proposed machines such as the TERA. The applications will focus on large-scale three-dimensional nonlinear flow and reservoir problems with strong convective transport contributions. These are legitimate grand challenge class computational fluid dynamics (CFD) problems of significant practical interest to DOE. The methods developed and algorithms will, however, be of wider interest.
Destruction of large-scale magnetic field in non-linear simulations of the shear dynamo
NASA Astrophysics Data System (ADS)
Teed, Robert J.; Proctor, Michael R. E.
2016-05-01
The Sun's magnetic field exhibits coherence in space and time on much larger scales than the turbulent convection that ultimately powers the dynamo. In the past the α-effect (mean-field) concept has been used to model the solar cycle, but recent work has cast doubt on the validity of the mean-field ansatz under solar conditions. This indicates that one should seek an alternative mechanism for generating large-scale structure. One possibility is the recently proposed `shear dynamo' mechanism, where large-scale magnetic fields are generated in the presence of a simple shear. Further investigation of this proposition is required, however, because work has so far been focused on the linear regime with a uniform shear profile. In this paper we report results of the extension of the original shear dynamo model into the non-linear regime. We find that whilst large-scale structure can initially persist into the saturated regime, in several of our simulations it is destroyed via a large increase in kinetic energy. This result casts doubt on the ability of the simple uniform shear dynamo mechanism to act as an alternative to the α-effect in solar conditions.
Peloso, Marco; Pietroni, Massimo E-mail: pietroni@pd.infn.it
2013-05-01
We discuss the constraints imposed on the nonlinear evolution of the Large Scale Structure (LSS) of the universe by galilean invariance, the symmetry relevant on subhorizon scales. Using Ward identities associated to the invariance, we derive fully nonlinear consistency relations between statistical correlators of the density and velocity perturbations, such as the power spectrum and the bispectrum. These relations are valid up to O(f{sub NL}{sup 2}) corrections. We then show that most of the semi-analytic methods proposed so far to resum the perturbative expansion of the LSS dynamics fail to fulfill the constraints imposed by galilean invariance, and are therefore susceptible to non-physical infrared effects. Finally, we identify and discuss a nonperturbative semi-analytical scheme which is manifestly galilean invariant at any order of its expansion.
THREE-POINT PHASE CORRELATIONS: A NEW MEASURE OF NONLINEAR LARGE-SCALE STRUCTURE
Wolstenhulme, Richard; Bonvin, Camille; Obreschkow, Danail
2015-05-10
We derive an analytical expression for a novel large-scale structure observable: the line correlation function. The line correlation function, which is constructed from the three-point correlation function of the phase of the density field, is a robust statistical measure allowing the extraction of information in the nonlinear and non-Gaussian regime. We show that, in perturbation theory, the line correlation is sensitive to the coupling kernel F{sub 2}, which governs the nonlinear gravitational evolution of the density field. We compare our analytical expression with results from numerical simulations and find a 1σ agreement for separations r ≳ 30 h{sup −1} Mpc. Fitting formulae for the power spectrum and the nonlinear coupling kernel at small scales allow us to extend our prediction into the strongly nonlinear regime, where we find a 1σ agreement with the simulations for r ≳ 2 h{sup −1} Mpc. We discuss the advantages of the line correlation relative to standard statistical measures like the bispectrum. Unlike the latter, the line correlation is independent of the bias, in the regime where the bias is local and linear. Furthermore, the variance of the line correlation is independent of the Gaussian variance on the modulus of the density field. This suggests that the line correlation can probe more precisely the nonlinear regime of gravity, with less contamination from the power spectrum variance.
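For reference, the construction described above can be written schematically: the phase field is the density field whitened in Fourier space, and the line correlation is a three-point function of that field along a line. This is a sketch of the structure only; normalization prefactors and any wavenumber cut-offs used by the authors are omitted, and the proportionality is assumed rather than quoted.

```latex
\epsilon(\mathbf{x}) \;=\; \int \frac{\mathrm{d}^3 k}{(2\pi)^3}\,
  \frac{\delta(\mathbf{k})}{|\delta(\mathbf{k})|}\, e^{i\mathbf{k}\cdot\mathbf{x}},
\qquad
\ell(r) \;\propto\; \bigl\langle\, \epsilon(\mathbf{x}-\mathbf{r})\,
  \epsilon(\mathbf{x})\, \epsilon(\mathbf{x}+\mathbf{r}) \,\bigr\rangle .
```

Because |δ(k)| is divided out, ε carries no amplitude (power spectrum) information, which is why the statistic is insensitive to local linear bias and to the Gaussian variance of the modulus, as the abstract states.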
Nonlinear Seismic Correlation Analysis of the JNES/NUPEC Large-Scale Piping System Tests.
Nie,J.; DeGrassi, G.; Hofmayer, C.; Ali, S.
2008-06-01
The Japan Nuclear Energy Safety Organization/Nuclear Power Engineering Corporation (JNES/NUPEC) large-scale piping test program has provided valuable new test data on high level seismic elasto-plastic behavior and failure modes for typical nuclear power plant piping systems. The component and piping system tests demonstrated the strain ratcheting behavior that is expected to occur when a pressurized pipe is subjected to cyclic seismic loading. Under a collaboration agreement between the US and Japan on seismic issues, the US Nuclear Regulatory Commission (NRC)/Brookhaven National Laboratory (BNL) performed a correlation analysis of the large-scale piping system tests using detailed state-of-the-art nonlinear finite element models. Techniques are introduced to develop material models that can closely match the test data. The shaking table motions are examined. The analytical results are assessed in terms of the overall system responses and the strain ratcheting behavior at an elbow. The paper concludes with insights into the accuracy of the analytical methods for use in performance assessments of highly nonlinear piping systems under large seismic motions.
Narimani, Zahra; Beigy, Hamid; Ahmad, Ashar; Masoudi-Nejad, Ali; Fröhlich, Holger
2017-01-01
Inferring the structure of molecular networks from time series protein or gene expression data provides valuable information about the complex biological processes of the cell. Causal network structure inference has been approached using different methods in the past. Most causal network inference techniques, such as Dynamic Bayesian Networks and ordinary differential equations, are limited by their computational complexity and thus make large scale inference infeasible. This is specifically true if a Bayesian framework is applied in order to deal with the unavoidable uncertainty about the correct model. We devise a novel Bayesian network reverse engineering approach using ordinary differential equations with the ability to include non-linearity. Besides modeling arbitrary, possibly combinatorial and time dependent perturbations with unknown targets, one of our main contributions is the use of Expectation Propagation, an algorithm for approximate Bayesian inference over large scale network structures in short computation time. We further explore the possibility of integrating prior knowledge into network inference. We evaluate the proposed model on DREAM4 and DREAM8 data and find it competitive against several state-of-the-art existing network inference methods.
Fault-Tolerant Tracker for Interconnected Large-Scale Nonlinear Systems with Input Constraint
NASA Astrophysics Data System (ADS)
Shiu, Y. C.; Tsai, J. S. H.; Guo, S. M.; Shieh, L. S.; Han, Z.
This paper presents a decentralized fault-tolerant tracker based on model predictive control (MPC) for a class of unknown interconnected large-scale sampled-data nonlinear systems. Because of the computational requirements of MPC and because the system information is unknown, the observer/Kalman filter identification (OKID) method is utilized to determine appropriate decentralized (low-)order discrete-time linear models. Then, to overcome the effect of modeling error on the identified linear model of each subsystem, improved observers with the high-gain property, based on the digital redesign approach, are presented. Once a fault is detected in a decentralized controller, one of the backup control configurations in the corresponding decentralized subsystem is switched in using the soft-switching approach. Thus, decentralized fault-tolerant control with the closed-loop decoupling property can be achieved through this approach with a high-gain decentralized observer/tracker.
NASA Astrophysics Data System (ADS)
Ling, Mei Mei; Leong, Wah June
2014-12-01
In this paper, we make a modification to the standard conjugate gradient method so that its search direction satisfies the sufficient descent condition. We prove that the modified conjugate gradient method is globally convergent under Armijo line search. Numerical results show that the proposed conjugate gradient method is efficient compared to some of its standard counterparts for large-scale unconstrained optimization.
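The Armijo line search used in the convergence proof above is standard and can be sketched directly. The modification to the conjugate gradient direction itself is the paper's contribution and is not reproduced here; the backtracking factor `beta` and the sufficient-decrease constant `c` are conventional choices, not taken from the paper.

```python
def armijo_step(f, x, d, g, beta=0.5, c=1e-4):
    """Armijo backtracking line search along a descent direction d.
    Assumes g is the gradient of f at x and that g . d < 0 (the
    sufficient descent condition guarantees this for the modified CG
    direction). Halves the step until the Armijo sufficient-decrease
    inequality f(x + t d) <= f(x) + c t (g . d) holds."""
    fx = f(x)
    slope = sum(gi * di for gi, di in zip(g, d))  # directional derivative
    t = 1.0
    while f([xi + t * di for xi, di in zip(x, d)]) > fx + c * t * slope:
        t *= beta
    return t
```

On f(x) = x^2 at x = 1 with steepest-descent direction d = -2, the full step overshoots and one halving lands exactly at the minimizer, so the search returns t = 0.5.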
A New Large-Scale Global Optimization Method and Its Application to Lennard-Jones Problems
1992-11-01
stochastic methods. Computational results on Lennard-Jones problems show that the new method is considerably more successful than any other method that...our method does not find as good a solution as has been found by the best special-purpose methods for Lennard-Jones problems. This illustrates the inherent difficulty of large-scale global optimization.
Li, Yong; Yuan, Gonglin; Wei, Zengxin
2015-01-01
In this paper, a trust-region algorithm is proposed for large-scale nonlinear equations, where the limited-memory BFGS (L-M-BFGS) update matrix is used in the trust-region subproblem to improve the effectiveness of the algorithm for large-scale problems. The global convergence of the presented method is established under suitable conditions. The numerical results on the test problems show that the method is competitive with the norm method.
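The trust-region machinery around the L-M-BFGS subproblem follows the standard radius-update pattern, which can be sketched as below. The thresholds 0.25/0.75 and the shrink/expand factors are the common textbook choices, assumed here rather than taken from the paper.

```python
def update_radius(delta, rho, step_norm, delta_max=10.0):
    """Standard trust-region radius update sketch. rho is the ratio of
    actual to predicted reduction for the trial step; step_norm is the
    length of that step. Poor agreement shrinks the region; good
    agreement on a step that hit the boundary expands it (capped)."""
    if rho < 0.25:
        return 0.25 * delta  # model untrustworthy: shrink
    if rho > 0.75 and step_norm >= delta - 1e-12:
        return min(2.0 * delta, delta_max)  # model good at the boundary: expand
    return delta  # otherwise keep the radius
```

The point of the limited-memory update in this setting is that the subproblem's quadratic model needs only O(n) storage per stored pair, keeping the per-iteration cost compatible with large-scale problems.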
NASA Astrophysics Data System (ADS)
Wang, J.; Cai, X.
2007-12-01
A water resources system can be defined as a large-scale spatial system within which a distributed ecological system interacts with the stream network and the groundwater system. In water resources management, the causative factors, and hence the solutions to be developed, have a significant spatial dimension. This motivates a modeling analysis of water resources management within a spatial analytical framework, where data are usually geo-referenced and in the form of a map. One of the important functions of geographic information systems (GIS) is to identify spatial patterns of environmental variables. The role of spatial patterns in water resources management has been well established in the literature, particularly regarding how to design better spatial patterns for satisfying the designated objectives of water resources management. Evolutionary algorithms (EA) have been demonstrated to be successful in solving complex optimization models for water resources management due to their flexibility to incorporate complex simulation models in the optimal search procedure. The idea of combining GIS and EA motivates the development and application of spatial evolutionary algorithms (SEA). SEA assimilates spatial information into EA, and even changes the representation and operators of EA. In an EA used for water resources management, the mathematical optimization model should be modified to account for spatial patterns; however, spatial patterns are usually implicit, and it is difficult to impose appropriate patterns on spatial data. It is also difficult to express complex spatial patterns through explicit constraints in the EA. GIS can help identify spatial linkages and correlations based on spatial knowledge of the problem. These linkages are incorporated in the fitness function to favor compatible vegetation distributions. Unlike a regular GA for spatial models, the SEA employs a special hierarchical hyper-population and spatial genetic operators.
Test Problems for Large-Scale Multiobjective and Many-Objective Optimization.
Cheng, Ran; Jin, Yaochu; Olhofer, Markus; Sendhoff, Bernhard
2016-08-26
The interests in multiobjective and many-objective optimization have been rapidly increasing in the evolutionary computation community. However, most studies on multiobjective and many-objective optimization are limited to small-scale problems, despite the fact that many real-world multiobjective and many-objective optimization problems may involve a large number of decision variables. As has been evident in the history of evolutionary optimization, the development of evolutionary algorithms (EAs) for solving a particular type of optimization problems has undergone a co-evolution with the development of test problems. To promote the research on large-scale multiobjective and many-objective optimization, we propose a set of generic test problems based on design principles widely used in the literature of multiobjective and many-objective optimization. In order for the test problems to be able to reflect challenges in real-world applications, we consider mixed separability between decision variables and nonuniform correlation between decision variables and objective functions. To assess the proposed test problems, six representative evolutionary multiobjective and many-objective EAs are tested on the proposed test problems. Our empirical results indicate that although the compared algorithms exhibit slightly different capabilities in dealing with the challenges in the test problems, none of them are able to efficiently solve these optimization problems, calling for the need for developing new EAs dedicated to large-scale multiobjective and many-objective optimization.
Optimization of large-scale heterogeneous system-of-systems models.
Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane; Lee, Herbert K. H.; Hart, William Eugene; Gray, Genetha Anne; Woodruff, David L.
2012-01-01
Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.
Friedman, A.
1996-12-01
The summer program in Large Scale Optimization concentrated largely on process engineering, aerospace engineering, inverse problems and optimal design, and molecular structure and protein folding. The program brought together application people, optimizers, and mathematicians with interest in learning about these topics. Three proceedings volumes are being prepared. The year in Materials Sciences deals with disordered media and percolation, phase transformations, composite materials, microstructure; topological and geometric methods as well as statistical mechanics approach to polymers (included were Monte Carlo simulation for polymers); miscellaneous other topics such as nonlinear optical material, particulate flow, and thin film. All these activities saw strong interaction among material scientists, mathematicians, physicists, and engineers. About 8 proceedings volumes are being prepared.
Decentralized Adaptive Neural Output-Feedback DSC for Switched Large-Scale Nonlinear Systems.
Long, Lijun; Zhao, Jun
2016-03-08
In this paper, for a class of switched large-scale uncertain nonlinear systems with unknown control coefficients and unmeasurable states, a switched-dynamic-surface-based decentralized adaptive neural output-feedback control approach is developed. The proposed approach extends the classical dynamic surface control (DSC) technique from the nonswitched to the switched setting by designing switched first-order filters, which overcomes the problem of multiple "explosions of complexity." Also, a dual common coordinate transformation of all subsystems is exploited to avoid the individual coordinate transformations for subsystems that are required when applying the backstepping recursive design scheme. Nussbaum-type functions are utilized to handle the unknown control coefficients, and a switched neural network observer is constructed to estimate the unmeasurable states. By combining the average dwell time method with backstepping and the DSC technique, decentralized adaptive neural controllers for the subsystems are explicitly designed. It is proved that the proposed approach guarantees semiglobal uniform ultimate boundedness of all signals in the closed-loop system under a class of switching signals with average dwell time, and convergence of the tracking errors to a small neighborhood of the origin. A two-inverted-pendulum system is provided to demonstrate the effectiveness of the proposed method.
A large scale application of an optimal deterministic hydrothermal scheduling algorithm
Carneiro, A.A.F.M.; Soares, S.; Bond, P.S.
1990-02-01
This paper presents an application of a deterministic optimization algorithm in the hydrothermal scheduling of the large scale Brazilian south-southeast interconnected system, composed of 51 hydro and 12 thermal plants, corresponding to 45 GW of installed capacity. The application considers the system operational conditions according to the 1986 operational plan coordinated by the Brazilian electric holding company. The employed algorithm is based on a network flow approach especially developed for hydrothermal scheduling. For the south-southeast interconnected system the problem formulation suggests a primal decomposition optimization approach.
Efficient Interpretation of Large-Scale Real Data by Static Inverse Optimization
NASA Astrophysics Data System (ADS)
Zhang, Hong; Ishikawa, Masumi
We have previously proposed a methodology for static inverse optimization to interpret real data from the viewpoint of optimization. In this paper we propose a method for efficiently generating constraints by divide-and-conquer to interpret large-scale data by static inverse optimization. It radically decreases the computational cost of generating constraints by deleting non-Pareto-optimal data from the given data. To evaluate the effectiveness of the proposed method, simulation experiments using 3-D artificial data are carried out. As an application to real data, the criterion functions underlying the decision making of about 5,000 tenants living along the Yamanote and Soubu-Chuo lines in Tokyo are estimated, providing an interpretation of rented-housing data from the viewpoint of optimization.
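The pruning step described above (discarding non-Pareto-optimal points before constraints are generated) can be sketched as a minimal quadratic-time filter in Python; the divide-and-conquer acceleration proposed in the paper is not reproduced here.

```python
def pareto_filter(points):
    """Keep only points not dominated by any other point.

    A point p is dominated if some other point q is at least as good
    in every coordinate and strictly better in at least one
    (here "better" means smaller, i.e. minimization).
    """
    def dominates(q, p):
        return all(qi <= pi for qi, pi in zip(q, p)) and any(
            qi < pi for qi, pi in zip(q, p))

    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

data = [(1, 5), (2, 4), (3, 3), (4, 4), (5, 1), (2, 6)]
front = pareto_filter(data)  # (4, 4) and (2, 6) are dominated and removed
```

For minimization data, the surviving points are exactly those for which no other point is at least as good in every coordinate and strictly better in one; the divide-and-conquer variant in the paper reduces the quadratic cost of this naive filter.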
NASA Astrophysics Data System (ADS)
Hasegawa, Mikio; Tran, Ha Nguyen; Miyamoto, Goh; Murata, Yoshitoshi; Harada, Hiroshi; Kato, Shuzo
We propose a neurodynamical approach to a large-scale optimization problem in Cognitive Wireless Clouds, in which a huge number of mobile terminals with multiple different air interfaces autonomously utilize the most appropriate infrastructure wireless networks, by sensing available wireless networks, selecting the most appropriate one, and reconfiguring themselves with seamless handover to the target networks. To deal with such a cognitive radio network, game theory has been applied to analyze the stability of the dynamical systems arising from the mobile terminals' distributed behaviors, but it is not a tool for globally optimizing the state of the network. As a natural optimization dynamical-system model suitable for large-scale complex systems, we introduce neural network dynamics, which converge to an optimal state because their energy function continually decreases. In this paper, we apply such neurodynamics to the optimization problem of radio access technology selection. We compose a neural network that solves the problem, and we show that it is possible to improve the total average throughput simply by using distributed and autonomous neuron updates on the terminal side.
Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems
Ghattas, Omar
2013-10-15
The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.
Using Agent Base Models to Optimize Large Scale Network for Large System Inventories
NASA Technical Reports Server (NTRS)
Shameldin, Ramez Ahmed; Bowling, Shannon R.
2010-01-01
The aim of this paper is to use Agent Based Models (ABM) to optimize the handling capabilities of large-scale networks for large system inventories and to implement strategies for reducing capital expenses. The models in this paper use computational algorithms and procedures implemented in Matlab to simulate agent-based models, run on computing clusters that provide the high-performance parallel computation the simulations require. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.
Large scale test simulations using the Virtual Environment for Test Optimization (VETO)
Klenke, S.E.; Heffelfinger, S.R.; Bell, H.J.; Shierling, C.L.
1997-10-01
The Virtual Environment for Test Optimization (VETO) is a set of simulation tools under development at Sandia to enable test engineers to do computer simulations of tests. The tool set utilizes analysis codes and test information to optimize design parameters and to provide an accurate model of the test environment, which aids in maximizing test performance, training, and safety. Previous VETO efforts have included the development of two structural dynamics simulation modules that provide design and optimization tools for modal and vibration testing. These modules have allowed test engineers to model and simulate complex laboratory testing, to evaluate dynamic response behavior, and to investigate system testability. Further development of the VETO tool set will address the accurate modeling of large scale field test environments at Sandia. These field test environments provide weapon system certification capabilities and have different simulation requirements than those of laboratory testing.
Integration of Large-Scale Optimization and Game Theory for Sustainable Water Quality Management
NASA Astrophysics Data System (ADS)
Tsao, J.; Li, J.; Chou, C.; Tung, C.
2009-12-01
Sustainable water quality management requires total mass control of pollutant discharge, based both on not exceeding the assimilative capacity of a river and on equity among generations. The stream assimilative capacity is the carrying capacity of a river, i.e. the maximum waste load that does not violate the water quality standard, and the spirit of total mass control is to optimize the waste load allocation among subregions. Toward the goal of sustainable watershed development, this study uses large-scale optimization theory to optimize profit and to find the marginal values of loadings as a reference for a fair price; the equilibrium for the whole watershed is then sought through water quality trading. Game theory, in turn, plays an important role in maximizing both individual and overall profits. This study shows that a water quality trading market is viable in some situations and yields a better outcome for all participants.
Maximum-entropy large-scale structures of Boolean networks optimized for criticality
NASA Astrophysics Data System (ADS)
Möller, Marco; Peixoto, Tiago P.
2015-04-01
We construct statistical ensembles of modular Boolean networks that are constrained to lie at the critical line between frozen and chaotic dynamic regimes. The ensembles are maximally random given the imposed constraints, and thus represent null models of critical networks. By varying the network density and the entropic cost associated with biased Boolean functions, the ensembles undergo several phase transitions. The observed structures range from fully random to several ordered ones, including a prominent core-periphery-like structure, and an 'attenuated' two-group structure, where the network is divided in two groups of nodes, and one of them has Boolean functions with very low sensitivity. This shows that such simple large-scale structures are the most likely to occur when optimizing for criticality, in the absence of any other constraint or competing optimization criteria.
A modular approach to large-scale design optimization of aerospace systems
NASA Astrophysics Data System (ADS)
Hwang, John T.
Gradient-based optimization and the adjoint method form a synergistic combination that enables the efficient solution of large-scale optimization problems. Though the gradient-based approach struggles with non-smooth or multi-modal problems, the capability to efficiently optimize up to tens of thousands of design variables provides a valuable design tool for exploring complex tradeoffs and finding unintuitive designs. However, the widespread adoption of gradient-based optimization is limited by the implementation challenges for computing derivatives efficiently and accurately, particularly in multidisciplinary and shape design problems. This thesis addresses these difficulties in two ways. First, to deal with the heterogeneity and integration challenges of multidisciplinary problems, this thesis presents a computational modeling framework that solves multidisciplinary systems and computes their derivatives in a semi-automated fashion. This framework is built upon a new mathematical formulation developed in this thesis that expresses any computational model as a system of algebraic equations and unifies all methods for computing derivatives using a single equation. The framework is applied to two engineering problems: the optimization of a nanosatellite with 7 disciplines and over 25,000 design variables; and simultaneous allocation and mission optimization for commercial aircraft involving 330 design variables, 12 of which are integer variables handled using the branch-and-bound method. In both cases, the framework makes large-scale optimization possible by reducing the implementation effort and code complexity. The second half of this thesis presents a differentiable parametrization of aircraft geometries and structures for high-fidelity shape optimization. Existing geometry parametrizations are not differentiable, or they are limited in the types of shape changes they allow. This is addressed by a novel parametrization that smoothly interpolates aircraft
Rakshit, Sourav; Ananthasuresh, G K
2010-02-07
We present a new computationally efficient method for large-scale polypeptide folding using coarse-grained elastic networks and gradient-based continuous optimization techniques. The folding is governed by minimization of energy based on Miyazawa-Jernigan contact potentials. Using this method we are able to substantially reduce the computation time on ordinary desktop computers for simulation of polypeptide folding starting from a fully unfolded state. We compare our results with available native state structures from Protein Data Bank (PDB) for a few de-novo proteins and two natural proteins, Ubiquitin and Lysozyme. Based on our simulations we are able to draw the energy landscape for a small de-novo protein, Chignolin. We also use two well known protein structure prediction software, MODELLER and GROMACS to compare our results. In the end, we show how a modification of normal elastic network model can lead to higher accuracy and lower time required for simulation.
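The core computational step described above, gradient-based minimization of a coarse-grained elastic-network energy, can be illustrated with a toy bead-chain model; the Miyazawa-Jernigan contact potentials and the coarse-graining details of the paper are omitted, so this is only an illustrative sketch.

```python
import numpy as np

def relax_chain(pos, rest_len=1.0, k=1.0, steps=500, lr=0.05):
    """Gradient descent on the elastic energy of a bead chain,
    E = sum_i 0.5 * k * (|r_{i+1} - r_i| - rest_len)**2.
    A toy stand-in for the coarse-grained elastic network of the paper."""
    pos = np.array(pos, dtype=float)
    for _ in range(steps):
        grad = np.zeros_like(pos)
        for i in range(len(pos) - 1):
            v = pos[i + 1] - pos[i]
            dist = np.linalg.norm(v)
            g = k * (dist - rest_len) * v / dist
            grad[i] -= g          # each spring acts on both endpoints
            grad[i + 1] += g
        pos -= lr * grad
    return pos

# Three beads: one spring too short (0.5) and one too long (2.0);
# consecutive distances relax toward rest_len = 1.0
relaxed = relax_chain([[0.0, 0.0], [0.5, 0.0], [2.5, 0.0]])
```

A real folding simulation would add nonbonded contact terms and minimize in 3-D, but the descent structure (assemble the energy gradient, step downhill) is the same.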
Asymptotically Optimal Transmission Policies for Large-Scale Low-Power Wireless Sensor Networks
I. Ch. Paschalidis; W. Lai; D. Starobinski
2007-02-01
We consider wireless sensor networks with multiple gateways and multiple classes of traffic carrying data generated by different sensory inputs. The objective is to devise joint routing, power control and transmission scheduling policies in order to gather data in the most efficient manner while respecting the needs of different sensing tasks (fairness). We formulate the problem as maximizing the utility of transmissions subject to explicit fairness constraints and propose an efficient decomposition algorithm drawing upon large-scale decomposition ideas in mathematical programming. We show that our algorithm terminates in a finite number of iterations and produces a policy that is asymptotically optimal at low transmission power levels. Furthermore, we establish that the utility maximization problem we consider can, in principle, be solved in polynomial time. Numerical results show that our policy is near-optimal, even at high power levels, and far superior to the best known heuristics at low power levels. We also demonstrate how to adapt our algorithm to accommodate energy constraints and node failures. The approach we introduce can efficiently determine near-optimal transmission policies for dramatically larger problem instances than an alternative enumeration approach.
NASA Astrophysics Data System (ADS)
Xiao, Jinyou; Zhou, Hang; Zhang, Chuanzeng; Xu, Chao
2016-11-01
This paper focuses on the development and engineering applications of a new resolvent sampling based Rayleigh-Ritz method (RSRR) for solving large-scale nonlinear eigenvalue problems (NEPs) in finite element analysis. There are three contributions. First, to generate reliable eigenspaces, the resolvent sampling scheme is derived from Keldysh's theorem for holomorphic matrix functions within a more concise and insightful algebraic framework. Second, based on the new derivation, a two-stage solution strategy is proposed for solving large-scale NEPs, which can greatly improve the computational efficiency and accuracy of the RSRR. The effects of the user-defined parameters are studied, which provides a useful guide for real applications. Finally, the RSRR and the two-stage scheme are applied to two NEPs in the FE analysis of viscoelastic damping structures with up to one million degrees of freedom. The method is versatile, robust, and suitable for parallelization, and can be easily implemented into other packages.
a Stochastic Approach to Multiobjective Optimization of Large-Scale Water Reservoir Networks
NASA Astrophysics Data System (ADS)
Bottacin-Busolin, A.; Worman, A. L.
2013-12-01
A main challenge for the planning and management of water resources is the development of multiobjective strategies for operation of large-scale water reservoir networks. The optimal sequence of water releases from multiple reservoirs depends on the stochastic variability of correlated hydrologic inflows and on various processes that affect water demand and energy prices. Although several methods have been suggested, large-scale optimization problems arising in water resources management are still plagued by the high dimensional state space and by the stochastic nature of the hydrologic inflows. In this work, the optimization of reservoir operation is approached using approximate dynamic programming (ADP) with policy iteration and function approximators. The method is based on an off-line learning process in which operating policies are evaluated for a number of stochastic inflow scenarios, and the resulting value functions are used to design new, improved policies until convergence is attained. A case study is presented of a multi-reservoir system in the Dalälven River, Sweden, which includes 13 interconnected reservoirs and 36 power stations. Depending on the late spring and summer peak discharges, the lowlands adjacent to Dalälven can often be flooded during the summer period, and the presence of stagnating floodwater during the hottest months of the year is the cause of a large proliferation of mosquitos, which is a major problem for the people living in the surroundings. Chemical pesticides are currently being used as a preventive countermeasure, which do not provide an effective solution to the problem and have adverse environmental impacts. In this study, ADP was used to analyze the feasibility of alternative operating policies for reducing the flood risk at a reasonable economic cost for the hydropower companies. To this end, mid-term operating policies were derived by combining flood risk reduction with hydropower production objectives. The performance
Optimization and large scale computation of an entropy-based moment closure
Hauck, Cory D.; Hill, Judith C.; Garrett, C. Kristopher
2015-09-10
We present computational advances and results in the implementation of an entropy-based moment closure, MN, in the context of linear kinetic equations, with an emphasis on heterogeneous and large-scale computing platforms. Entropy-based closures are known in several cases to yield more accurate results than closures based on standard spectral approximations, such as PN, but the computational cost is generally much higher and often prohibitive. Several optimizations are introduced to improve the performance of entropy-based algorithms over previous implementations. These optimizations include the use of GPU acceleration and the exploitation of the mathematical properties of spherical harmonics, which are used as test functions in the moment formulation. To test the emerging high-performance computing paradigm of communication bound simulations, we present timing results at the largest computational scales currently available. Lastly, these results show, in particular, load balancing issues in scaling the MN algorithm that do not appear for the PN algorithm. We also observe that in weak scaling tests, the ratio in time to solution of MN to PN decreases.
Huang, Zhipeng; Wang, Ruxue; Jia, Ding; Maoying, Li; Humphrey, Mark G; Zhang, Chi
2012-03-01
A facile method for the low-cost and large-scale production of silicon nanowires has been developed. Silicon powders were subjected to sequential metal plating and metal-assisted chemical etching, resulting in well-defined silicon nanowires. The morphology and structure of the silicon nanowires were investigated, revealing that single-crystal silicon nanowires with average diameters of 79 ± 35 nm and lengths of more than 10 μm can be fabricated. The silicon nanowires show excellent third-order nonlinear optical properties, with a third-order susceptibility much larger than that of bulk silicon, porous silicon, and silicon nanocrystals embedded in SiO(2).
Hu, Eric Y.; Bouteiller, Jean-Marie C.; Song, Dong; Baudry, Michel; Berger, Theodore W.
2015-01-01
Chemical synapses comprise a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model capable of capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compare its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating complex nonlinear dynamics that were represented in the original mechanistic model, and provide a method to replicate complex and diverse synaptic transmission within neuron network simulations. PMID:26441622
Characterizing the nonlinear growth of large-scale structure in the Universe
Coles; Chiang
2000-07-27
The local Universe displays a rich hierarchical pattern of galaxy clusters and superclusters. The early Universe, however, was almost smooth, with only slight 'ripples' as seen in the cosmic microwave background radiation. Models of the evolution of cosmic structure link these observations through the effect of gravity, because the small initially overdense fluctuations are predicted to attract additional mass as the Universe expands. During the early stages of this expansion, the ripples evolve independently, like linear waves on the surface of deep water. As the structures grow in mass, they interact with each other in nonlinear ways, more like waves breaking in shallow water. We have recently shown how cosmic structure can be characterized by phase correlations associated with these nonlinear interactions, but it was not clear how to use that information to obtain quantitative insights into the growth of structures. Here we report a method of revealing phase information, and show quantitatively how this relates to the formation of filaments, sheets and clusters of galaxies by nonlinear collapse. We develop a statistical method based on information entropy to separate linear from nonlinear effects, and thereby are able to disentangle those aspects of galaxy clustering that arise from initial conditions (the ripples) from the subsequent dynamical evolution.
NASA Astrophysics Data System (ADS)
Kitaura, F. S.; Enßlin, T. A.
2008-09-01
We address the inverse problem of cosmic large-scale structure reconstruction from a Bayesian perspective. For a linear data model, a number of known and novel reconstruction schemes, which differ in terms of the underlying signal prior, data likelihood and numerical inverse extraregularization schemes are derived and classified. The Bayesian methodology presented in this paper tries to unify and extend the following methods: Wiener filtering, Tikhonov regularization, ridge regression, maximum entropy and inverse regularization techniques. The inverse techniques considered here are the asymptotic regularization, the Jacobi, Steepest Descent, Newton-Raphson, Landweber-Fridman and both linear and non-linear Krylov methods based on Fletcher-Reeves, Polak-Ribière and Hestenes-Stiefel conjugate gradients. The structures of the up-to-date highest performing algorithms are presented, based on an operator scheme, which permits one to exploit the power of fast Fourier transforms. Using such an implementation of the generalized Wiener filter in the novel ARGO software package, the different numerical schemes are benchmarked with one-, two- and three-dimensional problems including structured white and Poissonian noise, data windowing and blurring effects. A novel numerical Krylov scheme is shown to be superior in terms of performance and fidelity. These fast inverse methods ultimately will enable the application of sampling techniques to explore complex joint posterior distributions. We outline how the space of the dark matter density field, the peculiar velocity field and the power spectrum can jointly be investigated by a Gibbs-sampling process. Such a method can be applied for the redshift distortions correction of the observed galaxies and for time-reversal reconstructions of the initial density field.
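One member of the family of schemes classified above, Tikhonov regularization solved with linear conjugate gradients, can be sketched as follows. The response matrix `R`, data vector `d`, and regularization weight `lam` are generic placeholders for illustration, not the ARGO implementation.

```python
import numpy as np

def tikhonov_cg(R, d, lam, tol=1e-10, max_iter=200):
    """Solve the normal equations (R^T R + lam*I) s = R^T d with linear
    conjugate gradients; s is the Tikhonov-regularized reconstruction
    of a signal from data d = R s + noise."""
    A = lambda x: R.T @ (R @ x) + lam * x   # matrix-free operator
    b = R.T @ d
    s = np.zeros_like(b)
    r = b - A(s)                            # residual
    p = r.copy()                            # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A(p)
        alpha = rs / (p @ Ap)
        s += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return s

# Example: recover a signal from an overdetermined noisy linear model
rng = np.random.default_rng(1)
R = rng.normal(size=(40, 10))
s_true = rng.normal(size=10)
d = R @ s_true + 0.01 * rng.normal(size=40)
s_rec = tikhonov_cg(R, d, lam=1e-3)
```

The operator form `A(x)` is what lets the FFT-based implementations described in the abstract scale: the matrix is never formed, only its action on a vector.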
Optimizing Cluster Heads for Energy Efficiency in Large-Scale Heterogeneous Wireless Sensor Networks
Gu, Yi; Wu, Qishi; Rao, Nageswara S. V.
2010-01-01
Many complex sensor network applications require deploying a large number of inexpensive and small sensors in a vast geographical region to achieve quality through quantity. Hierarchical clustering is generally considered as an efficient and scalable way to facilitate the management and operation of such large-scale networks and minimize the total energy consumption for prolonged lifetime. Judicious selection of cluster heads for data integration and communication is critical to the success of applications based on hierarchical sensor networks organized as layered clusters. We investigate the problem of selecting sensor nodes in a predeployed sensor network to be the cluster heads to minimize the total energy needed for data gathering. We rigorously derive an analytical formula to optimize the number of cluster heads in sensor networks under uniform node distribution, and propose a Distance-based Crowdedness Clustering algorithm to determine the cluster heads in sensor networks under general node distribution. The results from an extensive set of experiments on a large number of simulated sensor networks illustrate the performance superiority of the proposed solution over clustering schemes based on the k-means algorithm.
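The selection problem can be illustrated with a simple greedy sketch. This is a standard set-cover heuristic chosen purely for illustration, not the authors' Distance-based Crowdedness Clustering algorithm, whose details the abstract does not give.

```python
import math

def greedy_cluster_heads(nodes, radius):
    """Greedily pick cluster heads so that every node lies within
    `radius` of some head; each step chooses the node that covers
    the most still-uncovered nodes (a set-cover heuristic)."""
    def near(a, b):
        return math.dist(a, b) <= radius

    uncovered = set(range(len(nodes)))
    heads = []
    while uncovered:
        best = max(range(len(nodes)),
                   key=lambda i: sum(1 for j in uncovered
                                     if near(nodes[i], nodes[j])))
        heads.append(best)
        uncovered -= {j for j in uncovered if near(nodes[best], nodes[j])}
    return heads

# Two well-separated pairs of nodes yield two cluster heads
nodes = [(0, 0), (0, 1), (5, 5), (5, 6)]
heads = greedy_cluster_heads(nodes, radius=2.0)
```

An energy-aware version would weight the coverage count by residual energy and transmission distance, which is the kind of criterion the derived analytical formula optimizes.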
Robust nonlinear controller design to improve the stability of a large scale photovoltaic system
NASA Astrophysics Data System (ADS)
Islam, Gazi Md. Saeedul
Interest in photovoltaic (PV) power generation systems is increasing rapidly, and the installation of large PV systems, or large groups of PV systems interconnected with the utility grid, is accelerating despite their high cost and low efficiency, driven by environmental concerns and the depletion of fossil fuels. Most PV applications are grid connected. Existing power systems may face stability problems because of the high penetration of PV systems into the grid. Therefore, more stringent grid codes are being imposed by energy regulatory bodies for the grid integration of PV plants. Recent grid codes dictate that PV plants stay connected with the power grid during network faults because of their increased power penetration level. This requires the system to have large disturbance rejection capability to protect the system and provide dynamic grid support. This thesis presents a new control method to enhance the steady-state and transient stabilities of a grid-connected large-scale photovoltaic (PV) system. A new control coordination scheme is also presented to reduce the power mismatch during fault conditions in order to limit the fault currents, which is one of the salient features of this study. The performance of the overall system is analyzed using the laboratory-standard power system simulation software PSCAD/EMTDC.
The Modified HZ Conjugate Gradient Algorithm for Large-Scale Nonsmooth Optimization.
Yuan, Gonglin; Sheng, Zhou; Liu, Wenjie
2016-01-01
In this paper, the Hager and Zhang (HZ) conjugate gradient (CG) method and the modified HZ (MHZ) CG method are presented for large-scale nonsmooth convex minimization. Under some mild conditions, convergence results for the proposed methods are established. Numerical results show that the presented methods are more efficient for large-scale nonsmooth problems; several test problems with dimensions of up to 100,000 variables are solved.
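For orientation, the sketch below shows the Hager-Zhang direction update on a smooth convex quadratic with an exact line search; the paper's contribution is to adapt (modified) HZ updates to nonsmooth objectives, which is not reproduced here. The quadratic test function and the exact-line-search shortcut are illustrative choices.

```python
# Hager-Zhang conjugate-gradient direction update, sketched on a quadratic
# objective with exact line search. For nonsmooth problems the paper replaces
# the gradient with a smoothed surrogate; that machinery is omitted here.

def hz_cg_quadratic(grad, hess_v, x, iters=50, tol=1e-12):
    dot = lambda a, b: sum(u * v for u, v in zip(a, b))
    g = grad(x)
    d = [-gi for gi in g]
    for _ in range(iters):
        if dot(g, g) < tol:
            break
        Hd = hess_v(d)
        t = -dot(g, d) / dot(d, Hd)            # exact line search (quadratic f)
        x = [xi + t * di for xi, di in zip(x, d)]
        g_new = grad(x)
        y = [a - b for a, b in zip(g_new, g)]
        dy = dot(d, y)
        # Hager-Zhang beta: (y - 2 d ||y||^2 / (d'y))' g_new / (d'y)
        beta = (dot(y, g_new) - 2.0 * dot(y, y) * dot(d, g_new) / dy) / dy
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x

# minimize f(x) = (x0 - 1)^2 + 2*(x1 + 2)^2, whose minimizer is (1, -2)
grad = lambda x: [2.0 * (x[0] - 1.0), 4.0 * (x[1] + 2.0)]
hess_v = lambda d: [2.0 * d[0], 4.0 * d[1]]
sol = hz_cg_quadratic(grad, hess_v, [0.0, 0.0])
```

With exact line searches on a quadratic, the HZ beta coincides with the classical Fletcher-Reeves/Polak-Ribière values, so the iteration converges in at most n steps; the HZ form matters for general functions, where it guarantees descent.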
Zhang, Yong-Feng; Chiang, Hsiao-Dong
2016-06-20
A novel three-stage methodology, termed the "consensus-based particle swarm optimization (PSO)-assisted Trust-Tech methodology," to find global optimal solutions for nonlinear optimization problems is presented. It is composed of Trust-Tech methods, consensus-based PSO, and local optimization methods that are integrated to compute a set of high-quality local optimal solutions that can contain the global optimal solution. The proposed methodology compares very favorably with several recently developed PSO algorithms based on a set of small-dimension benchmark optimization problems and 20 large-dimension test functions from the CEC 2010 competition. The analytical basis for the proposed methodology is also provided. Experimental results demonstrate that the proposed methodology can rapidly obtain high-quality optimal solutions that can contain the global optimal solution. The scalability of the proposed methodology is promising.
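A minimal particle swarm optimization loop, of the kind the consensus and Trust-Tech stages build on, is sketched below. The inertia and acceleration coefficients, swarm size, and sphere test function are illustrative; the consensus mechanism and Trust-Tech stability-region computations are not reproduced.

```python
# Minimal global-best PSO on the sphere function. Coefficients (w, c1, c2)
# are common textbook values, not the paper's tuned settings.
import random

def pso(f, dim, n_particles=20, iters=200, seed=1):
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5              # inertia and acceleration weights
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(p) for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = list(pbest[g]), pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = list(pos[i]), v
                if v < gbest_val:
                    gbest, gbest_val = list(pos[i]), v
    return gbest, gbest_val

sphere = lambda x: sum(xi * xi for xi in x)
best, val = pso(sphere, dim=3)
```

In the three-stage methodology, swarms like this supply promising regions, which Trust-Tech then refines into a systematic set of local optima via its stability-boundary machinery.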
Design optimization studies for large-scale contoured beam deployable satellite antennas
NASA Astrophysics Data System (ADS)
Tanaka, Hiroaki
2006-05-01
Satellite communications systems over the past two decades have become more sophisticated and evolved new applications that require much higher flux densities. These new requirements to provide high data rate services to very small user terminals have in turn led to the need for large aperture space antenna systems with higher gain. Conventional parabolic reflectors constructed of metal have become, over time, too massive to support these new missions in a cost effective manner and also have posed problems of fitting within the constrained volume of launch vehicles. Designers of new space antenna systems have thus begun to explore new design options. These design options for advanced space communications networks include such alternatives as inflatable antennas using polyimide materials, antennas constructed of piezo-electric materials, phased array antenna systems (especially in the EHF bands) and deployable antenna systems constructed of wire mesh or cabling systems. This article updates studies being conducted in Japan of such deployable space antenna systems [H. Tanaka, M.C. Natori, Shape control of space antennas consisting of cable networks, Acta Astronautica 55 (2004) 519-527]. In particular, this study shows how the design of such large-scale deployable antenna systems can be optimized based on various factors including the frequency bands to be employed with such innovative reflector design. This study also investigates how contoured beam space antennas can be effectively constructed out of so-called cable networks or mesh-like reflectors. This design can be accomplished via "plane wave synthesis" and the "force density method", iterating the design to achieve the optimum solution. We have concluded that the best design is achieved by plane wave synthesis. Further, we demonstrate that the nodes on the reflector are best determined by a pseudo-inverse calculation of the matrix that can be interpolated so as to achieve the minimum
Huang, Yi-Shao; Liu, Wel-Ping; Wu, Min; Wang, Zheng-Wu
2014-09-01
This paper presents a novel observer-based decentralized hybrid adaptive fuzzy control scheme for a class of large-scale continuous-time multiple-input multiple-output (MIMO) uncertain nonlinear systems whose state variables are unmeasurable. The scheme integrates fuzzy logic systems, state observers, and strictly positive real conditions to deal with three issues in the control of a large-scale MIMO uncertain nonlinear system: algorithm design, controller singularity, and transient response. The design of the hybrid adaptive fuzzy controller is then extended to a general large-scale uncertain nonlinear system. It is shown that the resultant closed-loop large-scale system remains asymptotically stable and the tracking error converges to zero. The advantages of our scheme are demonstrated by simulations.
2014-05-01
global convergence and further show its linear convergence under a variety of scenarios, which cover a wide range of applications. The derived rate of... efficiency, flexibility and applicability for large-scale and distributed optimization problems. We also make important extensions to the convergence
Parallel Optimization of Polynomials for Large-scale Problems in Stability and Control
NASA Astrophysics Data System (ADS)
Kamyar, Reza
In this thesis, we focus on some of the NP-hard problems in control theory. Thanks to the converse Lyapunov theory, these problems can often be modeled as optimization over polynomials. To avoid the problem of intractability, we establish a trade off between accuracy and complexity. In particular, we develop a sequence of tractable optimization problems --- in the form of Linear Programs (LPs) and/or Semi-Definite Programs (SDPs) --- whose solutions converge to the exact solution of the NP-hard problem. However, the computational and memory complexity of these LPs and SDPs grow exponentially with the progress of the sequence - meaning that improving the accuracy of the solutions requires solving SDPs with tens of thousands of decision variables and constraints. Setting up and solving such problems is a significant challenge. The existing optimization algorithms and software are only designed to use desktop computers or small cluster computers --- machines which do not have sufficient memory for solving such large SDPs. Moreover, the speed-up of these algorithms does not scale beyond dozens of processors. This in fact is the reason we seek parallel algorithms for setting-up and solving large SDPs on large cluster- and/or super-computers. We propose parallel algorithms for stability analysis of two classes of systems: 1) Linear systems with a large number of uncertain parameters; 2) Nonlinear systems defined by polynomial vector fields. First, we develop a distributed parallel algorithm which applies Polya's and/or Handelman's theorems to some variants of parameter-dependent Lyapunov inequalities with parameters defined over the standard simplex. The result is a sequence of SDPs which possess a block-diagonal structure. We then develop a parallel SDP solver which exploits this structure in order to map the computation, memory and communication to a distributed parallel environment. Numerical tests on a supercomputer demonstrate the ability of the algorithm to
NASA Astrophysics Data System (ADS)
Nikitenkova, S.; Singh, N.; Stepanyants, Y.
2015-12-01
In this paper, we revisit the problem of modulation stability of quasi-monochromatic wave-trains propagating in media with double dispersion occurring both at small and large wavenumbers. We start with the shallow-water equations derived by Shrira [Izv., Acad. Sci., USSR, Atmos. Ocean. Phys. (Engl. Transl.) 17, 55-59 (1981)], which describe both surface and internal long waves in a rotating fluid. The small-scale (Boussinesq-type) dispersion is assumed to be weak, whereas the large-scale (Coriolis-type) dispersion is considered without any restriction. For waves propagating in one direction only, the considered set of equations reduces to the Gardner-Ostrovsky equation, which is applicable only within a finite range of wavenumbers. We derive the nonlinear Schrödinger equation (NLSE), which describes the evolution of narrow-band wave-trains, and show that within a more general bi-directional equation the wave-trains, similar to those derived from the Ostrovsky equation, are also modulationally stable at relatively small wavenumbers k < kc and unstable at k > kc, where kc is some critical wavenumber. The NLSE derived here has a wider range of applicability: it is valid for arbitrarily small wavenumbers. We present the analysis of the coefficients of the NLSE for different signs of the coefficients of the governing equation and compare them with those derived from the Ostrovsky equation. The analysis shows that for weakly dispersive waves, in the range of parameters where the Gardner-Ostrovsky equation is valid, the cubic nonlinearity does not contribute to the nonlinear coefficient of the NLSE; therefore, the NLSE can be correctly derived from the Ostrovsky equation.
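For reference, the stability dichotomy described here can be stated in the standard NLSE form; the coefficient names below follow the common textbook convention and are not the paper's own notation:

```latex
% One-dimensional NLSE for the narrow-band envelope \psi(x,t):
i\,\psi_t + \frac{\beta}{2}\,\psi_{xx} + \gamma\,|\psi|^2\psi = 0,
\qquad
\text{modulational instability} \;\Longleftrightarrow\; \beta\gamma > 0 .
```

This is the Lighthill criterion: the sign of the product of the dispersion coefficient beta and the nonlinear coefficient gamma, both functions of the carrier wavenumber k, changes at the critical value kc, separating the stable range k < kc from the unstable range k > kc.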
Improved tomographic reconstruction of large-scale real-world data by filter optimization.
Pelt, Daniël M; De Andrade, Vincent
2017-01-01
In advanced tomographic experiments, large detector sizes and large numbers of acquired datasets can make it difficult to process the data in a reasonable time. At the same time, the acquired projections are often limited in some way, for example having a low number of projections or a low signal-to-noise ratio. Direct analytical reconstruction methods are able to produce reconstructions in very little time, even for large-scale data, but the quality of these reconstructions can be insufficient for further analysis in cases with limited data. Iterative reconstruction methods typically produce more accurate reconstructions, but take significantly more time to compute, which limits their usefulness in practice. In this paper, we present the application of the SIRT-FBP method to large-scale real-world tomographic data. The SIRT-FBP method is able to accurately approximate the simultaneous iterative reconstruction technique (SIRT) method by the computationally efficient filtered backprojection (FBP) method, using precomputed experiment-specific filters. We specifically focus on the many implementation details that are important for application on large-scale real-world data, and give solutions to common problems that occur with experimental data. We show that SIRT-FBP filters can be computed in reasonable time, even for large problem sizes, and that precomputed filters can be reused for future experiments. Reconstruction results are given for three different experiments, and are compared with results of popular existing methods. The results show that the SIRT-FBP method is able to accurately approximate iterative reconstructions of experimental data. Furthermore, they show that, in practice, the SIRT-FBP method can produce more accurate reconstructions than standard direct analytical reconstructions with popular filters, without increasing the required computation time.
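The iteration that SIRT-FBP approximates with a precomputed filter is sketched below on a tiny two-pixel system. The matrix, right-hand side, and iteration count are illustrative, and the filter-precomputation step of SIRT-FBP itself is not shown.

```python
# Toy simultaneous iterative reconstruction technique (SIRT) update:
#   x <- x + C A^T R (b - A x),
# where R and C hold inverse row and column sums of the projection matrix A.

def sirt(A, b, iters=500):
    m, n = len(A), len(A[0])
    R = [1.0 / sum(A[i]) for i in range(m)]                       # inverse row sums
    C = [1.0 / sum(A[i][j] for i in range(m)) for j in range(n)]  # inverse col sums
    x = [0.0] * n
    for _ in range(iters):
        res = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        for j in range(n):
            x[j] += C[j] * sum(A[i][j] * R[i] * res[i] for i in range(m))
    return x

# two "projections" of a two-pixel object whose true values are (1, 2)
A = [[1.0, 1.0], [1.0, 2.0]]
b = [3.0, 5.0]
x = sirt(A, b)
```

The expense visible here, one forward and one back projection per iteration, is exactly what the SIRT-FBP filter folds into a single FBP pass for large-scale data.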
Chen, Hanbo; Liu, Tao; Zhao, Yu; Zhang, Tuo; Li, Yujie; Li, Meng; Zhang, Hongmiao; Kuang, Hui; Guo, Lei; Tsien, Joe Z; Liu, Tianming
2015-07-15
Tractography based on diffusion tensor imaging (DTI) data has been used as a tool by a large number of recent studies to investigate structural connectome. Despite its great success in offering unique 3D neuroanatomy information, DTI is an indirect observation with limited resolution and accuracy and its reliability is still unclear. Thus, it is essential to answer this fundamental question: how reliable is DTI tractography in constructing large-scale connectome? To answer this question, we employed neuron tracing data of 1772 experiments on the mouse brain released by the Allen Mouse Brain Connectivity Atlas (AMCA) as the ground-truth to assess the performance of DTI tractography in inferring white matter fiber pathways and inter-regional connections. For the first time in the neuroimaging field, the performance of whole brain DTI tractography in constructing a large-scale connectome has been evaluated by comparison with tracing data. Our results suggested that only with the optimized tractography parameters and the appropriate scale of brain parcellation scheme, can DTI produce relatively reliable fiber pathways and a large-scale connectome. Meanwhile, a considerable amount of errors were also identified in optimized DTI tractography results, which we believe could be potentially alleviated by efforts in developing better DTI tractography approaches. In this scenario, our framework could serve as a reliable and quantitative test bed to identify errors in tractography results which will facilitate the development of such novel tractography algorithms and the selection of optimal parameters.
NASA Astrophysics Data System (ADS)
Shi, Huaitao; Liu, Jianchang; Wu, Yuhou; Zhang, Ke; Zhang, Lixiu; Xue, Peng
2016-04-01
Timely and accurate fault diagnosis is important for improving the dependability of industrial processes. In this study, fault diagnosis of nonlinear and large-scale processes by variable-weighted kernel Fisher discriminant analysis (KFDA) based on improved biogeography-based optimisation (IBBO) is proposed, referred to as IBBO-KFDA, where IBBO is used to determine the parameters of variable-weighted KFDA, and variable-weighted KFDA is used to solve the multi-classification overlapping problem. The main contributions of this work are four-fold. First, a nonlinear fault diagnosis approach with variable-weighted KFDA is developed for maximising separation between the overlapping fault samples. Second, kernel parameters and feature selection of variable-weighted KFDA are simultaneously optimised using IBBO. Third, a single fitness function that combines erroneous diagnosis rate with feature cost is created and serves as the target function in the optimisation problem, and a novel mixed kernel function is introduced to improve the classification capability in the feature space and the diagnosis accuracy of IBBO-KFDA. Fourth, an IBBO approach is developed to obtain better solution quality and faster convergence speed. The proposed IBBO-KFDA method is first used on Tennessee Eastman process benchmark data sets to validate its feasibility and efficiency; it is then applied to diagnose faults of an automatic gauge control system. Simulation results demonstrate that IBBO-KFDA can obtain better kernel parameters and feature vectors with a lower computing cost, higher diagnosis accuracy and better real-time capacity.
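A common way to build a mixed kernel of the sort mentioned here is a convex combination of a Gaussian (RBF) kernel and a polynomial kernel; the sketch below uses that form with illustrative parameters, since the paper's exact mixed-kernel definition and the KFDA/IBBO machinery are not given in the abstract.

```python
# Hypothetical mixed kernel: k(x, y) = rho * RBF + (1 - rho) * polynomial.
# rho, gamma, degree, and coef0 are illustrative; in IBBO-KFDA such
# parameters would themselves be tuned by the optimiser.
import math

def mixed_kernel(x, y, rho=0.5, gamma=0.1, degree=2, coef0=1.0):
    sq = sum((a - b) ** 2 for a, b in zip(x, y))          # squared distance
    rbf = math.exp(-gamma * sq)                           # local component
    poly = (sum(a * b for a, b in zip(x, y)) + coef0) ** degree  # global component
    return rho * rbf + (1.0 - rho) * poly

k = mixed_kernel([1.0, 0.0], [0.0, 1.0])
```

The RBF term gives good local interpolation while the polynomial term gives better extrapolation, which is the usual motivation for mixing them before feeding the Gram matrix to KFDA.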
NASA Astrophysics Data System (ADS)
Wierschem, Nicholas E.; Hubbard, Sean A.; Luo, Jie; Fahnestock, Larry A.; Spencer, Billie F.; McFarland, D. Michael; Quinn, D. Dane; Vakakis, Alexander F.; Bergman, Lawrence A.
2017-02-01
Limiting peak stresses and strains in a structure subjected to high-energy, short-duration transient loadings, such as blasts, is a challenging problem, largely due to the well-known insensitivity of the first few cycles of the structural response to damping. Linear isolation, while a potential solution, requires a very low fundamental natural frequency to be effective, resulting in large nearly-rigid body displacement of the structure, while linear vibration absorbers have little or no effect on the early-time response where relative motions, and thus stresses and strains, are at their highest levels. The problem has become increasingly important in recent years with the expectation of blast-resistance as a design requirement in new construction. In this paper, the problem is examined experimentally and computationally in the context of offset-blast loading applied to a custom-built nine story steel frame structure. A fully-passive response mitigation system consisting of six lightweight, essentially nonlinear vibration absorbers (termed nonlinear energy sinks - NESs) is optimized and deployed on the upper two floors of this structure. Two NESs have vibro-impact nonlinearities and the other four possess smooth but essentially nonlinear stiffnesses. Results of the computational and experimental study demonstrate the efficacy of the proposed passive nonlinear mitigation system to rapidly and efficiently attenuate the global structural response, even at early time (i.e., starting at the first response cycle), thus minimizing the peak demand on the structure. This is achieved by nonlinear redistribution of the blast energy within the modal space through low-to-high energy scattering due to the action of the NESs. The experimental results validate the theoretical predictions.
NASA Astrophysics Data System (ADS)
Bechtold, M.; Tiemeyer, B.; Laggner, A.; Leppelt, T.; Frahm, E.; Belting, S.
2014-04-01
Fluxes of the three main greenhouse gases (GHG) CO2, CH4 and N2O from peat and other organic soils are strongly controlled by water table depth. Information about the spatial distribution of water level is thus a crucial input parameter when upscaling GHG emissions to large scales. Here, we investigate the potential of statistical modeling for the regionalization of water levels in organic soils when data covers only a small fraction of the peatlands of the final map. Our study area is Germany. Phreatic water level data from 53 peatlands in Germany were compiled in a new dataset comprising 1094 dip wells and 7155 years of data. For each dip well, numerous possible predictor variables were determined using nationally available data sources, which included information about land cover, ditch network, protected areas, topography, peatland characteristics and climatic boundary conditions. We applied boosted regression trees to identify dependencies between predictor variables and dip well specific long-term annual mean water level (WL) as well as a transformed form of it (WLt). The latter was obtained by assuming a hypothetical GHG transfer function and is linearly related to GHG emissions. Our results demonstrate that model calibration on WLt is superior. It increases the explained variance of the water level in the sensitive range for GHG emissions and avoids model bias in subsequent GHG upscaling. The final model explained 45% of WLt variance and was built on nine predictor variables that are based on information about land cover, peatland characteristics, drainage network, topography and climatic boundary conditions. Their individual effects on WLt and the observed parameter interactions provide insights into natural and anthropogenic boundary conditions that control water levels in organic soils. Our study also demonstrates that a large fraction of the observed WLt variance cannot be explained by nationally available predictor variables and that predictors with
Optimized Large-scale CMB Likelihood and Quadratic Maximum Likelihood Power Spectrum Estimation
NASA Astrophysics Data System (ADS)
Gjerløw, E.; Colombo, L. P. L.; Eriksen, H. K.; Górski, K. M.; Gruppuso, A.; Jewell, J. B.; Plaszczynski, S.; Wehus, I. K.
2015-11-01
We revisit the problem of exact cosmic microwave background (CMB) likelihood and power spectrum estimation with the goal of minimizing computational costs through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al., and here we develop it into a fully functioning computational framework for large-scale polarization analysis, adopting WMAP as a working example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors, and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loeve and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked WMAP sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum error increase of any single multipole of 3.8% at ℓ ≤ 32 and a maximum shift in the mean values of a joint distribution of an amplitude-tilt model of 0.006σ. This compression reduces the computational cost of a single likelihood evaluation by a factor of 5, from 38 to 7.5 CPU seconds, and it also results in a more robust likelihood by implicitly regularizing nearly degenerate modes. Finally, we use the same compression framework to formulate a numerically stable and computationally efficient variation of the Quadratic Maximum Likelihood implementation, which requires less than 3 GB of memory and 2 CPU minutes per iteration for ℓ ≤ 32, rendering low-ℓ QML CMB power spectrum analysis fully tractable on a standard laptop.
Large-Scale Multi-Objective Optimization for the Management of Seawater Intrusion, Santa Barbara, CA
NASA Astrophysics Data System (ADS)
Stanko, Z. P.; Nishikawa, T.; Paulinski, S. R.
2015-12-01
The City of Santa Barbara, located in coastal southern California, is concerned that excessive groundwater pumping will lead to chloride (Cl) contamination of its groundwater system from seawater intrusion (SWI). In addition, the city wishes to estimate the effect of continued pumping on the groundwater basin under a variety of initial and climatic conditions. A SEAWAT-based groundwater-flow and solute-transport model of the Santa Barbara groundwater basin was optimized to produce optimal pumping schedules assuming 5 different scenarios. Borg, a multi-objective genetic algorithm, was coupled with the SEAWAT model to identify optimal management strategies. The optimization problems were formulated as multi-objective so that the tradeoffs between maximizing pumping, minimizing SWI, and minimizing drawdowns can be examined by the city. Decisions can then be made on a pumping schedule in light of current preferences and climatic conditions. Borg was used to produce Pareto optimal results for all 5 scenarios, which vary in their initial conditions (high water levels, low water levels, or current basin state), simulated climate (normal or drought conditions), and problem formulation (objective equations and decision-variable aggregation). Results show mostly well-defined Pareto surfaces with a few singularities. Furthermore, the results identify the precise pumping schedule per well that was suitable given the desired restriction on drawdown and Cl concentrations. A system of decision-making is then possible based on various observations of the basin's hydrologic states and climatic trends without having to run any further optimizations. In addition, an assessment of selected Pareto-optimal solutions was analyzed with sensitivity information using the simulation model alone. A wide range of possible groundwater pumping scenarios is available and depends heavily on the future climate scenarios and the Pareto-optimal solution selected while managing the pumping wells.
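The tradeoff surfaces reported here are sets of non-dominated solutions; the sketch below shows the basic Pareto-dominance filter used to extract such a set, assuming all objectives are minimized. The (drawdown, chloride) pairs are hypothetical, and Borg's evolutionary search itself is not reproduced.

```python
# Minimal Pareto-dominance filter over objective vectors (all minimized).

def dominates(a, b):
    """True if a is no worse than b in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# hypothetical (drawdown, chloride concentration) pairs for four pumping schedules
pts = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
front = pareto_front(pts)
```

Here (3.0, 4.0) is dominated by (2.0, 3.0) and drops out; the surviving set is the Pareto front from which the city would pick a schedule according to current priorities.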
NASA Astrophysics Data System (ADS)
Bechtold, M.; Tiemeyer, B.; Laggner, A.; Leppelt, T.; Frahm, E.; Belting, S.
2014-09-01
Fluxes of the three main greenhouse gases (GHG) CO2, CH4 and N2O from peat and other soils with high organic carbon contents are strongly controlled by water table depth. Information about the spatial distribution of water level is thus a crucial input parameter when upscaling GHG emissions to large scales. Here, we investigate the potential of statistical modeling for the regionalization of water levels in organic soils when data covers only a small fraction of the peatlands of the final map. Our study area is Germany. Phreatic water level data from 53 peatlands in Germany were compiled in a new data set comprising 1094 dip wells and 7155 years of data. For each dip well, numerous possible predictor variables were determined using nationally available data sources, which included information about land cover, ditch network, protected areas, topography, peatland characteristics and climatic boundary conditions. We applied boosted regression trees to identify dependencies between predictor variables and dip-well-specific long-term annual mean water level (WL) as well as a transformed form (WLt). The latter was obtained by assuming a hypothetical GHG transfer function and is linearly related to GHG emissions. Our results demonstrate that model calibration on WLt is superior. It increases the explained variance of the water level in the sensitive range for GHG emissions and avoids model bias in subsequent GHG upscaling. The final model explained 45% of WLt variance and was built on nine predictor variables that are based on information about land cover, peatland characteristics, drainage network, topography and climatic boundary conditions. Their individual effects on WLt and the observed parameter interactions provide insight into natural and anthropogenic boundary conditions that control water levels in organic soils. Our study also demonstrates that a large fraction of the observed WLt variance cannot be explained by nationally available predictor variables and
Optimization of a Large-scale Microseismic Monitoring Network in Northern Switzerland
NASA Astrophysics Data System (ADS)
Kraft, T.; Husen, S.; Mignan, A.; Bethmann, F.
2011-12-01
We have performed a computer-aided network optimization for a regional-scale microseismic network in northeastern Switzerland. The goal of the optimization was to find the geometry and size of the network that assures a location precision of 0.5 km in the epicenter and 2.0 km in focal depth for earthquakes of magnitude ML >= 1.0, taking into account 67 existing stations in Switzerland, Germany and Austria, and the expected detectability of ML 1 earthquakes in the study area. The optimization was based on the simulated annealing approach of Hardt and Scherbaum (1993), which aims to minimize the volume of the error ellipsoid of the linearized earthquake location problem (D-criterion). We have extended their algorithm: to calculate traveltimes of seismic body waves using a finite-difference raytracer and the three-dimensional velocity model of Switzerland, to calculate seismic body-wave amplitudes at arbitrary stations assuming a Brune source model and using scaling relations recently derived for Switzerland, and to estimate the noise level at arbitrary locations within Switzerland using a first-order ambient seismic noise model based on 14 land-use classes defined by the EU project CORINE and open GIS data. Considering the 67 existing stations, optimizations for networks of 10 to 35 new stations were calculated with respect to 2240 synthetic earthquakes of magnitudes between ML = 0.8-1.1. We incorporated the case of non-detections by considering only earthquake-station pairs with an expected signal-to-noise ratio larger than 10 for the considered body wave. Station noise levels were derived from measured ground motion for existing stations and from the first-order ambient noise model for new sites. The stability of the optimization result was tested by repeated optimization runs with changing initial conditions. Due to the highly nonlinear nature and size of the problem, station locations in the individual solutions show small
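A skeletal simulated-annealing loop for picking new station sites from a candidate grid is sketched below. The proxy objective (mean distance from synthetic epicentres to their fourth-nearest selected station) merely stands in for the real D-criterion, and the candidate grid, synthetic events, and cooling schedule are all illustrative.

```python
# Simulated annealing over station subsets: swap one station at a time,
# accept worse moves with a temperature-dependent probability.
import math
import random

def anneal(candidates, events, k, steps=2000, seed=0):
    rng = random.Random(seed)
    def cost(sel):
        tot = 0.0
        for e in events:
            d = sorted(math.dist(e, s) for s in sel)
            tot += d[min(3, len(d) - 1)]   # distance to 4th-nearest station
        return tot / len(events)
    sel = rng.sample(candidates, k)
    best, best_c = list(sel), cost(sel)
    c = best_c
    for step in range(steps):
        T = 1.0 * (1.0 - step / steps) + 1e-9          # linear cooling
        trial = list(sel)
        trial[rng.randrange(k)] = rng.choice(candidates)  # random swap
        tc = cost(trial)
        if tc < c or rng.random() < math.exp((c - tc) / T):
            sel, c = trial, tc
            if c < best_c:
                best, best_c = list(sel), c
    return best, best_c

cands = [(x, y) for x in range(5) for y in range(5)]
events = [(1.0, 1.0), (3.0, 3.0), (1.0, 3.0), (3.0, 1.0)]
stations, score = anneal(cands, events, k=6)
```

Repeating such runs from different random seeds, as the study does, exposes how flat the objective is near the optimum: many station configurations achieve nearly the same cost.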
Model-Constrained Optimization Methods for Reduction of Parameterized Large-Scale Systems
2007-05-01
...expensive to solve, e.g. for applications such as optimal design or probabilistic analyses. Model order reduction is a powerful tool that permits the
Gradient-Based Aerodynamic Shape Optimization Using ADI Method for Large-Scale Problems
NASA Technical Reports Server (NTRS)
Pandya, Mohagna J.; Baysal, Oktay
1997-01-01
A gradient-based shape optimization methodology, intended for practical three-dimensional aerodynamic applications, has been developed. It is based on quasi-analytical sensitivities. The flow analysis is rendered by a fully implicit, finite volume formulation of the Euler equations. The aerodynamic sensitivity equation is solved using the alternating-direction-implicit (ADI) algorithm for memory efficiency. A flexible wing geometry model, based on surface parameterization and planform schedules, is utilized. The present methodology and its components have been tested via several comparisons. Initially, the flow analysis for a wing is compared with those obtained using an unfactored, preconditioned conjugate gradient approach (PCG), and an extensively validated CFD code. Then, the sensitivities computed with the present method have been compared with those obtained using the finite-difference and the PCG approaches. Effects of grid refinement and convergence tolerance on the analysis and shape optimization have been explored. Finally, the new procedure has been demonstrated in the design of a cranked arrow wing at Mach 2.4. Despite the expected increase in computational time, the results indicate that shape optimization problems that require large numbers of grid points can be resolved with a gradient-based approach.
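The cross-check between quasi-analytical and finite-difference sensitivities mentioned here can be illustrated on a toy objective; the cubic function below is a stand-in for the CFD-based objective, and the step size is an illustrative choice.

```python
# Verifying an analytic sensitivity against a central finite difference,
# the same consistency check performed between the quasi-analytical and
# finite-difference gradients. J and its derivative are toy stand-ins.

def J(a):
    """Toy drag-like objective of one shape parameter a."""
    return a ** 3 - 2.0 * a

def dJ_analytic(a):
    return 3.0 * a ** 2 - 2.0

def dJ_fd(a, h=1e-5):
    return (J(a + h) - J(a - h)) / (2.0 * h)    # central difference, O(h^2)

a = 1.3
err = abs(dJ_analytic(a) - dJ_fd(a))
```

For CFD objectives each finite-difference sample costs a full flow solve, which is precisely why the quasi-analytical route pays off as the number of design variables grows.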
Large Scale Multi-area Static/Dynamic Economic Dispatch using Nature Inspired Optimization
NASA Astrophysics Data System (ADS)
Pandit, Manjaree; Jain, Kalpana; Dubey, Hari Mohan; Singh, Rameshwar
2016-07-01
Economic dispatch (ED) ensures that the generation allocation to the power units is carried out such that the total fuel cost is minimized and all operating equality/inequality constraints are satisfied. Classical ED does not take transmission constraints into consideration, but in present restructured power systems the tie-line limits play a very important role in deciding operational policies. ED is a dynamic problem which is performed on-line in the central load dispatch centre under changing load scenarios. The dynamic multi-area ED (MAED) problem is more complex due to the additional tie-line, ramp-rate and area-wise power balance constraints. Nature-inspired (NI) heuristic optimization methods are gaining popularity over traditional methods for complex problems. This work presents modified particle swarm optimization (PSO) based techniques in which parameter automation is used to improve search efficiency by avoiding stagnation at a sub-optimal result. The performance of the PSO variants is validated against the traditional solver GAMS for single-area as well as multi-area economic dispatch on three test cases of a large 140-unit standard test system with complex constraints.
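The core of the PSO approach described above can be sketched in a few lines. The cost curves, generator limits, and demand below are made-up illustrative numbers, and the penalty handling of the power-balance constraint is one common choice, not necessarily the authors' (their variants additionally automate the inertia and acceleration parameters over the iterations).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy quadratic fuel-cost curves a*P^2 + b*P + c for three hypothetical units
a = np.array([0.008, 0.010, 0.012])
b = np.array([7.0, 8.0, 9.0])
c = np.array([200.0, 180.0, 140.0])
pmin, pmax = np.array([50.0, 40.0, 30.0]), np.array([300.0, 250.0, 200.0])
demand = 450.0  # MW to be balanced

def cost(P):
    # fuel cost plus a quadratic penalty for power-balance violation
    return np.sum(a * P**2 + b * P + c) + 1e4 * (np.sum(P) - demand) ** 2

# Plain global-best PSO; time-varying parameter automation is omitted
n_particles, n_iter = 30, 200
X = rng.uniform(pmin, pmax, size=(n_particles, 3))
V = np.zeros_like(X)
pbest, pbest_f = X.copy(), np.array([cost(x) for x in X])
gbest = pbest[np.argmin(pbest_f)]

for _ in range(n_iter):
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V = 0.7 * V + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (gbest - X)
    X = np.clip(X + V, pmin, pmax)  # enforce generator limits
    f = np.array([cost(x) for x in X])
    better = f < pbest_f
    pbest[better], pbest_f[better] = X[better], f[better]
    gbest = pbest[np.argmin(pbest_f)]

print(round(float(np.sum(gbest)), 1))  # total generation should be close to 450 MW
```

The penalty weight trades constraint satisfaction against fuel cost; real dispatch codes typically use repair operators or explicit constraint handling instead.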
Chang, Justin; Karra, Satish; Nakshatrala, Kalyana B.
2016-07-26
It is well-known that the standard Galerkin formulation, which is often the formulation of choice under the finite element method for solving self-adjoint diffusion equations, does not meet maximum principles and the non-negative constraint for anisotropic diffusion equations. Recently, optimization-based methodologies that satisfy maximum principles and the non-negative constraint for steady-state and transient diffusion-type equations have been proposed. To date, these methodologies have been tested only on small-scale academic problems. The purpose of this paper is to systematically study the performance of the non-negative methodology in the context of high performance computing (HPC). PETSc and TAO libraries are, respectively, used for the parallel environment and optimization solvers. For large-scale problems, it is important for computational scientists to understand the computational performance of current algorithms available in these scientific libraries. The numerical experiments are conducted on the state-of-the-art HPC systems, and a single-core performance model is used to better characterize the efficiency of the solvers. Furthermore, our studies indicate that the proposed non-negative computational framework for diffusion-type equations exhibits excellent strong scaling for real-world large-scale problems.
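As a toy illustration of the idea (not the PETSc/TAO implementation), the non-negative constraint can be enforced by replacing the plain linear solve with a bound-constrained least-squares solve; the stiffness system below is a hypothetical 1-D diffusion discretization chosen so that the unconstrained solution goes negative.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical small stiffness system K u = f from a 1-D diffusion grid;
# the plain Galerkin solve can go negative, the constrained solve cannot.
n = 20
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # tridiagonal Laplacian
f = np.zeros(n)
f[2], f[3] = 1.0, -0.8  # a sink term that drives the plain solve negative

u_galerkin = np.linalg.solve(K, f)
u_nonneg, _ = nnls(K, f)  # least-squares solve with u >= 0 enforced

print(u_galerkin.min() < 0, u_nonneg.min() >= 0)  # → True True
```

The paper's framework solves a properly weighted QP at scale; `nnls` merely stands in for the bound-constrained optimization step.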
Optimization of Nanoparticle-Based SERS Substrates through Large-Scale Realistic Simulations.
Solís, Diego M; Taboada, José M; Obelleiro, Fernando; Liz-Marzán, Luis M; García de Abajo, F Javier
2017-02-15
Surface-enhanced Raman scattering (SERS) has become a widely used spectroscopic technique for chemical identification, providing unbeaten sensitivity down to the single-molecule level. The amplification of the optical near field produced by collective electron excitations -plasmons- in nanostructured metal surfaces gives rise to a dramatic increase by many orders of magnitude in the Raman scattering intensities from neighboring molecules. This effect strongly depends on the detailed geometry and composition of the plasmon-supporting metallic structures. However, the search for optimized SERS substrates has largely relied on empirical data, due in part to the complexity of the structures, whose simulation becomes prohibitively demanding. In this work, we use state-of-the-art electromagnetic computation techniques to produce predictive simulations for a wide range of nanoparticle-based SERS substrates, including realistic configurations consisting of random arrangements of hundreds of nanoparticles with various morphologies. This allows us to derive rules of thumb for the influence of particle anisotropy and substrate coverage on the obtained SERS enhancement and optimum spectral ranges of operation. Our results provide a solid background to understand and design optimized SERS substrates.
An optimization approach for large scale simulations of discrete fracture network flows
NASA Astrophysics Data System (ADS)
Berrone, Stefano; Pieraccini, Sandra; Scialò, Stefano
2014-01-01
In recent papers [1,2] the authors introduced a new method for simulating subsurface flow in a system of fractures based on a PDE-constrained optimization reformulation, removing all difficulties related to mesh generation and providing an easily parallelizable approach to the problem. In this paper we further improve the method by removing the requirement that each fracture have a non-empty portion of its boundary with Dirichlet boundary conditions. This way, Dirichlet boundary conditions are prescribed only on a possibly small portion of the DFN boundary. The proposed generalization of the method in [1,2] relies on a modified definition of the control variables ensuring the non-singularity of the operator on each fracture. A conjugate gradient method is also introduced in order to speed up the minimization process.
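The conjugate gradient step mentioned at the end can be sketched as follows; this is the textbook CG iteration applied to a stand-in symmetric positive-definite system, not the authors' DFN-specific implementation.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=500):
    """Textbook CG for symmetric positive-definite A (a sketch of the
    speed-up step mentioned in the abstract, not the authors' code)."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Small SPD test system standing in for the discretized optimization problem
rng = np.random.default_rng(1)
M = rng.standard_normal((30, 30))
A = M @ M.T + 30 * np.eye(30)
b = rng.standard_normal(30)
x = conjugate_gradient(A, b)
print(np.allclose(A @ x, b, atol=1e-6))  # → True
```

In the PDE-constrained setting, each CG matrix-vector product corresponds to independent per-fracture solves, which is what makes the method easy to parallelize.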
Sdika, Michaël
2008-02-01
This paper presents a new nonrigid monomodality image registration algorithm based on B-splines. The deformation is described by a cubic B-spline field and found by minimizing the energy between a reference image and a deformed version of a floating image. To penalize noninvertible transformations, we propose two different constraints on the Jacobian of the transformation and its derivatives. The problem is modeled as an inequality-constrained optimization problem which is efficiently solved by a combination of the multipliers method and the L-BFGS algorithm, to handle the large number of variables and constraints in the registration of 3-D images. Numerical experiments are presented on magnetic resonance images using synthetic deformations and atlas-based segmentation.
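A minimal analogue of the energy-minimization step, assuming a 1-D toy registration problem and using SciPy's L-BFGS-B (a bound-constrained variant) in place of the paper's multipliers-method/L-BFGS combination on the Jacobian constraints:

```python
import numpy as np
from scipy.optimize import minimize

# Toy analogue: recover a 1-D translation by minimizing the SSD energy between
# a reference signal and a shifted floating signal, with bounds on the
# displacement standing in for the paper's invertibility constraints.
x = np.linspace(-5, 5, 201)
reference = np.exp(-x**2)
true_shift = 1.3  # displacement baked into the floating signal

def floating(shift):
    return np.exp(-(x - true_shift + shift) ** 2)

def energy(params):
    return np.sum((reference - floating(params[0])) ** 2)

res = minimize(energy, x0=[0.0], method="L-BFGS-B", bounds=[(-3.0, 3.0)])
print(round(float(res.x[0]), 2))  # recovered shift, should be close to 1.3
```

The real problem optimizes thousands of B-spline coefficients rather than a single scalar, which is why a limited-memory method is essential.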
Framework to trade optimality for local processing in large-scale wavefront reconstruction problems.
Haber, Aleksandar; Verhaegen, Michel
2016-11-15
We show that the minimum variance wavefront estimation problems permit localized approximate solutions, in the sense that the wavefront value at a point (excluding unobservable modes, such as the piston mode) can be approximated by a linear combination of the wavefront slope measurements in the point's neighborhood. This enables us to efficiently compute a wavefront estimate by performing a single sparse matrix-vector multiplication. Moreover, our results open the possibility for the development of wavefront estimators that can be easily implemented in a decentralized/distributed manner, and in which the estimate optimality can be easily traded for computational efficiency. We numerically validate our approach on Hudgin wavefront sensor geometries, and the results can be easily generalized to Fried geometries.
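The computational pattern being advocated (one sparse matrix-vector product per wavefront estimate) can be sketched as follows; the banded averaging weights here are placeholders, not the minimum-variance weights derived in the paper.

```python
import numpy as np
from scipy import sparse

# Sketch of the computational pattern only: once localized weights are known,
# the whole wavefront estimate is a single sparse matrix-vector product with
# a banded weight matrix, O(n * bandwidth) instead of O(n^2).
n = 1000                     # number of wavefront points / slope measurements
bandwidth = 5                # neighbourhood radius used per point
offsets = list(range(-bandwidth, bandwidth + 1))
bands = [np.full(n - abs(k), 1.0 / (2 * bandwidth + 1)) for k in offsets]
W = sparse.diags(bands, offsets=offsets, format="csr")

slopes = np.random.default_rng(2).standard_normal(n)
estimate = W @ slopes        # one sparse matvec per estimate
print(estimate.shape, W.nnz < 0.02 * n * n)
```

Because each row of `W` touches only a neighbourhood, rows can also be evaluated independently on separate processors, which is the decentralized implementation the abstract alludes to.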
SWAP-Assembler 2: Optimization of De Novo Genome Assembler at Large Scale
Meng, Jintao; Seo, Sangmin; Balaji, Pavan; Wei, Yanjie; Wang, Bingqiang; Feng, Shengzhong
2016-01-01
In this paper, we analyze and optimize the most time-consuming steps of the SWAP-Assembler, a parallel genome assembler, so that it can scale to a large number of cores for huge genomes with the size of sequencing data ranging from terabytes to petabytes. According to the performance analysis results, the most time-consuming steps are input parallelization, k-mer graph construction, and graph simplification (edge merging). For the input parallelization, the input data is divided into virtual fragments with nearly equal size, and the start position and end position of each fragment are automatically separated at the beginning of the reads. In k-mer graph construction, in order to improve the communication efficiency, the message size is kept constant between any two processes by proportionally increasing the number of nucleotides to the number of processes in the input parallelization step for each round. The memory usage is also decreased because only a small part of the input data is processed in each round. With graph simplification, the communication protocol reduces the number of communication loops from four to two and decreases the idle communication time. The optimized assembler is denoted as SWAP-Assembler 2 (SWAP2). In our experiments using a 1000 Genomes project dataset of 4 terabytes (the largest dataset ever used for assembly) on the supercomputer Mira, the results show that SWAP2 scales to 131,072 cores with an efficiency of 40%. We also compared our work with both the HipMer assembler and the SWAP-Assembler. On the Yanhuang dataset of 300 gigabytes, SWAP2 shows a 3X speedup and 4X better scalability compared with the HipMer assembler and is 45 times faster than the SWAP-Assembler. The SWAP2 software is available at https://sourceforge.net/projects/swapassembler.
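The k-mer graph construction step can be illustrated by a minimal serial sketch (the assembler itself builds this graph in parallel, with the fixed-size messaging described above):

```python
from collections import defaultdict

def kmer_graph(reads, k):
    """Minimal serial sketch of k-mer (de Bruijn) graph construction; the
    assembler does this in parallel across many processes."""
    edges = defaultdict(int)
    for read in reads:
        for i in range(len(read) - k):
            # edge from each k-mer to the next overlapping k-mer
            edges[(read[i:i + k], read[i + 1:i + k + 1])] += 1
    return edges

reads = ["ACGTACGT", "CGTACGTT"]
g = kmer_graph(reads, 3)
print(g[("ACG", "CGT")])  # → 3 (twice in the first read, once in the second)
```

Graph simplification then merges unambiguous chains of such edges into longer contigs, which is the edge-merging step the abstract optimizes.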
Weighted modularity optimization for crisp and fuzzy community detection in large-scale networks
NASA Astrophysics Data System (ADS)
Cao, Jie; Bu, Zhan; Gao, Guangliang; Tao, Haicheng
2016-11-01
Community detection is a classic and very difficult task in the field of complex network analysis, principally for its applications in domains such as social or biological network analysis. One of the most widely used technologies for community detection in networks is the maximization of the quality function known as modularity. However, existing work has proved that modularity maximization algorithms for community detection may fail to resolve small communities. Here we present a new community detection method, which is able to find crisp and fuzzy communities in undirected and unweighted networks by maximizing weighted modularity. The algorithm derives new edge weights using the cosine similarity in order to circumvent the resolution limit problem. Then a new local moving heuristic based on weighted modularity optimization is proposed to cluster the updated network. Finally, the set of potentially attractive clusters for each node is computed, to further uncover the crisp and fuzzy partitions of the network. We give demonstrative applications of the algorithm to a set of synthetic benchmark networks and six real-world networks and find that it outperforms the current state-of-the-art proposals (even those aimed at finding overlapping communities) in terms of quality and scalability.
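The first step of the algorithm, re-weighting edges by cosine similarity, might look like the following sketch; the exact similarity variant used by the authors and the weighted local-moving heuristic are not reproduced here.

```python
import numpy as np

def cosine_edge_weights(adj):
    """Re-weight each edge by the cosine similarity of its endpoints'
    adjacency (neighbourhood) vectors; a sketch of the re-weighting idea."""
    norms = np.linalg.norm(adj, axis=1)
    weights = {}
    for i, j in zip(*np.nonzero(np.triu(adj))):
        weights[(int(i), int(j))] = float(adj[i] @ adj[j] / (norms[i] * norms[j]))
    return weights

# Two triangles joined by a single bridge edge (0-1-2 and 3-4-5, bridge 2-3)
adj = np.zeros((6, 6))
for a, b in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    adj[a, b] = adj[b, a] = 1.0

w = cosine_edge_weights(adj)
print(w[(0, 1)] > w[(2, 3)])  # intra-community edges get larger weights → True
```

Down-weighting bridge edges this way makes small, dense groups harder to absorb into larger clusters, which is how the re-weighting counteracts the resolution limit.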
Optimization of culture media for large-scale lutein production by heterotrophic Chlorella vulgaris.
Jeon, Jin Young; Kwon, Ji-Sue; Kang, Soon Tae; Kim, Bo-Ra; Jung, Yuchul; Han, Jae Gap; Park, Joon Hyun; Hwang, Jae Kwan
2014-01-01
Lutein is a carotenoid with a purported role in protecting eyes from oxidative stress, particularly the high-energy photons of blue light. Statistical optimization was performed on growth media to support higher lutein production by heterotrophically cultivated Chlorella vulgaris. The effect of media composition on lutein production by C. vulgaris was examined using fractional factorial design (FFD) and central composite design (CCD). The results indicated that the presence of magnesium sulfate, EDTA-2Na, and trace metal solution significantly affected lutein production. The optimum concentrations for lutein production were found to be 0.34 g/L, 0.06 g/L, and 0.4 mL/L for MgSO4·7H2O, EDTA-2Na, and trace metal solution, respectively. These values were validated using a 5-L jar fermenter. Lutein concentration increased by almost 80% (from 139.64 ± 12.88 mg/L to 252.75 ± 12.92 mg/L) after 4 days. Moreover, the lutein concentration was not reduced as the cultivation was scaled up to 25,000 L (260.55 ± 3.23 mg/L) and 240,000 L (263.13 ± 2.72 mg/L). These observations suggest C. vulgaris as a potential lutein source.
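The screening stage of such a design can be sketched with a two-level full factorial over the three significant factors; the factor levels and the response function below are synthetic stand-ins for the culture experiments, not the study's data.

```python
import itertools

# Two-level full factorial over the three media factors the abstract found
# significant (units g/L, g/L, mL/L); levels are illustrative assumptions.
levels = {
    "MgSO4.7H2O": (0.2, 0.4),
    "EDTA-2Na": (0.04, 0.07),
    "trace_metals": (0.2, 0.5),
}

def synthetic_lutein_yield(mg, edta, trace):
    # toy response surface with an interior optimum near the reported optima
    return 250 - 800 * (mg - 0.34) ** 2 - 2000 * (edta - 0.06) ** 2 - 100 * (trace - 0.4) ** 2

runs = list(itertools.product(*levels.values()))
best = max(runs, key=lambda r: synthetic_lutein_yield(*r))
print(len(runs), best)  # 2^3 = 8 runs; best corner of the design cube
```

A real CCD would add center and axial points to these corners so that the curvature of the response surface, and hence the interior optimum, can be estimated.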
Hydro-economic Modeling: Reducing the Gap between Large Scale Simulation and Optimization Models
NASA Astrophysics Data System (ADS)
Forni, L.; Medellin-Azuara, J.; Purkey, D.; Joyce, B. A.; Sieber, J.; Howitt, R.
2012-12-01
The integration of hydrological and socio-economic components into hydro-economic models has become essential for water resources policy and planning analysis. In this study we integrate the economic value of water in irrigated agricultural production using SWAP (a StateWide Agricultural Production model for California) and WEAP (Water Evaluation and Planning system), a climate-driven hydrological model. The integration of the models is performed using a step-function approximation of water demand curves from SWAP, and by relating the demand tranches to the priority scheme in WEAP. To do so, a modified version of SWAP called SWEAP was developed, which has the Planning Area delimitations of WEAP, a maximum entropy model to estimate evenly sized steps (tranches) of derived water demand functions, and the translation of water tranches into crop land. In addition, a modified version of WEAP called ECONWEAP was created, with minor structural changes for the incorporation of land decisions from SWEAP and a series of iterations run via an external VBA script. This paper shows the validity of this integration by comparing revenues from WEAP versus ECONWEAP, as well as an assessment of the tranche approximation. Results show a significant increase in the resulting agricultural revenues for our case study in California's Central Valley using ECONWEAP, while maintaining the same hydrology and regional water flows. These results highlight the gains from allocating water based on its economic value compared to priority-based water allocation systems. Furthermore, this work shows the potential of integrating optimization and simulation-based hydrologic models like ECONWEAP.
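The step-function approximation of a demand curve can be sketched as follows, assuming a hypothetical linear inverse-demand curve in place of the SWAP-derived functions:

```python
import numpy as np

def step_tranches(price_fn, q_max, n_tranches):
    """Approximate a smooth inverse-demand curve by equally sized quantity
    tranches, each carrying the average willingness-to-pay over its interval
    (a simplified stand-in for the SWAP-derived demand steps)."""
    edges = np.linspace(0.0, q_max, n_tranches + 1)
    tranches = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        q = np.linspace(lo, hi, 50)
        tranches.append((hi - lo, float(np.mean(price_fn(q)))))  # (width, step price)
    return tranches

# Hypothetical linear inverse-demand curve: price = 100 - 0.5 * quantity
tranches = step_tranches(lambda q: 100 - 0.5 * q, q_max=100, n_tranches=4)
print(tranches)
```

Because each tranche carries a single price, the tranches can be mapped directly onto WEAP's discrete priority levels, with higher-valued tranches served first.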
NASA Astrophysics Data System (ADS)
Manfredi, Sabato
2016-06-01
Large-scale dynamic systems are becoming highly pervasive, with applications ranging from systems biology, environment monitoring, and sensor networks to power systems. They are characterised by high dimensionality, complexity, and uncertainty in the node dynamics/interactions, and they require increasingly computationally demanding methods for analysis and control design as the network size and node system/interaction complexity grow. It is therefore a challenging problem to find scalable computational methods for the distributed control design of large-scale networks. In this paper, we investigate the robust distributed stabilisation problem of large-scale nonlinear multi-agent systems (MASs) composed of non-identical (heterogeneous) linear dynamical systems coupled by uncertain nonlinear time-varying interconnections. By employing Lyapunov stability theory and the linear matrix inequality (LMI) technique, new conditions are given for the distributed control design of large-scale MASs that can be easily solved with the MATLAB toolbox. Stabilisability of each node dynamic is a sufficient assumption for designing a globally stabilising distributed control. The proposed approach improves on some existing LMI-based results for MASs by both overcoming their computational limits and extending the applicative scenario to large-scale nonlinear heterogeneous MASs. Additionally, the proposed LMI conditions are further reduced in computational requirement in the case of weakly heterogeneous MASs, a common scenario in real applications where network nodes and links are affected by parameter uncertainties. One of the main advantages of the proposed approach is that it allows moving from a centralised towards a distributed computing architecture, so that the expensive computational workload spent solving LMIs may be shared among processors located at the networked nodes, thus increasing the scalability of the approach with the network size.
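The LMI conditions generalize the classical Lyapunov stability test, which can be sketched for a single node dynamic with SciPy's Lyapunov solver (the distributed multi-agent LMIs themselves would require a semidefinite programming solver such as the MATLAB toolbox mentioned above):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Classical test underlying the paper's LMIs: a node dynamic A is stable iff
# A^T P + P A = -Q has a symmetric positive definite solution P for some Q > 0.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])            # a stable node dynamic (eigenvalues -1, -2)
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)  # solves A^T P + P A = -Q

residual = A.T @ P + P @ A + Q
print(np.allclose(residual, 0), np.all(np.linalg.eigvalsh(P) > 0))  # → True True
```

In the distributed setting, each node would certify such a `P` locally, with the interconnection uncertainty absorbed into additional LMI terms.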
2017-01-01
Large-scale metabolic profiling requires the development of novel economical high-throughput analytical methods to facilitate characterization of systemic metabolic variation in population phenotypes. We report a fit-for-purpose direct infusion nanoelectrospray high-resolution mass spectrometry (DI-nESI-HRMS) method with time-of-flight detection for rapid targeted parallel analysis of over 40 urinary metabolites. The newly developed 2 min infusion method requires <10 μL of urine sample and generates high-resolution MS profiles in both positive and negative polarities, enabling further data mining and relative quantification of hundreds of metabolites. Here we present optimization of the DI-nESI-HRMS method in a detailed step-by-step guide and provide a workflow with rigorous quality assessment for large-scale studies. We demonstrate for the first time the application of the method for urinary metabolic profiling in human epidemiological investigations. Implementation of the presented DI-nESI-HRMS method enabled cost-efficient analysis of >10 000 24 h urine samples from the INTERMAP study in 12 weeks and >2200 spot urine samples from the ARIC study in <3 weeks with the required sensitivity and accuracy. We illustrate the application of the technique by characterizing the differences in metabolic phenotypes of the USA and Japanese population from the INTERMAP study. PMID:28245357
Kang, Chao; Wen, Ting-Chi; Kang, Ji-Chuan; Meng, Ze-Bing; Li, Guang-Rong; Hyde, Kevin D.
2014-01-01
Cordycepin is one of the most important bioactive compounds produced by species of Cordyceps sensu lato, but it is hard to produce large amounts of this substance industrially. In this work, single-factor design, Plackett-Burman design, and central composite design were employed to establish the key factors and identify optimal culture conditions that improved cordycepin production. Using these culture conditions, a maximum cordycepin production of 2008.48 mg/L was achieved for a 700 mL working volume in 1000 mL glass jars, and the total cordycepin content reached 1405.94 mg/bottle. This method provides an effective way to increase cordycepin production at large scale. The strategies used in this study could find wide application in other fermentation processes. PMID:25054182
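The Plackett-Burman screening step can be sketched as follows; for 8 runs the two-level design columns can be taken from a Sylvester Hadamard matrix, and the factor responses below are synthetic, not the study's measurements.

```python
import numpy as np

# Two-level screening design sketch (Plackett-Burman-type): for 8 runs the
# design matrix comes from a Sylvester Hadamard matrix; columns 1..7 screen
# up to 7 culture factors, with main effects estimated from run contrasts.
H2 = np.array([[1, 1], [1, -1]])
H8 = np.kron(np.kron(H2, H2), H2)          # 8x8 Hadamard matrix
design = H8[:, 1:]                          # 8 runs x 7 factor columns (+1/-1)

# synthetic responses: factor 0 has a strong positive effect, others none
rng = np.random.default_rng(4)
y = 100 + 12 * design[:, 0] + rng.normal(0, 1, 8)
effects = design.T @ y / 4                  # main-effect estimate per factor
print(int(np.argmax(np.abs(effects))))      # → 0, the truly active factor
```

Factors flagged here would then be carried forward into the central composite design to locate the optimum, mirroring the workflow in the abstract.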
NASA Astrophysics Data System (ADS)
Weitnauer, C.; Beck, C.; Jacobeit, J.
2013-12-01
In the last decades the critical increase in emissions of air pollutants such as nitrogen dioxide, sulfur oxides and particulate matter, especially in urban areas, has become a problem for the environment as well as for human health. Several studies confirm the risk posed by high-concentration episodes of particulate matter with an aerodynamic diameter < 10 μm (PM10) for the respiratory tract and for cardiovascular disease. Furthermore, it is known that local meteorological and large-scale atmospheric conditions are important influencing factors on local PM10 concentrations. With climate changing rapidly, these connections need to be better understood in order to provide estimates of climate-change-related consequences for air quality management purposes. For quantifying the link between large-scale atmospheric conditions and local PM10 concentrations, circulation- and weather-type classifications are used in a number of studies employing different statistical approaches. Thus far only few systematic attempts have been made to modify existing or to develop new weather- and circulation-type classifications in order to improve their ability to resolve local PM10 concentrations. In this contribution, existing weather- and circulation-type classifications, performed on daily 2.5° x 2.5° gridded parameters of the NCEP/NCAR reanalysis data set, are optimized with regard to their discriminative power for local PM10 concentrations at 49 Bavarian measurement sites for the period 1980 to 2011. Most of the PM10 stations are situated in urban areas covering urban background, traffic and industry related pollution regimes. The range of regimes is extended by a few rural background stations. To characterize the correspondence between the PM10 measurements of the different stations by spatial patterns, a regionalization by an s-mode principal component analysis is performed on the high-pass filtered data. The optimization of the circulation- and weather types is implemented using two representative
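An s-mode PCA of the kind mentioned above (variables = stations, observations = days) can be sketched with a plain SVD on synthetic data; the high-pass filtering and the real 49-station data set are omitted here.

```python
import numpy as np

# Minimal s-mode PCA sketch: rows = days, columns = stations, so the leading
# components describe spatial patterns shared across stations.
rng = np.random.default_rng(3)
days, stations = 500, 8
common = rng.standard_normal(days)                   # one shared regional signal
data = np.outer(common, rng.uniform(0.5, 1.5, stations))
data += 0.1 * rng.standard_normal((days, stations))  # station-level noise

anomalies = data - data.mean(axis=0)                 # center each station series
_, s, _ = np.linalg.svd(anomalies, full_matrices=False)
explained = s**2 / np.sum(s**2)
print(explained[0] > 0.9)  # one dominant spatial pattern, as constructed → True
```

Stations loading strongly on the same leading components would be grouped into one pollution region, which is the regionalization step described in the abstract.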
NASA Astrophysics Data System (ADS)
Corbin, Charles D.
Demand management is an important component of the emerging Smart Grid, and a potential solution to the supply-demand imbalance occurring increasingly as intermittent renewable electricity is added to the generation mix. Model predictive control (MPC) has shown great promise for controlling HVAC demand in commercial buildings, making it an ideal solution to this problem. MPC is believed to hold similar promise for residential applications, yet very few examples exist in the literature despite a growing interest in residential demand management. This work explores the potential for residential buildings to shape electric demand at the distribution feeder level in order to reduce peak demand, reduce system ramping, and increase load factor using detailed sub-hourly simulations of thousands of buildings coupled to distribution power flow software. More generally, this work develops a methodology for the directed optimization of residential HVAC operation using a distributed but directed MPC scheme that can be applied to today's programmable thermostat technologies to address the increasing variability in electric supply and demand. Case studies incorporating varying levels of renewable energy generation demonstrate the approach and highlight important considerations for large-scale residential model predictive control.
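The receding-horizon idea behind MPC can be sketched with a deliberately tiny model; the first-order thermal dynamics, price signal, and comfort bound below are all illustrative assumptions, far simpler than the dissertation's sub-hourly building simulations.

```python
import itertools
import numpy as np

# Minimal receding-horizon (MPC) sketch: a first-order thermal model chooses
# on/off HVAC actions to stay in the comfort band while a price signal
# discourages consumption during the peak.
alpha, gain = 0.9, 2.0          # thermal retention and heating gain per step
outdoor, horizon = 0.0, 4       # outdoor temperature (deg C) and lookahead
price = np.array([1, 1, 5, 5, 5, 1, 1, 1], dtype=float)  # peak in steps 2-4

def plan(temp, t):
    """Enumerate all on/off sequences over the horizon and pick the cheapest
    one that keeps temperature above the 18 deg C comfort bound."""
    best, best_cost = None, np.inf
    for seq in itertools.product([0, 1], repeat=horizon):
        T, cost, ok = temp, 0.0, True
        for k, u in enumerate(seq):
            T = alpha * T + (1 - alpha) * outdoor + gain * u
            cost += price[min(t + k, len(price) - 1)] * u
            ok = ok and T >= 18.0
        if ok and cost < best_cost:
            best, best_cost = seq, cost
    return best[0]  # apply only the first action, then re-plan

temp, actions = 21.0, []
for t in range(len(price)):
    u = plan(temp, t)
    actions.append(u)
    temp = alpha * temp + (1 - alpha) * outdoor + gain * u
print(actions)
```

Real MPC replaces the brute-force enumeration with a structured optimization, but the pattern of planning over a horizon and applying only the first action is the same one used to pre-heat before, and coast through, the price peak.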
NASA Astrophysics Data System (ADS)
Tyralis, Hristos; Karakatsanis, Georgios; Tzouka, Katerina; Mamassis, Nikos
2015-04-01
The Greek electricity system is examined for the period 2002-2014. The demand load data are analysed at various time scales (hourly, daily, seasonal and annual) and related to the mean daily temperature and the gross domestic product (GDP) of Greece over the same period. The energy demand prediction, a product of the Greek Independent Power Transmission Operator, is also compared with the demand load. Interesting results are derived concerning the change in the electricity demand pattern after the year 2010, a change related to the decrease of the GDP during the period 2010-2014. The results of the analysis will be used in the development of an energy forecasting system that will be part of a framework for optimal planning of a large-scale hybrid renewable energy system in which hydropower plays the dominant role. Acknowledgement: This research was funded by the Greek General Secretariat for Research and Technology through the research project Combined REnewable Systems for Sustainable ENergy DevelOpment (CRESSENDO; grant number 5145)
Zhang, Jun; Koo, Imhoi; Wang, Bing; Gao, Qing-Wei; Zheng, Chun-Hou; Zhang, Xiang
2012-01-01
Retention index (RI) is useful for metabolite identification. However, when RI is integrated with mass spectral similarity for metabolite identification, many conflicting RI threshold setups are reported in the literature. In this study, a large-scale test dataset of 5844 compounds with both mass spectra and RI information was created from the National Institute of Standards and Technology (NIST) repetitive mass spectra (MS) and RI library. Three MS similarity measures were used to investigate the accuracy of compound identification on the test dataset: the NIST composite measure, the real part of the Discrete Fourier Transform (DFT.R), and the detail of the Discrete Wavelet Transform (DWT.D). To imitate real identification experiments, the NIST MS main library was employed as the reference library and the test dataset was used as search data. Our study shows that the optimal RI thresholds are 22, 15, and 15 i.u. for the NIST composite, DFT.R and DWT.D measures, respectively, when RI and mass spectral similarity are integrated for compound identification. Compared to mass spectrum matching alone, using both RI and mass spectral matching can improve the identification accuracy by 1.7%, 3.5%, and 3.5% for the three mass spectral similarity measures, respectively. It is concluded that the improvement from RI matching for compound identification depends heavily on the mass spectral similarity measure and the accuracy of the RI data. PMID:22771253
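The RI/MS integration scheme being evaluated can be sketched as a filter-then-rank rule; the library entries and similarity scores below are fabricated for illustration.

```python
# Sketch of the integration scheme: candidates whose retention index differs
# from the measured value by more than a threshold are discarded, and the
# survivors are ranked by mass-spectral similarity.
RI_THRESHOLD = 15  # i.u., the optimum reported for the DFT.R / DWT.D measures

library = [
    {"name": "compound A", "ri": 1205, "ms_score": 0.91},
    {"name": "compound B", "ri": 1212, "ms_score": 0.95},
    {"name": "compound C", "ri": 1340, "ms_score": 0.97},  # strong MS match, wrong RI
]

def identify(measured_ri, candidates, threshold=RI_THRESHOLD):
    survivors = [c for c in candidates if abs(c["ri"] - measured_ri) <= threshold]
    return max(survivors, key=lambda c: c["ms_score"])["name"] if survivors else None

print(identify(1210, library))  # → compound B (C is excluded by the RI filter)
```

Setting the threshold too tight discards true matches with noisy RI values, while too loose a threshold lets spurious high-similarity spectra win, which is why the optimal value depends on the similarity measure.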
NASA Astrophysics Data System (ADS)
Martin, Elly; Treeby, Bradley E.
2015-10-01
To increase the effectiveness of high intensity focused ultrasound (HIFU) treatments, prediction of ultrasound propagation in biological tissues is essential, particularly where bones are present in the field. This requires complex full-wave computational models which account for nonlinearity, absorption, and heterogeneity. These models must be properly validated but there is a lack of analytical solutions which apply in these conditions. Experimental validation of the models is therefore essential. However, accurate measurement of HIFU fields is not trivial. Our aim is to establish rigorous methods for obtaining reference data sets with which to validate tissue realistic simulations of ultrasound propagation. Here, we present preliminary measurements which form an initial validation of simulations performed using the k-Wave MATLAB toolbox. Acoustic pressure was measured on a plane in the field of a focused ultrasound transducer in free field conditions to be used as a Dirichlet boundary condition for simulations. Rectangular and wedge shaped olive oil scatterers were placed in the field and further pressure measurements were made in the far field for comparison with simulations. Good qualitative agreement was observed between the measured and simulated nonlinear pressure fields.
NASA Astrophysics Data System (ADS)
Chatterjee, Renuka Gonella
2000-10-01
Reliable delivery of electric power is a major concern in both regulated and deregulated energy markets. Power transfers are limited due to voltage limit violations, thermal limits on transmission lines and instability. Voltage collapse is a catastrophic instability leading to cascaded tripping of network and generation equipment, eventually causing blackouts. Most importantly, contingencies can trigger voltage collapse. The traditional tool for determining the distance to collapse is the repeated power flow technique. A power flow takes about 3 minutes for a case with over 18,000 buses. On average it takes about 10 power flow solutions to determine the distance to collapse, requiring 30 minutes of computation time. An attractive alternative is continuation, which takes approximately 15 minutes to compute the entire trajectory and the exact distance to collapse. Using a continuation method to compute the distance to collapse for 1336 contingencies would take about 14 days. Thus faster methods of contingency analysis for voltage collapse are required for planning and operating studies. Three new methodologies (lambda/MVA sensitivity, nonlinear sensitivity, and the 2n+1 method) are presented for fast and accurate voltage collapse contingency analysis. Linear sensitivity techniques with admittance parameterization give poor distance-to-collapse predictions for large admittance branches. A new lambda/MVA sensitivity technique with branch MVA parameterization was developed to correct this error. The lambda/MVA algorithm can estimate 6689 single-branch-outage contingency bifurcation points of a 3493-bus power system with less than 3% relative error, except for two branches within 7%, in less than 4 minutes on a Pentium Pro 180 MHz PC. To facilitate analysis of multi-terminal branch outages and generator contingencies, the nonlinear sensitivity method was developed. This method can rank 1336 multi-terminal contingencies of an 18,000-bus case with a speedup of 112 compared to
NASA Technical Reports Server (NTRS)
Nash, Stephen G.; Polyak, R.; Sofer, Ariela
1994-01-01
When a classical barrier method is applied to the solution of a nonlinear programming problem with inequality constraints, the Hessian matrix of the barrier function becomes increasingly ill-conditioned as the solution is approached. As a result, it may be desirable to consider alternative numerical algorithms. We compare the performance of two methods motivated by barrier functions. The first is a stabilized form of the classical barrier method, where a numerically stable approximation to the Newton direction is used when the barrier parameter is small. The second is a modified barrier method where a barrier function is applied to a shifted form of the problem, and the resulting barrier terms are scaled by estimates of the optimal Lagrange multipliers. The condition number of the Hessian matrix of the resulting modified barrier function remains bounded as the solution to the constrained optimization problem is approached. Both of these techniques can be used in the context of a truncated-Newton method, and hence can be applied to large problems, as well as on parallel computers. In this paper, both techniques are applied to problems with bound constraints and we compare their practical behavior.
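The classical barrier scheme whose conditioning the paper analyzes can be sketched for bound constraints; this is a basic log-barrier loop with Newton inner steps, not the stabilized or modified variants the paper compares.

```python
import numpy as np

def barrier_minimize(grad_f, hess_f, x0, lower, upper, mu0=1.0, shrink=0.1, n_outer=8):
    """Classical logarithmic barrier method for bounds l <= x <= u, with
    clipped Newton steps on each barrier subproblem.  The subproblem Hessian
    blows up near active bounds as mu -> 0, which is exactly the
    ill-conditioning the paper's two alternatives address."""
    x, mu = np.asarray(x0, float), mu0
    for _ in range(n_outer):
        for _ in range(60):  # Newton steps on f(x) - mu*sum(log(x-l) + log(u-x))
            g = grad_f(x) - mu / (x - lower) + mu / (upper - x)
            h = hess_f(x) + mu / (x - lower) ** 2 + mu / (upper - x) ** 2
            x = np.clip(x - g / h, lower + 1e-9, upper - 1e-9)
        mu *= shrink  # tighten the barrier
    return x

# min (x-3)^2 subject to 0 <= x <= 2: the upper bound is active at the solution
x_opt = barrier_minimize(lambda x: 2 * (x - 3), lambda x: 2.0 + 0 * x,
                         np.array([1.0]), 0.0, 2.0)
print(round(float(x_opt[0]), 2))  # → 2.0
```

Note how the barrier Hessian term mu/(u-x)^2 grows without bound as the iterates approach the active bound; the modified barrier method avoids this by shifting the constraints and scaling the barrier terms with multiplier estimates, keeping the Hessian's condition number bounded.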
Jiang, Chaowei; Wu, S. T.; Hu, Qiang; Feng, Xueshang
2014-05-10
Solar filaments are commonly thought to be supported in magnetic dips, in particular, in those of magnetic flux ropes (FRs). In this Letter, based on the observed photospheric vector magnetogram, we implement a nonlinear force-free field (NLFFF) extrapolation of a coronal magnetic FR that supports a large-scale intermediate filament between an active region and a weak polarity region. This result is a first, in the sense that current NLFFF extrapolations including the presence of FRs are limited to relatively small-scale filaments that are close to sunspots and along main polarity inversion lines (PILs) with strong transverse field and magnetic shear, and the existence of an FR is usually predictable. In contrast, the present filament lies along the weak-field region (photospheric field strength ≲ 100 G), where the PIL is very fragmented due to small parasitic polarities on both sides of the PIL and the transverse field has a low signal-to-noise ratio. Thus, extrapolating a large-scale FR in such a case represents a far more difficult challenge. We demonstrate that our CESE-MHD-NLFFF code is sufficient for the challenge. The numerically reproduced magnetic dips of the extrapolated FR match observations of the filament and its barbs very well, which strongly supports the FR-dip model for filaments. The filament is stably sustained because the FR is weakly twisted and strongly confined by the overlying closed arcades.
Mauch, H; Kümel, G; Hammer, H J
1980-01-01
For the preparation of gram amounts of IgM from human sera, sedimentation at 100,000 g or treatment with ZnSO4 of the redissolved "euglobulin" precipitate was compared to direct precipitation from the clarified serum by boric acid. Three alternative large-scale purification procedures were developed, leading to an IgM sample characterized as pure by various criteria. Inclusion of protein A chromatography proved to enhance the yield considerably.
NASA Astrophysics Data System (ADS)
Uritsky, V. M.; Davila, J. M.; Jones, S. I.
2014-12-01
Solar Probe Plus and Solar Orbiter will provide detailed measurements in the inner heliosphere magnetically connected with the topologically complex and eruptive solar corona. Interpretation of these measurements will require accurate reconstruction of the large-scale coronal magnetic field. In a related presentation by S. Jones et al., we argue that such reconstruction can be performed using photospheric extrapolation methods constrained by white-light coronagraph images. Here, we present the image-processing component of this project dealing with an automated segmentation of fan-like coronal loop structures. In contrast to the existing segmentation codes designed for detecting small-scale closed loops in the vicinity of active regions, we focus on the large-scale geometry of the open-field coronal features observed at significant radial distances from the solar surface. The coronagraph images used for the loop segmentation are transformed into a polar coordinate system and undergo radial detrending and initial noise reduction. The preprocessed images are subject to an adaptive second order differentiation combining radial and azimuthal directions. An adjustable thresholding technique is applied to identify candidate coronagraph features associated with the large-scale coronal field. A blob detection algorithm is used to extract valid features and discard noisy data pixels. The obtained features are interpolated using higher-order polynomials which are used to derive empirical directional constraints for magnetic field extrapolation procedures based on photospheric magnetograms.
Newton Methods for Large Scale Problems in Machine Learning
ERIC Educational Resources Information Center
Hansen, Samantha Leigh
2014-01-01
The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss…
Gain optimization with nonlinear controls
NASA Technical Reports Server (NTRS)
Slater, G. L.; Kandadai, R. D.
1982-01-01
An algorithm has been developed for the analysis and design of controls for nonlinear systems. The technical approach is to use statistical linearization to model the nonlinear dynamics of a system. A covariance analysis is performed to determine the behavior of the dynamical system and a quadratic cost function. Expressions for the cost function and its derivatives are determined so that numerical optimization techniques can be applied to determine optimal feedback laws. The primary application in this report is the design of controls for nominally linear systems in which the controls are saturated or limited by fixed constraints. The analysis is general, however, and numerical computation requires only that the specific nonlinearity be considered in the analysis.
Multilevel algorithms for nonlinear optimization
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia; Dennis, J. E., Jr.
1994-01-01
Multidisciplinary design optimization (MDO) gives rise to nonlinear optimization problems characterized by a large number of constraints that naturally occur in blocks. We propose a class of multilevel optimization methods motivated by the structure and number of constraints and by the expense of the derivative computations for MDO. The algorithms are an extension to the nonlinear programming problem of the successful class of local Brown-Brent algorithms for nonlinear equations. Our extensions allow the user to partition constraints into arbitrary blocks to fit the application, and they separately process each block and the objective function, restricted to certain subspaces. The methods use trust regions as a globalization strategy, and they have been shown to be globally convergent under reasonable assumptions. The multilevel algorithms can be applied to all classes of MDO formulations. Multilevel algorithms for solving nonlinear systems of equations are a special case of the multilevel optimization methods. In this case, they can be viewed as a trust-region globalization of the Brown-Brent class.
Tong, Shaocheng; Liu, Changliang; Li, Yongming; Zhang, Huaguang
2011-04-01
In this paper, an adaptive fuzzy decentralized robust output feedback control approach is proposed for a class of large-scale strict-feedback nonlinear systems whose states are not measured. The nonlinear systems are assumed to possess unstructured uncertainties, time-varying delays, and unknown high-frequency gain sign. Fuzzy logic systems are used to approximate the unstructured uncertainties, K-filters are designed to estimate the unmeasured states, and a special Nussbaum gain function is introduced to solve the problem of unknown high-frequency gain sign. Combining the backstepping technique with adaptive fuzzy control theory, an adaptive fuzzy decentralized robust output feedback control scheme is developed. In order to establish the stability of the closed-loop system, a new lemma is given and proved. Based on this lemma and Lyapunov-Krasovskii functions, it is proved that all the signals in the closed-loop system are uniformly ultimately bounded and that the tracking errors converge to a small neighborhood of the origin. The effectiveness of the proposed approach is illustrated by simulation results.
NASA Astrophysics Data System (ADS)
Zhang, Jingwen; Wang, Xu; Liu, Pan; Lei, Xiaohui; Li, Zejun; Gong, Wei; Duan, Qingyun; Wang, Hao
2017-01-01
The optimization of a large-scale reservoir system is time-consuming due to its intrinsic characteristics of non-commensurable objectives and high dimensionality. One way to address the problem is to employ an efficient multi-objective optimization algorithm in the derivation of large-scale reservoir operating rules. In this study, the Weighted Multi-Objective Adaptive Surrogate Model Optimization (WMO-ASMO) algorithm is used. It consists of three steps: (1) simplifying the large-scale reservoir operating rules by the aggregation-decomposition model, (2) identifying the most sensitive parameters through multivariate adaptive regression splines (MARS) for dimensional reduction, and (3) reducing computational cost and speeding up the search with WMO-ASMO, embedded with the weighted non-dominated sorting genetic algorithm II (WNSGAII). The intercomparison of the non-dominated sorting genetic algorithm (NSGAII), WNSGAII and WMO-ASMO is conducted in the large-scale reservoir system of the Xijiang river basin in China. Results indicate that: (1) WNSGAII surpasses NSGAII in the median of annual power generation, increased by 1.03% (from 523.29 to 528.67 billion kW h), and in the median of the ecological index, improved by 3.87% (from 1.879 to 1.809) with 500 simulations, owing to the weighted crowding distance; and (2) WMO-ASMO outperforms NSGAII and WNSGAII in terms of better solutions (annual power generation of 530.032 billion kW h and ecological index of 1.675) with 1000 simulations, and computational time reduced by 25% (from 10 h to 8 h) with 500 simulations. Therefore, the proposed method is more efficient and provides a better Pareto frontier.
Practical Aspects of Nonlinear Optimization.
1981-06-19
Only fragments of the scanned report survive. Recoverable citations: E. Levitan and B. Polyak, "Constrained Minimization Methods", USSR Comp. Math. and Math. Physics 6, 1 (1966); J. May, "Solving Nonlinear ..." (title truncated). The surviving problem statement considers constraints d_j, 1 <= j <= m, with the understanding that the set Q so defined has a non-empty interior (is "solid"), and makes no qualitative assumptions on the objective.
Ramamurthy, Byravamurthy
2014-05-05
In this project, we developed scheduling frameworks and algorithms for dynamic bandwidth demands of large-scale science applications. Apart from theoretical approaches such as Integer Linear Programming, Tabu Search, and Genetic Algorithm heuristics, we utilized practical data from the ESnet OSCARS project (from our DOE lab partners) to conduct realistic simulations of our approaches. We disseminated our work through conference presentations, journal papers, and a book chapter. We addressed the problem of scheduling lightpaths over optical wavelength division multiplexed (WDM) networks and published several conference and journal papers on this topic. We also addressed the problem of joint allocation of computing, storage, and networking resources in Grid/Cloud networks and proposed energy-efficient mechanisms for operating optical WDM networks.
Vercelloni, Julie; Caley, M Julian; Kayal, Mohsen; Low-Choy, Samantha; Mengersen, Kerrie
2014-01-01
Recently, attempts to improve decision making in species management have focussed on uncertainties associated with modelling temporal fluctuations in populations. Reducing model uncertainty is challenging; while larger samples improve estimation of species trajectories and reduce statistical errors, they typically amplify variability in observed trajectories. In particular, traditional modelling approaches aimed at estimating population trajectories usually do not account well for nonlinearities and uncertainties associated with multi-scale observations characteristic of large spatio-temporal surveys. We present a Bayesian semi-parametric hierarchical model for simultaneously quantifying uncertainties associated with model structure and parameters, and scale-specific variability over time. We estimate uncertainty across a four-tiered spatial hierarchy of coral cover from the Great Barrier Reef. Coral variability is well described; however, our results show that, in the absence of additional model specifications, conclusions regarding coral trajectories become highly uncertain when considering multiple reefs, suggesting that management should focus more at the scale of individual reefs. The approach presented facilitates the description and estimation of population trajectories and associated uncertainties when variability cannot be attributed to specific causes and origins. We argue that our model can unlock value contained in large-scale datasets, provide guidance for understanding sources of uncertainty, and support better informed decision making.
Bhanu Prakash, G V S; Padmaja, V; Siva Kiran, R R
2008-04-01
Optimization of conidial production was achieved by response surface methodology (RSM), a powerful mathematical approach widely applied in the optimization of fermentation processes, using three substrates (rice, barley, and sorghum) at variable pH, moisture content, and yeast extract concentrations. These three factors were found to be important, affecting Metarhizium anisopliae spore production. A 2³ full factorial central composite design and RSM were applied to determine the optimal level of each variable. A second-order polynomial was determined by multiple regression analysis of the experimental data. Moisture contents of 75.68% for sorghum, 73.21% for barley, and 22.34% for rice produced optimal results. Maximal conidial yield was recorded at a pH of 7.01 for rice, 7.06 for sorghum, and 6.76 for barley.
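The RSM workflow, fitting a second-order polynomial to measured responses and reading off its stationary point, can be sketched for a single factor. The "yield" data below are invented stand-ins for the paper's measurements, and the single-factor fit is a simplification of the three-factor design:

```python
def solve3(A, b):
    # Gaussian elimination with partial pivoting for a 3x3 linear system
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, 3):
            f = M[r][c] / M[c][c]
            for k in range(c, 4):
                M[r][k] -= f * M[c][k]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][k] * x[k] for k in range(r + 1, 3))) / M[r][r]
    return x

# synthetic single-factor response: quadratic "conidial yield" vs. moisture (%)
xs = [60.0, 65.0, 70.0, 75.0, 80.0, 85.0]
ys = [100.0 - 0.5 * (x - 75.0) ** 2 for x in xs]  # stand-in for measured yields

xbar = sum(xs) / len(xs)
zs = [x - xbar for x in xs]  # center the factor for better conditioning

# least-squares fit of y = b0 + b1*z + b2*z^2 via the normal equations
S = [sum(z ** k for z in zs) for k in range(5)]
A = [[S[0], S[1], S[2]], [S[1], S[2], S[3]], [S[2], S[3], S[4]]]
b = [sum(y * z ** k for z, y in zip(zs, ys)) for k in range(3)]
b0, b1, b2 = solve3(A, b)

x_opt = xbar - b1 / (2.0 * b2)  # stationary point of the fitted surface
```

With a negative-definite quadratic term the stationary point is the predicted optimum; here the fit recovers the moisture optimum of 75% built into the synthetic data.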
Guo, Xiwang; Liu, Shixin; Zhou, MengChu; Tian, Guangdong
2016-11-01
Disassembly modeling and planning are meaningful and important to the reuse, recovery, and recycling of obsolete and discarded products. However, existing methods pay little or no attention to resource constraints, e.g., disassembly operators and tools. Thus, a resulting plan may prove ineffective when executed in actual product disassembly. This paper proposes to model and optimize selective disassembly sequences subject to multiresource constraints so as to maximize disassembly profit. Moreover, two scatter search algorithms with different combination operators, namely one with a precedence-preserving crossover combination operator and another with a path-relinking combination operator, are designed to solve the proposed model. Their validity is shown by comparing them with optimization results from the well-known optimization software CPLEX for different cases. The experimental results illustrate the effectiveness of the proposed method.
NASA Technical Reports Server (NTRS)
Doolin, B. F.
1975-01-01
Classes of large scale dynamic systems were discussed in the context of modern control theory. Specific examples discussed were in the technical fields of aeronautics, water resources and electric power.
NASA Astrophysics Data System (ADS)
Onishi, Takashi; Mizuno, Masao; Yoshikawa, Tetsuya; Munemasa, Jun; Mizuno, Masataka; Kihara, Teruo; Araki, Hideki; Shirai, Yasuharu
2013-07-01
Improving the reflow characteristics of sputtered Cu films was attempted by optimizing the sputtering conditions. The reflow characteristics of films deposited under various sputtering conditions were evaluated by measuring their filling level in via holes. It was found that the reflow characteristics of the Cu films are strongly influenced by the deposition parameters. Deposition at low temperatures and the addition of H2 or N2 to the Ar sputtering gas had a significant influence on the reflow characteristics. Imperfections in the Cu films before and after the high-temperature, high-pressure treatments were investigated by positron annihilation spectroscopy. The results showed that low temperature and the addition of H2 or N2 led to films containing a large number of mono-vacancies, which accelerate atomic diffusion creep and dislocation core diffusion creep, improving the reflow characteristics of the Cu films.
Large-scale sequential quadratic programming algorithms
Eldersveld, S.K.
1992-09-01
The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.
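The basic SQP iteration underlying such algorithms can be shown in miniature. The sketch below is a toy, not the MINOS-based implementation: it fixes B to the identity as a stand-in quasi-Newton matrix and solves min x1 + x2 subject to x1^2 + x2^2 = 1 by repeatedly solving the KKT system of the QP subproblem:

```python
def solve3(A, b):
    # small dense linear solve (Gaussian elimination with partial pivoting)
    n = 3
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

# minimize x1 + x2  subject to  x1^2 + x2^2 = 1   (solution: x = (-1/sqrt2, -1/sqrt2))
x1, x2 = -1.0, -1.0
for _ in range(60):
    g = (1.0, 1.0)                # objective gradient
    c = x1 * x1 + x2 * x2 - 1.0   # constraint residual
    a = (2.0 * x1, 2.0 * x2)      # constraint Jacobian
    # QP subproblem with B = I:  min g.d + 0.5|d|^2  s.t.  c + a.d = 0,
    # solved via its KKT system  [B a^T; a 0][d; lam] = [-g; -c]
    KKT = [[1.0, 0.0, a[0]],
           [0.0, 1.0, a[1]],
           [a[0], a[1], 0.0]]
    d1, d2, lam = solve3(KKT, [-g[0], -g[1], -c])
    x1, x2 = x1 + d1, x2 + d2
```

A production SQP code replaces B with a (reduced) quasi-Newton approximation, adds a globalization mechanism, and exploits sparsity, which is exactly where the features enumerated in the abstract come in.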
You, Shutang; Hadley, Stanton W.; Shankar, Mallikarjun; Liu, Yilu
2016-01-12
This paper studies the generation and transmission expansion co-optimization problem with a high wind power penetration rate in the US Eastern Interconnection (EI) power grid. In this paper, the generation and transmission expansion problem for the EI system is modeled as a mixed-integer programming (MIP) problem. Our paper also analyzed a time series generation method to capture the variation and correlation of both load and wind power across regions. The obtained series can be easily introduced into the expansion planning problem and then solved through existing MIP solvers. Simulation results show that the proposed planning model and series generation method can improve the expansion result significantly through modeling more detailed information of wind and load variation among regions in the US EI system. Moreover, the improved expansion plan that combines generation and transmission will aid system planners and policy makers to maximize the social welfare in large-scale power grids.
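The flavor of such generation-and-transmission co-optimization can be conveyed by a deliberately tiny example with hypothetical data, solved by brute-force enumeration of the binary build decisions rather than an MIP solver:

```python
from itertools import product

# toy two-region system (all numbers hypothetical)
demand   = [50, 80]    # MW demanded in regions 0 and 1
existing = [120, 20]   # MW of existing generation capacity
cand_gen  = {"cost": 90, "region": 1, "cap": 70}  # candidate plant in region 1
cand_line = {"cost": 40, "cap": 60}               # candidate tie line between regions

def feasible(build_gen, build_line):
    cap = existing[:]
    if build_gen:
        cap[cand_gen["region"]] += cand_gen["cap"]
    line = cand_line["cap"] if build_line else 0
    surplus0 = cap[0] - demand[0]
    surplus1 = cap[1] - demand[1]
    if surplus0 >= 0 and surplus1 >= 0:
        return True
    # a deficit region may import the other region's surplus over the tie line
    if surplus0 < 0:
        return surplus1 >= 0 and min(surplus1, line) >= -surplus0
    return surplus0 >= 0 and min(surplus0, line) >= -surplus1

best = None
for g, l in product([0, 1], repeat=2):  # enumerate the two binary build decisions
    if feasible(g, l):
        cost = g * cand_gen["cost"] + l * cand_line["cost"]
        if best is None or cost < best[0]:
            best = (cost, g, l)
```

Here the cheap tie line beats the new plant because region 0's surplus can be wheeled to region 1: the kind of generation-versus-transmission trade-off that only a co-optimized plan captures. A real instance replaces the enumeration with an MIP solver over thousands of such variables.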
NASA Astrophysics Data System (ADS)
Gad-El-Hak, Mohamed
"Extreme" events - including climatic events, such as hurricanes, tornadoes, and drought - can cause massive disruption to society, including large death tolls and property damage in the billions of dollars. Events in recent years have shown the importance of being prepared and that countries need to work together to help alleviate the resulting pain and suffering. This volume presents a review of the broad research field of large-scale disasters. It establishes a common framework for predicting, controlling and managing both manmade and natural disasters. There is a particular focus on events caused by weather and climate change. Other topics include air pollution, tsunamis, disaster modeling, the use of remote sensing and the logistics of disaster management. It will appeal to scientists, engineers, first responders and health-care professionals, in addition to graduate students and researchers who have an interest in the prediction, prevention or mitigation of large-scale disasters.
Solving nonlinear equality constrained multiobjective optimization problems using neural networks.
Mestari, Mohammed; Benzirar, Mohammed; Saber, Nadia; Khouil, Meryem
2015-10-01
This paper develops a neural network architecture and a new processing method for solving, in real time, the nonlinear equality constrained multiobjective optimization problem (NECMOP), where several nonlinear objective functions must be optimized in a conflicting situation. In this processing method, the NECMOP is converted to an equivalent scalar optimization problem (SOP). The SOP is then decomposed into several separable subproblems processable in parallel and in a reasonable time by multiplexing switched capacitor circuits. The approach we propose makes use of a decomposition-coordination principle that allows nonlinearity to be treated at a local level and where coordination is achieved through the use of Lagrange multipliers. The modularity and regularity of the neural network architecture proposed herein make it suitable for very large scale integration implementation. An application to the resolution of a physical problem is given to show that the approach possesses some advantages from the algorithmic point of view, and provides processes of resolution often simpler than the usual techniques.
NASA Astrophysics Data System (ADS)
Rafiee, Mohammad; Barrau, Axel; Bayen, Alexandre M.
2013-06-01
This article investigates the performance of Monte Carlo-based estimation methods for estimation of flow state in large-scale open channel networks. After constructing a state space model of the flow based on the Saint-Venant equations, we implement the optimal sampling importance resampling filter to perform state estimation in a case in which measurements are available at every time step. Considering a case in which measurements become available intermittently, a random-map implementation of the implicit particle filter is applied to estimate the state trajectory in the interval between the measurements. Finally, some heuristics are proposed, which are shown to improve the estimation results and lower the computational cost. In the first heuristic, considering the case in which measurements are available at every time step, we apply the implicit particle filter over time intervals of a desired size while incorporating all the available measurements over the corresponding time interval. As a second heuristic, we introduce a maximum a posteriori (MAP) method, which does not require sampling. It will be seen, through implementation, that the MAP method provides more accurate results in the case of our application while having a smaller computational cost. All estimation methods are tested on a network of 19 tidally forced subchannels and 1 reservoir, Clifton Court Forebay, in the Sacramento-San Joaquin Delta in California, and numerical results are presented.
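A bootstrap (sampling importance resampling) filter of the kind discussed above can be sketched on a scalar toy model; the dynamics and noise levels below are invented, not the Saint-Venant network model:

```python
import random, math

random.seed(0)

def simulate(T=30, q=0.3, r=0.5):
    # toy scalar dynamics x' = 0.9x + 1 + process noise, observed with noise
    xs, ys, x = [], [], 0.0
    for _ in range(T):
        x = 0.9 * x + 1.0 + random.gauss(0, q)
        xs.append(x)
        ys.append(x + random.gauss(0, r))
    return xs, ys

def sir_filter(ys, N=500, q=0.3, r=0.5):
    parts = [random.gauss(0, 1) for _ in range(N)]
    ests = []
    for y in ys:
        # 1. propagate particles through the dynamics (importance sampling)
        parts = [0.9 * p + 1.0 + random.gauss(0, q) for p in parts]
        # 2. weight by the observation likelihood
        ws = [math.exp(-0.5 * ((y - p) / r) ** 2) for p in parts]
        s = sum(ws)
        ws = [w / s for w in ws]
        ests.append(sum(w * p for w, p in zip(ws, parts)))  # posterior mean
        # 3. resample (the "importance resampling" step)
        parts = random.choices(parts, weights=ws, k=N)
    return ests

xs, ys = simulate()
est = sir_filter(ys)
```

The heuristics in the article modify step 3 (batching measurements, or replacing sampling with a MAP computation), but the propagate/weight/resample skeleton is the same.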
Structural optimization for nonlinear dynamic response.
Dou, Suguang; Strachan, B Scott; Shaw, Steven W; Jensen, Jakob S
2015-09-28
Much is known about the nonlinear resonant response of mechanical systems, but methods for the systematic design of structures that optimize aspects of these responses have received little attention. Progress in this area is particularly important in the area of micro-systems, where nonlinear resonant behaviour is being used for a variety of applications in sensing and signal conditioning. In this work, we describe a computational method that provides a systematic means for manipulating and optimizing features of nonlinear resonant responses of mechanical structures that are described by a single vibrating mode, or by a pair of internally resonant modes. The approach combines techniques from nonlinear dynamics, computational mechanics and optimization, and it allows one to relate the geometric and material properties of structural elements to terms in the normal form for a given resonance condition, thereby providing a means for tailoring its nonlinear response. The method is applied to the fundamental nonlinear resonance of a clamped-clamped beam and to the coupled mode response of a frame structure, and the results show that one can modify essential normal form coefficients by an order of magnitude by relatively simple changes in the shape of these elements. We expect the proposed approach, and its extensions, to be useful for the design of systems used for fundamental studies of nonlinear behaviour as well as for the development of commercial devices that exploit nonlinear behaviour.
Large scale traffic simulations
Nagel, K.; Barrett, C.L. |; Rickert, M. |
1997-04-01
Large scale microscopic (i.e. vehicle-based) traffic simulations pose high demands on computational speed in at least two application areas: (i) real-time traffic forecasting, and (ii) long-term planning applications (where repeated "looping" between the microsimulation and the simulated planning of individual persons' behavior is necessary). As a rough number, a real-time simulation of an area such as Los Angeles (ca. 1 million travellers) will need a computational speed much higher than 1 million "particle" (= vehicle) updates per second. This paper reviews how this problem is approached in different projects and how these approaches depend both on the specific questions and on the prospective user community. The approaches range from highly parallel and vectorizable, single-bit implementations on parallel supercomputers for Statistical Physics questions, via more realistic implementations on coupled workstations, to more complicated driving dynamics implemented again on parallel supercomputers. 45 refs., 9 figs., 1 tab.
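A minimal vehicle-based microsimulation in the spirit of the Nagel-Schreckenberg cellular automaton rules (road length, car count, and parameters below are illustrative) looks like:

```python
import random

random.seed(1)
ROAD, CARS, VMAX, P_SLOW = 100, 20, 5, 0.3

pos = sorted(random.sample(range(ROAD), CARS))  # distinct cells on a ring road
vel = [0] * CARS

def step(pos, vel):
    n = len(pos)
    # gap to the car ahead, computed from the OLD positions (parallel update)
    gaps = [(pos[(i + 1) % n] - pos[i]) % ROAD for i in range(n)]
    new_pos, new_vel = [], []
    for i in range(n):
        v = min(vel[i] + 1, VMAX)        # 1. accelerate
        v = min(v, gaps[i] - 1)          # 2. brake to avoid the leader
        if v > 0 and random.random() < P_SLOW:
            v -= 1                       # 3. random slowdown
        new_vel.append(v)
        new_pos.append((pos[i] + v) % ROAD)  # 4. move
    return new_pos, new_vel

for _ in range(200):
    pos, vel = step(pos, vel)
```

Because a car never advances past the gap to its leader, the parallel update is collision-free by construction; the simplicity of the state (one cell, one small integer velocity per car) is what makes the single-bit, vectorized implementations mentioned above possible.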
Large scale tracking algorithms
Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry
2015-01-01
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
Social Emotional Optimization Algorithm for Nonlinear Constrained Optimization Problems
NASA Astrophysics Data System (ADS)
Xu, Yuechun; Cui, Zhihua; Zeng, Jianchao
The nonlinear programming problem is an important branch of operational research and has been successfully applied to various real-life problems. In this paper, a new approach called the social emotional optimization algorithm (SEOA) is used to solve this problem; it is a new swarm intelligence technique that simulates human behavior guided by emotion. Simulation results show that the social emotional optimization algorithm proposed in this paper is effective and efficient for nonlinear constrained programming problems.
Nonlinear optimization for stochastic simulations.
Johnson, Michael M.; Yoshimura, Ann S.; Hough, Patricia Diane; Ammerlahn, Heidi R.
2003-12-01
This report describes research targeting development of stochastic optimization algorithms and their application to mission-critical optimization problems in which uncertainty arises. The first section of this report covers the enhancement of the Trust Region Parallel Direct Search (TRPDS) algorithm to address stochastic responses and the incorporation of the algorithm into the OPT++ optimization library. The second section describes the Weapons of Mass Destruction Decision Analysis Center (WMD-DAC) suite of systems analysis tools and motivates the use of stochastic optimization techniques in such non-deterministic simulations. The third section details a batch programming interface designed to facilitate criteria-based or algorithm-driven execution of system-of-system simulations. The fourth section outlines the use of the enhanced OPT++ library and batch execution mechanism to perform systems analysis and technology trade-off studies in the WMD detection and response problem domain.
New Methods for Nonlinear Optimization.
1988-05-11
Only fragments of the scanned report survive. Recoverable content: joint work with Gerald Shultz of Metropolitan State College in Denver, published in SIAM Journal on Numerical Analysis; the method has been implemented with the aid of Emmanuel ...; material to appear in Handbooks in Operations Research and Management Science, Vol. 1, Optimization, G.L. Nemhauser, A.H.G. Rinnooy Kan, and M.J. Todd, eds.
Particle swarm optimization for complex nonlinear optimization problems
NASA Astrophysics Data System (ADS)
Alexandridis, Alex; Famelis, Ioannis Th.; Tsitouras, Charalambos
2016-06-01
This work presents the application of a technique belonging to evolutionary computation, namely particle swarm optimization (PSO), to complex nonlinear optimization problems. Specifically, a PSO optimizer is set up and applied to the derivation of Runge-Kutta pairs for the numerical solution of initial value problems. The effect of critical PSO operational parameters on the performance of the proposed scheme is thoroughly investigated.
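The abstract names canonical global-best PSO but gives no implementation detail; a minimal sketch of such an optimizer (inertia `w` and acceleration coefficients `c1`, `c2` are generic textbook defaults, not the paper's tuned values, and a simple sphere objective below stands in for the Runge-Kutta-pair objective) might look like:

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO minimizer (a sketch; coefficients are
    typical defaults, not the tuned values from the paper)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))   # positions
    v = np.zeros_like(x)                          # velocities
    pbest = x.copy()                              # personal bests
    pbest_f = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()          # global best
    g_f = pbest_f.min()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        if fx.min() < g_f:
            g, g_f = x[np.argmin(fx)].copy(), fx.min()
    return g, g_f
```

On a smooth test objective the swarm contracts onto the minimizer within a few hundred iterations; for the paper's problem one would replace `f` with the truncation-error measure of a candidate Runge-Kutta pair.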
Gain optimization with non-linear controls
NASA Technical Reports Server (NTRS)
Slater, G. L.; Kandadai, R. D.
1984-01-01
An algorithm has been developed for the analysis and design of controls for non-linear systems. The technical approach is to use statistical linearization to model the non-linear dynamics of a system by a quasi-Gaussian model. A covariance analysis is performed to determine the behavior of the dynamical system and a quadratic cost function. Expressions for the cost function and its derivatives are determined so that numerical optimization techniques can be applied to determine optimal feedback laws. The primary application in this paper is the design of controls for nominally linear systems in which the controls are saturated or limited by fixed constraints. The analysis is general, however, and numerical computation requires only that the specific non-linearity be considered in the analysis.
Optimal singular control for nonlinear semistabilisation
NASA Astrophysics Data System (ADS)
L'Afflitto, Andrea; Haddad, Wassim M.
2016-06-01
The singular optimal control problem for asymptotic stabilisation has been extensively studied in the literature. In this paper, the optimal singular control problem is extended to address a weaker version of closed-loop stability, namely semistability, which is of paramount importance for consensus control of network dynamical systems. Three approaches are presented to address the nonlinear semistable singular control problem. First, a singular perturbation method is presented to construct a state-feedback singular controller that guarantees closed-loop semistability for nonlinear systems. In this approach, we show that for a non-negative cost-to-go function the minimum cost of a nonlinear semistabilising singular controller is lower than the minimum cost of a singular controller that guarantees asymptotic stability of the closed-loop system. In the second approach, we solve the nonlinear semistable singular control problem by using the cost-to-go function to cancel the singularities in the corresponding Hamilton-Jacobi-Bellman equation. For this case, we show that the minimum value of the singular performance measure is zero. Finally, we provide a framework based on the concepts of state-feedback linearisation and feedback equivalence to solve the singular control problem for semistabilisation of nonlinear dynamical systems. For this approach, we also show that the minimum value of the singular performance measure is zero. Three numerical examples are presented to demonstrate the efficacy of the proposed singular semistabilisation frameworks.
Optimized spectral estimation for nonlinear synchronizing systems
NASA Astrophysics Data System (ADS)
Sommerlade, Linda; Mader, Malenka; Mader, Wolfgang; Timmer, Jens; Thiel, Marco; Grebogi, Celso; Schelter, Björn
2014-03-01
In many fields of research nonlinear dynamical systems are investigated. When more than one process is measured, besides the distinct properties of the individual processes, their interactions are of interest. Often linear methods such as coherence are used for the analysis. The estimation of coherence can lead to false conclusions when applied without fulfilling several key assumptions. We introduce a data driven method to optimize the choice of the parameters for spectral estimation. Its applicability is demonstrated based on analytical calculations and exemplified in a simulation study. We complete our investigation with an application to nonlinear tremor signals in Parkinson's disease. In particular, we analyze electroencephalogram and electromyogram data.
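The abstract's point is that coherence estimates depend strongly on spectral-estimation parameters; a bare-bones Welch-style estimator (rectangular window, no overlap; `nseg`, the number of averaged segments, is the kind of parameter whose choice the paper's data-driven method would optimize) can illustrate the quantity involved:

```python
import numpy as np

def coherence(x, y, nseg=8):
    """Segment-averaged magnitude-squared coherence (sketch only:
    rectangular window, no overlap). Values near 1 indicate a strong
    linear relation at that frequency bin."""
    n = len(x) // nseg
    Sxx = Syy = Sxy = 0
    for k in range(nseg):
        X = np.fft.rfft(x[k * n:(k + 1) * n])
        Y = np.fft.rfft(y[k * n:(k + 1) * n])
        Sxx = Sxx + np.abs(X) ** 2          # auto-spectra, accumulated
        Syy = Syy + np.abs(Y) ** 2
        Sxy = Sxy + X * np.conj(Y)          # cross-spectrum
    return np.abs(Sxy) ** 2 / (Sxx * Syy)
```

Two noisy signals sharing a sinusoid show coherence near 1 at the shared frequency and low values elsewhere; with too few segments the estimate is biased toward 1 everywhere, which is exactly the pitfall the abstract warns about.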
Sensitivity technologies for large scale simulation.
Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias; Wilcox, Lucas C.; Hill, Judith C.; Ghattas, Omar; Berggren, Martin Olof; Akcelik, Volkan; Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard
2005-01-01
order approximation of the Euler equations and used as a preconditioner. In comparison to other methods, the AD preconditioner showed better convergence behavior. Our ultimate target is to perform shape optimization and hp adaptivity using adjoint formulations in the Premo compressible fluid flow simulator. A mathematical formulation for mixed-level simulation algorithms has been developed where different physics interact at potentially different spatial resolutions in a single domain. To minimize the implementation effort, explicit solution methods can be considered; however, implicit methods are preferred if computational efficiency is of high priority. We present the use of a partial elimination nonlinear solver technique to solve these mixed-level problems and show how these formulations are closely coupled to intrusive optimization approaches and sensitivity analyses. Production codes are typically not designed for sensitivity analysis or large-scale optimization. The implementation of our optimization libraries into multiple production simulation codes, in which each code has its own linear algebra interface, becomes an intractable problem. In an attempt to streamline this task, we have developed a standard interface between the numerical algorithm (such as optimization) and the underlying linear algebra. These interfaces (TSFCore and TSFCoreNonlin) have been adopted by the Trilinos framework, and the goal is to promote the use of these interfaces especially with new developments. Finally, an adjoint-based a posteriori error estimator has been developed for discontinuous Galerkin discretization of Poisson's equation. The goal is to investigate other ways to leverage the adjoint calculations, and we show how the convergence of the forward problem can be improved by adapting the grid using adjoint-based error estimates. Error estimation is usually conducted with continuous adjoints, but if discrete adjoints are available it may be possible to reuse the discrete version.
Nonlinear Brightness Optimization in Compton Scattering
Hartemann, Fred V.; Wu, Sheldon S. Q.
2013-07-26
In Compton scattering light sources, a laser pulse is scattered by a relativistic electron beam to generate tunable x and gamma rays. Because of the inhomogeneous nature of the incident radiation, the relativistic Lorentz boost of the electrons is modulated by the ponderomotive force during the interaction, leading to intrinsic spectral broadening and brightness limitations. We discuss these effects, along with an optimization strategy to properly balance the laser bandwidth, diffraction, and nonlinear ponderomotive force.
Nonlinear Brightness Optimization in Compton Scattering
NASA Astrophysics Data System (ADS)
Hartemann, Fred V.; Wu, Sheldon S. Q.
2013-07-01
In Compton scattering light sources, a laser pulse is scattered by a relativistic electron beam to generate tunable x and gamma rays. Because of the inhomogeneous nature of the incident radiation, the relativistic Lorentz boost of the electrons is modulated by the ponderomotive force during the interaction, leading to intrinsic spectral broadening and brightness limitations. These effects are discussed, along with an optimization strategy to properly balance the laser bandwidth, diffraction, and nonlinear ponderomotive force.
Large-scale instabilities of helical flows
NASA Astrophysics Data System (ADS)
Cameron, Alexandre; Alexakis, Alexandros; Brachet, Marc-Étienne
2016-10-01
Large-scale hydrodynamic instabilities of periodic helical flows of a given wave number K are investigated using three-dimensional Floquet numerical computations. In the Floquet formalism the unstable field is expanded in modes of different spatial periodicity. This allows us (i) to clearly distinguish large from small scale instabilities and (ii) to study modes of wave number q of arbitrarily large-scale separation q ≪ K. Different flows are examined, including flows that exhibit small-scale turbulence. The growth rate σ of the most unstable mode is measured as a function of the scale separation q/K ≪ 1 and the Reynolds number Re. It is shown that the growth rate follows the scaling σ ∝ q if an AKA effect [Frisch et al., Physica D: Nonlinear Phenomena 28, 382 (1987), 10.1016/0167-2789(87)90026-1] is present, or a negative eddy viscosity scaling σ ∝ q² in its absence. This holds both for the Re ≪ 1 regime, where previously derived asymptotic results are verified, and for Re = O(1), which is beyond their range of validity. Furthermore, for values of Re above a critical value ReSc beyond which small-scale instabilities are present, the growth rate becomes independent of q and the energy of the perturbation at large scales decreases with scale separation. The nonlinear behavior of these large-scale instabilities is also examined in the nonlinear regime, where the largest scales of the system are found to be the most dominant energetically. These results are interpreted by low-order models.
A class of finite dimensional optimal nonlinear estimators
NASA Technical Reports Server (NTRS)
Marcus, S. I.; Willsky, A. S.
1974-01-01
Finite dimensional optimal nonlinear state estimators are derived for bilinear systems evolving on nilpotent and solvable Lie groups. These results are extended to other classes of systems involving polynomial nonlinearities. The concepts of exact differentials and path-independent integrals are used to derive optimal finite dimensional estimators for a further class of nonlinear systems.
Nonlinear simulations to optimize magnetic nanoparticle hyperthermia
Reeves, Daniel B.; Weaver, John B.
2014-03-10
Magnetic nanoparticle hyperthermia is an attractive emerging cancer treatment, but the acting microscopic energy deposition mechanisms are not well understood and optimization suffers. We describe several approximate forms for the characteristic time of Néel rotations with varying properties and external influences. We then present stochastic simulations that show agreement between the approximate expressions and the micromagnetic model. The simulations show nonlinear imaginary responses and associated relaxational hysteresis due to the field and frequency dependencies of the magnetization. This suggests that efficient heating is possible by matching fields to particles instead of resorting to maximizing the power of the applied magnetic fields.
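For orientation, the characteristic time of Néel rotations referred to above is, in its simplest zero-field Arrhenius form, τ = τ₀·exp(KV/k_BT); the paper discusses refined approximations with field and frequency dependence. A sketch with illustrative parameter values (magnetite-like anisotropy and particle size, not the paper's fitted values):

```python
import math

def neel_time(K, V, T, tau0=1e-9):
    """Zero-field Arrhenius form of the Neel relaxation time:
    tau0 * exp(K*V / (kB*T)). K is the anisotropy constant [J/m^3],
    V the particle volume [m^3], T the temperature [K]; tau0 ~ 1 ns
    is a conventional attempt time."""
    kB = 1.380649e-23          # Boltzmann constant [J/K]
    return tau0 * math.exp(K * V / (kB * T))
```

The exponential dependence on volume is the key point: a few nanometres of diameter change shifts the relaxation time by orders of magnitude, which is why matching fields to particles (rather than simply raising field power) can improve heating efficiency.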
Nonlinear optimization simplified by hypersurface deformation
Stillinger, F.H.; Weber, T.A.
1988-09-01
A general strategy is advanced for simplifying nonlinear optimization problems, the ant-lion method. This approach exploits shape modifications of the cost-function hypersurface which distend basins surrounding low-lying minima (including global minima). By intertwining hypersurface deformations with steepest-descent displacements, the search is concentrated on a small relevant subset of all minima. Specific calculations demonstrating the value of this method are reported for the partitioning of two classes of irregular but nonrandom graphs, the prime-factor graphs and the pi graphs. We also indicate how this approach can be applied to the traveling salesman problem and to design layout optimization, and that it may be useful in combination with simulated annealing strategies.
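The deform-then-descend structure can be illustrated with a coarse-to-fine smoothing deformation. Note this Gaussian smoothing is our stand-in, not Stillinger and Weber's actual ant-lion deformation, but it likewise distends the basin surrounding the deep minimum so that steepest descent escapes a shallow well it would otherwise be trapped in:

```python
import numpy as np

def smooth(fvals, sigma_pts):
    """Gaussian-smooth a sampled cost curve (edge-padded convolution)."""
    if sigma_pts <= 0:
        return fvals
    r = int(4 * sigma_pts)
    k = np.exp(-0.5 * (np.arange(-r, r + 1) / sigma_pts) ** 2)
    k /= k.sum()
    return np.convolve(np.pad(fvals, r, mode="edge"), k, mode="valid")

def deform_and_descend(f, lo, hi, x0, sigmas=(1.0, 0.5, 0.2, 0.0), n=2001):
    """Alternate hypersurface deformation with steepest descent, coarse to
    fine, on a 1-D grid. Each stage descends on a less-smoothed copy of the
    cost, warm-starting from the previous stage's minimum."""
    xs = np.linspace(lo, hi, n)
    dx = xs[1] - xs[0]
    i = int(np.argmin(np.abs(xs - x0)))
    for s in sigmas:
        g = smooth(f(xs), s / dx)
        while True:  # grid steepest descent to the nearest local minimum
            if i + 1 < n and g[i + 1] < g[i]:
                i += 1
            elif i > 0 and g[i - 1] < g[i]:
                i -= 1
            else:
                break
    return xs[i]
```

On the tilted double well (x² − 1)² + 0.3x, plain descent started near the shallow minimum at x ≈ 0.96 stays there, while the deformed sequence funnels the search into the global minimum near x ≈ −1.03.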
Galaxy clustering on large scales.
Efstathiou, G
1993-01-01
I describe some recent observations of large-scale structure in the galaxy distribution. The best constraints come from two-dimensional galaxy surveys and studies of angular correlation functions. Results from galaxy redshift surveys are much less precise but are consistent with the angular correlations, provided the distortions in mapping between real-space and redshift-space are relatively weak. The galaxy two-point correlation function, rich-cluster two-point correlation function, and galaxy-cluster cross-correlation function are all well described on large scales (≳ 20 h⁻¹ Mpc, where the Hubble constant H₀ = 100h km s⁻¹ Mpc⁻¹; 1 pc = 3.09 × 10¹⁶ m) by the power spectrum of an initially scale-invariant, adiabatic, cold-dark-matter Universe with Γ = Ωh ≈ 0.2. I discuss how this fits in with the Cosmic Background Explorer (COBE) satellite detection of large-scale anisotropies in the microwave background radiation and other measures of large-scale structure in the Universe. PMID:11607400
Li, Ji-Qing; Zhang, Yu-Shan; Ji, Chang-Ming; Wang, Ai-Jing; Lund, Jay R
2013-01-01
This paper examines long-term optimal operation using dynamic programming for a large hydropower system of 10 reservoirs in Northeast China. Besides considering flow and hydraulic head, the optimization explicitly includes time-varying electricity market prices to maximize benefit. Two techniques are used to reduce the 'curse of dimensionality' of dynamic programming with many reservoirs. Discrete differential dynamic programming (DDDP) reduces the search space and computer memory needed. Object-oriented programming (OOP) and the ability to dynamically allocate and release memory with the C++ language greatly reduces the cumulative effect of computer memory for solving multi-dimensional dynamic programming models. The case study shows that the model can reduce the 'curse of dimensionality' and achieve satisfactory results.
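A single-reservoir sketch shows the DDDP corridor idea (the paper's model has 10 reservoirs, head effects, and time-varying prices; here a concave √release benefit stands in for the energy benefit). Each pass runs exact DP only over storages within ±δ of the incumbent trajectory, then shrinks δ, so the state grid per stage stays tiny regardless of how finely storage could be discretized:

```python
import numpy as np

def dddp(inflow, traj0, s_cap, delta0=10.0, shrink=0.7, iters=25):
    """Discrete differential dynamic programming for one reservoir (sketch).
    State: storage s_t; release r_t = s_t + q_t - s_{t+1}; benefit sqrt(r_t)
    stands in for a head/price-dependent energy benefit. Endpoints of the
    trajectory are held fixed."""
    T = len(inflow)
    traj = np.asarray(traj0, float)        # storages s_0 .. s_T
    offsets = np.array([-1.0, 0.0, 1.0])
    delta = delta0
    for _ in range(iters):
        cand = np.clip(traj[:, None] + delta * offsets[None, :], 0.0, s_cap)
        cand[0, :], cand[-1, :] = traj[0], traj[-1]      # ends pinned
        V = np.zeros(3)                     # value-to-go at the final stage
        choice = np.zeros((T, 3), dtype=int)
        for t in range(T - 1, -1, -1):      # backward DP over the corridor
            Vt = np.full(3, -np.inf)
            for i in range(3):
                for j in range(3):
                    r = cand[t, i] + inflow[t] - cand[t + 1, j]
                    if r >= 0.0 and np.isfinite(V[j]):
                        val = np.sqrt(r) + V[j]
                        if val > Vt[i]:
                            Vt[i], choice[t, i] = val, j
            V = Vt
        i = 1                               # recover the improved trajectory
        new = [cand[0, 1]]
        for t in range(T):
            i = choice[t, i]
            new.append(cand[t + 1, i])
        traj = np.array(new)
        delta *= shrink
    return traj
```

Because the incumbent trajectory is always inside its own corridor, the benefit is non-decreasing from pass to pass; with the concave benefit the iterations move the releases toward equalization.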
Schwarz, Christopher G; Senjem, Matthew L; Gunter, Jeffrey L; Tosakulwong, Nirubol; Weigand, Stephen D; Kemp, Bradley J; Spychalla, Anthony J; Vemuri, Prashanthi; Petersen, Ronald C; Lowe, Val J; Jack, Clifford R
2017-01-01
Quantitative measurements of change in β-amyloid load from Positron Emission Tomography (PET) images play a critical role in clinical trials and longitudinal observational studies of Alzheimer's disease. These measurements are strongly affected by methodological differences between implementations, including choice of reference region and use of partial volume correction, but there is a lack of consensus for an optimal method. Previous works have examined some relevant variables under varying criteria, but interactions between them prevent choosing a method via combined meta-analysis. In this work, we present a thorough comparison of methods to measure change in β-amyloid over time using Pittsburgh Compound B (PiB) PET imaging.
Economically viable large-scale hydrogen liquefaction
NASA Astrophysics Data System (ADS)
Cardella, U.; Decker, L.; Klein, H.
2017-02-01
The liquid hydrogen demand, particularly driven by clean energy applications, will rise in the near future. As industrial large scale liquefiers will play a major role within the hydrogen supply chain, production capacity will have to increase by a multiple of today’s typical sizes. The main goal is to reduce the total cost of ownership for these plants by increasing energy efficiency with innovative and simple process designs, optimized in capital expenditure. New concepts must ensure a manageable plant complexity and flexible operability. In the phase of process development and selection, a dimensioning of key equipment for large scale liquefiers, such as turbines and compressors as well as heat exchangers, must be performed iteratively to ensure technological feasibility and maturity. Further critical aspects related to hydrogen liquefaction, e.g. fluid properties, ortho-para hydrogen conversion, and coldbox configuration, must be analysed in detail. This paper provides an overview on the approach, challenges and preliminary results in the development of efficient as well as economically viable concepts for large-scale hydrogen liquefaction.
Constrained optimization for image restoration using nonlinear programming
NASA Technical Reports Server (NTRS)
Yeh, C.-L.; Chin, R. T.
1985-01-01
The constrained optimization problem for image restoration, utilizing incomplete information and partial constraints, is formulated using nonlinear programming techniques. This method restores a distorted image by optimizing a chosen object function subject to available constraints. The penalty function method of nonlinear programming is used. Both the object function and the constraint functions may be linear or nonlinear. This formulation provides a generalized approach to solve constrained optimization problems for image restoration. Experiments using this scheme have been performed. The results are compared with those obtained from other restoration methods and the comparative study is presented.
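The penalty-function idea named in the abstract, stripped of the imaging context, looks like the following sketch (the quadratic toy objective and single linear constraint are ours; a restoration problem would supply image-fidelity and smoothness terms instead):

```python
import numpy as np

def penalty_minimize(f, cons, x0, mu0=1.0, growth=10.0,
                     outer=5, inner=2000, lr=1e-3):
    """Exterior quadratic penalty method (sketch). Inequality constraints
    c(x) <= 0 are penalized by mu * max(0, c(x))^2, and mu grows each outer
    pass so the iterates are driven toward feasibility. Gradients are taken
    numerically for simplicity."""
    def P(x, mu):
        return f(x) + mu * sum(max(0.0, c(x)) ** 2 for c in cons)

    def num_grad(x, mu, h=1e-6):
        g = np.zeros_like(x)
        for i in range(x.size):
            e = np.zeros_like(x)
            e[i] = h
            g[i] = (P(x + e, mu) - P(x - e, mu)) / (2 * h)
        return g

    x, mu = np.asarray(x0, float), mu0
    for _ in range(outer):
        step = lr / (1.0 + mu)      # damp steps as the penalty stiffens
        for _ in range(inner):
            x = x - step * num_grad(x, mu)
        mu *= growth
    return x
```

Minimizing (x₀−2)² + (x₁−2)² subject to x₀ + x₁ ≤ 2 converges toward the constrained optimum (1, 1) as the penalty weight grows; increasing μ too fast without damping the step size is the classic failure mode of this scheme.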
Nonlinearity Analysis and Parameters Optimization for an Inductive Angle Sensor
Ye, Lin; Yang, Ming; Xu, Liang; Zhuang, Xiaoqi; Dong, Zhaopeng; Li, Shiyang
2014-01-01
Using the finite element method (FEM) and particle swarm optimization (PSO), a nonlinearity analysis based on parameter optimization is proposed to design an inductive angle sensor. Due to the structure complexity of the sensor, understanding the influences of structure parameters on the nonlinearity errors is a critical step in designing an effective sensor. Key parameters are selected for the design based on the parameters' effects on the nonlinearity errors. The finite element method and particle swarm optimization are combined for the sensor design to get the minimal nonlinearity error. In the simulation, the nonlinearity error of the optimized sensor is 0.053% in the angle range from −60° to 60°. A prototype sensor is manufactured and measured experimentally, and the experimental nonlinearity error is 0.081% in the angle range from −60° to 60°. PMID:24590353
Matching trajectory optimization and nonlinear tracking control for HALE
NASA Astrophysics Data System (ADS)
Lee, Sangjong; Jang, Jieun; Ryu, Hyeok; Lee, Kyun Ho
2014-11-01
This paper concerns optimal trajectory generation and nonlinear tracking control for the stratospheric airship platform VIA-200. To compensate for the mismatch between the point-mass model of the optimal trajectory problem and the 6-DOF model of the nonlinear tracking problem, a new matching trajectory optimization approach is proposed. The proposed idea reduces the dissimilarity of both problems and reduces the uncertainties in the nonlinear equations of motion for the stratospheric airship. In addition, its refined optimal trajectories yield better results under jet stream conditions during flight. The resultant optimal trajectories of VIA-200 are full three-dimensional ascent flight trajectories reflecting the realistic constraints of flight conditions and airship performance with and without a jet stream. Finally, 6-DOF nonlinear equations of motion are derived, including a moving wind field, and the vectorial backstepping approach is applied. The tracking results demonstrate that the proposed matching optimization method enables a smooth linkage between trajectory optimization and the tracking control problem.
Colloquium: Large scale simulations on GPU clusters
NASA Astrophysics Data System (ADS)
Bernaschi, Massimo; Bisson, Mauro; Fatica, Massimiliano
2015-06-01
Graphics processing units (GPU) are currently used as a cost-effective platform for computer simulations and big-data processing. Large scale applications require that multiple GPUs work together, but the efficiency obtained with clusters of GPUs is, at times, sub-optimal because the GPU features are not exploited at their best. We describe how it is possible to achieve an excellent efficiency for applications in statistical mechanics, particle dynamics and network analysis by using suitable memory access patterns and mechanisms like CUDA streams, profiling tools, etc. Similar concepts and techniques may be applied also to other problems like the solution of Partial Differential Equations.
EINSTEIN'S SIGNATURE IN COSMOLOGICAL LARGE-SCALE STRUCTURE
Bruni, Marco; Hidalgo, Juan Carlos; Wands, David
2014-10-10
We show how the nonlinearity of general relativity generates a characteristic non-Gaussian signal in cosmological large-scale structure that we calculate at all perturbative orders in a large-scale limit. Newtonian gravity and general relativity provide complementary theoretical frameworks for modeling large-scale structure in ΛCDM cosmology; a relativistic approach is essential to determine initial conditions, which can then be used in Newtonian simulations studying the nonlinear evolution of the matter density. Most inflationary models in the very early universe predict an almost Gaussian distribution for the primordial metric perturbation, ζ. However, we argue that it is the Ricci curvature of comoving-orthogonal spatial hypersurfaces, R, that drives structure formation at large scales. We show how the nonlinear relation between the spatial curvature, R, and the metric perturbation, ζ, translates into a specific non-Gaussian contribution to the initial comoving matter density that we calculate for the simple case of an initially Gaussian ζ. Our analysis shows the nonlinear signature of Einstein's gravity in large-scale structure.
A method for nonlinear optimization with discrete design variables
NASA Technical Reports Server (NTRS)
Olsen, Gregory R.; Vanderplaats, Garret N.
1987-01-01
A numerical method is presented for the solution of nonlinear discrete optimization problems. The applicability of discrete optimization to engineering design is discussed, and several standard structural optimization problems are solved using discrete design variables. The method uses approximation techniques to create subproblems suitable for linear mixed-integer programming methods. The method employs existing software for continuous optimization and integer programming.
Optimal linear estimation under unknown nonlinear transform
Yi, Xinyang; Wang, Zhaoran; Caramanis, Constantine; Liu, Han
2016-01-01
Linear regression studies the problem of estimating a model parameter β* ∈ ℝ^p from n observations {(y_i, x_i)}_{i=1}^n from the linear model y_i = 〈x_i, β*〉 + ε_i. We consider a significant generalization in which the relationship between 〈x_i, β*〉 and y_i is noisy, quantized to a single bit, potentially nonlinear, noninvertible, as well as unknown. This model is known as the single-index model in statistics, and, among other things, it represents a significant generalization of one-bit compressed sensing. We propose a novel spectral-based estimation procedure and show that we can recover β* in settings (i.e., classes of link function f) where previous algorithms fail. In general, our algorithm requires only very mild restrictions on the (unknown) functional relationship between y_i and 〈x_i, β*〉. We also consider the high dimensional setting where β* is sparse, and introduce a two-stage nonconvex framework that addresses estimation challenges in high dimensional regimes where p ≫ n. For a broad class of link functions between 〈x_i, β*〉 and y_i, we establish minimax lower bounds that demonstrate the optimality of our estimators in both the classical and high dimensional regimes.
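For the special case of Gaussian designs and a link whose first moment does not vanish, the simplest estimator of this type is a first-moment average (a sketch of the idea only; the paper's spectral procedures also handle links where this moment vanishes, and the sparse high dimensional regime):

```python
import numpy as np

def index_direction(X, y):
    """First-moment estimate of the index direction: for Gaussian designs,
    Stein's identity gives E[y x] proportional to beta* for a broad class of
    links f, so averaging y_i * x_i recovers the direction of beta* without
    knowing f. (Sketch only; not the paper's full two-stage procedure.)"""
    b = X.T @ y / len(y)
    return b / np.linalg.norm(b)
```

With a one-bit link y = sign(〈x, β*〉 + ε) (an unknown, noninvertible link as far as the estimator is concerned), the estimated direction aligns closely with β* once n is moderately large relative to p.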
Grid sensitivity capability for large scale structures
NASA Technical Reports Server (NTRS)
Nagendra, Gopal K.; Wallerstein, David V.
1989-01-01
The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.
Guaranteed robustness properties of multivariable, nonlinear, stochastic optimal regulators
NASA Technical Reports Server (NTRS)
Tsitsiklis, J. N.; Athans, M.
1983-01-01
The robustness of optimal regulators for nonlinear, deterministic and stochastic, multi-input dynamical systems is studied under the assumption that all state variables can be measured. It is shown that, under mild assumptions, such nonlinear regulators have a guaranteed infinite gain margin; moreover, they have a guaranteed 50 percent gain reduction margin and a 60 degree phase margin, in each feedback channel, provided that the system is linear in the control and the penalty to the control is quadratic, thus extending the well-known properties of LQ regulators to nonlinear optimal designs. These results are also valid for infinite horizon, average cost, stochastic optimal control problems.
Limitations and tradeoffs in synchronization of large-scale networks with uncertain links
Diwadkar, Amit; Vaidya, Umesh
2016-01-01
The synchronization of nonlinear systems connected over large-scale networks has gained popularity in a variety of applications, such as power grids, sensor networks, and biology. Stochastic uncertainty in the interconnections is a ubiquitous phenomenon observed in these physical and biological networks. We provide a size-independent network sufficient condition for the synchronization of scalar nonlinear systems with stochastic linear interactions over large-scale networks. This sufficient condition, expressed in terms of nonlinear dynamics, the Laplacian eigenvalues of the nominal interconnections, and the variance and location of the stochastic uncertainty, allows us to define a synchronization margin. We provide an analytical characterization of important trade-offs between the internal nonlinear dynamics, network topology, and uncertainty in synchronization. For nearest neighbour networks, the existence of an optimal number of neighbours with a maximum synchronization margin is demonstrated. An analytical formula for the optimal gain that produces the maximum synchronization margin allows us to compare the synchronization properties of various complex network topologies. PMID:27067994
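One ingredient of that margin, the Laplacian eigenvalues of the nominal interconnection, is easy to compute for the nearest-neighbour topology mentioned above (the margin formula itself is in the paper; this sketch only shows how algebraic connectivity grows with neighbourhood size):

```python
import numpy as np

def ring_laplacian(n, k):
    """Graph Laplacian of a ring of n nodes, each linked to its k nearest
    neighbours on either side (a nearest-neighbour network)."""
    A = np.zeros((n, n))
    for i in range(n):
        for d in range(1, k + 1):
            A[i, (i + d) % n] = A[i, (i - d) % n] = 1.0
    return np.diag(A.sum(axis=1)) - A

# lambda_2 (algebraic connectivity) grows with the number of neighbours k;
# it is one of the Laplacian quantities entering the synchronization margin
lam2 = {k: np.sort(np.linalg.eigvalsh(ring_laplacian(20, k)))[1]
        for k in (1, 2, 5)}
```

The smallest eigenvalue is always zero (the consensus direction), while λ₂ increases with k; the paper's point is that adding neighbours does not improve the margin indefinitely once link uncertainty is accounted for.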
Large-scale PACS implementation.
Carrino, J A; Unkel, P J; Miller, I D; Bowser, C L; Freckleton, M W; Johnson, T G
1998-08-01
The transition to filmless radiology is a much more formidable task than issuing the request for proposal to purchase a picture archiving and communications system (PACS). The Department of Defense and the Veterans Administration have been pioneers in the transformation of medical diagnostic imaging to the electronic environment. Many civilian sites are expected to implement large-scale PACS in the next five to ten years. This presentation will relate the empirical insights gleaned at our institution from a large-scale PACS implementation. Our PACS integration was introduced into a fully operational department (not a new hospital) in which work flow had to continue with minimal impact. Impediments to user acceptance will be addressed. The critical components of this enormous task will be discussed. The topics covered during this session will include issues such as phased implementation, DICOM (digital imaging and communications in medicine) standard-based interaction of devices, hospital information system (HIS)/radiology information system (RIS) interfaces, user approval, networking, workstation deployment, and backup procedures. The presentation will make specific suggestions regarding the implementation team, operating instructions, quality control (QC), training, and education. The concept of identifying key functional areas is relevant to transitioning the facility to be entirely on line. Special attention must be paid to specific functional areas, such as the operating rooms and trauma rooms, where the clinical requirements may not match the PACS capabilities. The printing of films may be necessary in certain circumstances. The integration of teleradiology and remote clinics into a PACS is a salient topic with respect to the overall role of the radiologists providing rapid consultation. A Web-based server allows a clinician to review images and reports on a desk-top (personal) computer and thus reduce the number of dedicated PACS review workstations. This session
On a Highly Nonlinear Self-Obstacle Optimal Control Problem
Di Donato, Daniela; Mugnai, Dimitri
2015-10-15
We consider a non-quadratic optimal control problem associated to a nonlinear elliptic variational inequality, where the obstacle is the control itself. We show that, for a fixed desired profile, there exists an optimal solution which is not far from it. Detailed characterizations of the optimal solution are given, also in terms of approximating problems.
Large-Scale Sequence Comparison.
Lal, Devi; Verma, Mansi
2017-01-01
There are millions of sequences deposited in genomic databases, and it is an important task to categorize them according to their structural and functional roles. Sequence comparison is a prerequisite for proper categorization of both DNA and protein sequences, and helps in assigning a putative or hypothetical structure and function to a given sequence. There are various methods available for comparing sequences, alignment being first and foremost, both for sequences with a small number of base pairs and for large-scale genome comparison. Various tools are available for performing pairwise comparison of large sequences. The best-known tools either perform global alignment or generate local alignments between the two sequences. In this chapter we first provide basic information regarding sequence comparison. This is followed by a description of the PAM and BLOSUM matrices that form the basis of sequence comparison. We also give a practical overview of currently available methods such as BLAST and FASTA, followed by a description and overview of tools available for genome comparison, including LAGAN, MUMmer, BLASTZ, and AVID.
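The global-alignment computation underlying such tools is the classic Needleman-Wunsch dynamic program; a minimal scoring sketch (the unit match/mismatch/gap scores are illustrative only, real tools use PAM or BLOSUM substitution matrices and affine gap penalties):

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    """Global alignment score of sequences a and b by dynamic programming.
    S[i][j] is the best score aligning the first i symbols of a with the
    first j symbols of b."""
    n, m = len(a), len(b)
    S = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        S[i][0] = i * gap              # align a[:i] against gaps
    for j in range(1, m + 1):
        S[0][j] = j * gap              # align b[:j] against gaps
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = S[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            S[i][j] = max(diag, S[i - 1][j] + gap, S[i][j - 1] + gap)
    return S[n][m]
```

This quadratic-time recurrence is why exact alignment does not scale to whole genomes directly; BLAST and FASTA gain speed through heuristic seeding, and MUMmer through suffix-tree anchoring.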
Large Scale Magnetostrictive Valve Actuator
NASA Technical Reports Server (NTRS)
Richard, James A.; Holleman, Elizabeth; Eddleman, David
2008-01-01
Marshall Space Flight Center's Valves, Actuators and Ducts Design and Development Branch developed a large scale magnetostrictive valve actuator. The potential advantages of this technology are faster, more efficient valve actuators that consume less power, provide precise position control, and deliver higher flow rates than conventional solenoid valves. Magnetostrictive materials change dimensions when a magnetic field is applied; this property is referred to as magnetostriction. Magnetostriction is caused by the alignment of the magnetic domains in the material's crystalline structure with the applied magnetic field lines. Typically, the material changes shape by elongating in the axial direction and constricting in the radial direction, resulting in no net change in volume. All hardware fabrication and testing are complete. This paper discusses the potential applications of the technology; gives an overview of the as-built actuator design; describes problems that were uncovered during development testing; reviews test data and evaluates weaknesses of the design; and identifies areas for improvement for future work. This actuator holds promise as a low power, high load, proportionally controlled actuator for valves requiring 440 to 1500 newtons load.
Large scale cluster computing workshop
Dane Skow; Alan Silverman
2002-12-23
Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near-term projects within High Energy Physics and other computing communities will deploy clusters on the scale of thousands of processors, used by hundreds to thousands of independent users. This will expand the reach in both dimensions by an order of magnitude from the current successful production facilities. The goals of this workshop were: (1) to determine what tools exist which can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes) and, by implication, to identify areas where some investment of money or effort is likely to be needed; (2) to compare and record experiences gained with such tools; (3) to produce a practical guide to all stages of planning, installing, building and operating a large computing cluster in HENP; and (4) to identify and connect groups with similar interests within HENP and the larger clustering community.
Lyapunov optimal feedback control of a nonlinear inverted pendulum
NASA Technical Reports Server (NTRS)
Grantham, W. J.; Anderson, M. J.
1989-01-01
Lyapunov optimal feedback control is applied to a nonlinear inverted pendulum in which the control torque is constrained to be less than the nonlinear gravity torque in the model. This necessitates a control algorithm which 'rocks' the pendulum out of its potential wells in order to stabilize it at a unique vertical position. Simulation results indicate that a preliminary Lyapunov feedback controller can successfully overcome the nonlinearity and bring almost all trajectories to the target.
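The abstract's idea can be illustrated with a hedged sketch: an energy-pumping "rocking" controller for a torque-limited pendulum. This is a common stand-in for a Lyapunov-optimal law (it drives V = (E - Ed)^2/2 downward), not the authors' exact algorithm, and all parameter values are illustrative.

```python
import math

def rock_up(m=1.0, L=1.0, g=9.81, dt=1e-3, T=20.0):
    """Swing a torque-limited pendulum up by pumping energy.

    theta is measured from the hanging position, so the upright
    equilibrium is theta = pi with energy Ed = 2*m*g*L.  The torque
    bound umax is set below m*g*L, so gravity cannot be overcome
    directly: the controller must 'rock' the pendulum out of its
    potential well over several swings.
    """
    def sgn(z):
        return (z > 0) - (z < 0)

    J = m * L * L
    umax = 0.5 * m * g * L            # deliberately less than m*g*L
    Ed = 2.0 * m * g * L              # energy of the upright state
    th, w = 0.0, 0.5                  # start hanging, with a small push
    for _ in range(int(T / dt)):
        E = 0.5 * J * w * w + m * g * L * (1.0 - math.cos(th))
        # choose u so the control power u*w adds energy while E < Ed
        # and removes it when E > Ed; this makes V = (E - Ed)^2 / 2
        # non-increasing whenever w != 0
        u = umax * sgn(w) * sgn(Ed - E)
        w += dt * (-(g / L) * math.sin(th) + u / J)   # semi-implicit Euler
        th += dt * w
    E = 0.5 * J * w * w + m * g * L * (1.0 - math.cos(th))
    return E, Ed
```

After a few seconds of rocking, the pendulum's energy is pinned near the separatrix value of the upright equilibrium, which is what lets it escape the potential well despite the torque deficit.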
Large-scale Intelligent Transportation Systems simulation
Ewing, T.; Canfield, T.; Hannebutte, U.; Levine, D.; Tentner, A.
1995-06-01
A prototype computer system has been developed which defines a high-level architecture for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS) capable of running on massively parallel computers and distributed (networked) computer systems. The prototype includes the modeling of instrumented "smart" vehicles with in-vehicle navigation units capable of optimal route planning, and of Traffic Management Centers (TMC). The TMC has probe vehicle tracking capabilities (displaying the position and attributes of instrumented vehicles) and can provide two-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces to support human-factors studies. The prototype has been developed on a distributed system of networked UNIX computers but is designed to run on ANL's IBM SP-X parallel computer system for large-scale problems. A novel feature of our design is that vehicles are represented by autonomous computer processes, each with a behavior model which performs independent route selection and reacts to external traffic events much like real vehicles. With this approach, one will be able to take advantage of emerging massively parallel processor (MPP) systems.
Large-Scale Information Systems
D. M. Nicol; H. R. Ammerlahn; M. E. Goldsby; M. M. Johnson; D. E. Rhodes; A. S. Yoshimura
2000-12-01
Large enterprises are ever more dependent on their Large-Scale Information Systems (LSIS), computer systems that are distinguished architecturally by distributed components: data sources, networks, computing engines, simulations, human-in-the-loop control and remote access stations. These systems provide such capabilities as workflow, data fusion and distributed database access. The Nuclear Weapons Complex (NWC) contains many examples of LSIS components, a fact that motivates this research. However, most LSIS in use grew from collections of separate subsystems that were not designed to be components of an integrated system. For this reason, they are often difficult to analyze and control. The problem is made more difficult by the size of a typical system, its diversity of information sources, and the institutional complexities associated with its geographic distribution across the enterprise. Moreover, there is no integrated approach for analyzing or managing such systems. Indeed, integrated development of LSIS is an active area of academic research. This work developed such an approach by simulating the various components of the LSIS and allowing the simulated components to interact with real LSIS subsystems. This research demonstrated two benefits. First, applying it to a particular LSIS provided a thorough understanding of the interfaces between the system's components. Second, it demonstrated how more rapid and detailed answers could be obtained to questions significant to the enterprise by interacting with the relevant LSIS subsystems through simulated components designed with those questions in mind. In a final, added phase of the project, investigations were made on extending this research to wireless communication networks in support of telemetry applications.
Aircraft nonlinear optimal control using fuzzy gain scheduling
NASA Astrophysics Data System (ADS)
Nusyirwan, I. F.; Kung, Z. Y.
2016-10-01
Fuzzy gain scheduling is a common solution for nonlinear flight control. The highly nonlinear region of flight dynamics is determined through examination of eigenvalues and the irregular pattern of root locus plots that shows the nonlinear characteristic. Using optimal control for command tracking, the pitch-rate stability augmentation system is constructed and the longitudinal flight control system is established. The outputs of the optimal control for 21 linear systems are fed into the fuzzy gain scheduler. This research explores the capability of using both optimal control and fuzzy gain scheduling to improve the efficiency of finding the optimal control gains and to achieve Level 1 flying qualities. Numerical simulation work is carried out to determine the effectiveness and performance of the entire flight control system. The simulation results show that the fuzzy gain scheduling technique is able to perform in real time to find a near-optimal control law in various flying conditions.
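The scheduler described above blends gains designed at discrete operating points. A minimal sketch (the gain table, scheduling variable, and design points below are hypothetical, not the authors' data): with triangular membership functions and weighted-average defuzzification, fuzzy gain scheduling reduces to piecewise-linear interpolation between the point designs.

```python
def memberships(x, points):
    """Triangular membership degrees of x in fuzzy sets centered at each
    design point; at most two are nonzero and they sum to one."""
    pts = sorted(points)
    if x <= pts[0]:
        return [1.0] + [0.0] * (len(pts) - 1)
    if x >= pts[-1]:
        return [0.0] * (len(pts) - 1) + [1.0]
    mu = [0.0] * len(pts)
    for i in range(len(pts) - 1):
        lo, hi = pts[i], pts[i + 1]
        if lo <= x <= hi:
            mu[i] = (hi - x) / (hi - lo)      # closer to lo -> weight lo's gain
            mu[i + 1] = (x - lo) / (hi - lo)  # closer to hi -> weight hi's gain
            break
    return mu

def scheduled_gain(x, points, gains):
    """Weighted-average defuzzification of the point-design gains."""
    mu = memberships(x, points)
    return sum(m * k for m, k in zip(mu, gains)) / sum(mu)

# hypothetical pitch-rate feedback gains designed at three airspeeds (m/s)
speeds = [60.0, 90.0, 120.0]
Kq = [-2.0, -1.2, -0.8]
```

At a design point the scheduler returns that point's gain exactly; between points it blends the two neighbors, which is what gives the smooth transition between the 21 linear designs described in the abstract.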
NASA Astrophysics Data System (ADS)
Zou, Rui; Riverson, John; Liu, Yong; Murphy, Ryan; Sim, Youn
2015-03-01
Integrated continuous simulation-optimization models can be effective predictors of process-based responses for cost-benefit optimization of best management practice (BMP) selection and placement. However, practical application of simulation-optimization models is computationally prohibitive for large-scale systems. This study proposes an enhanced Nonlinearity Interval Mapping Scheme (NIMS) to solve large-scale watershed simulation-optimization problems several orders of magnitude faster than other commonly used algorithms. An efficient interval response coefficient (IRC) derivation method was incorporated into the NIMS framework to overcome a computational bottleneck. The proposed algorithm was evaluated using a case study watershed in the Los Angeles County Flood Control District. Using a continuous-simulation watershed/stream-transport model, Loading Simulation Program in C++ (LSPC), three nested in-stream compliance points (CPs), each with multiple Total Maximum Daily Load (TMDL) targets, were selected to derive optimal treatment levels for each of the 28 subwatersheds, so that the TMDL targets at all the CPs were met with the lowest possible BMP implementation cost. A Genetic Algorithm (GA) and NIMS were both applied and compared. The results showed that NIMS took 11 iterations (about 11 min) to complete, with the resulting optimal solution having a total cost of 67.2 million, while each of the multiple GA executions took 21-38 days to reach near-optimal solutions. The best solution obtained among all the GA executions had a minimized cost of 67.7 million, marginally higher but approximately equal to that of the NIMS solution. The results highlight the utility of the approach for decision making in large-scale watershed simulation-optimization formulations.
Large-scale tides in general relativity
NASA Astrophysics Data System (ADS)
Ip, Hiu Yan; Schmidt, Fabian
2017-02-01
Density perturbations in cosmology, i.e. spherically symmetric adiabatic perturbations of a Friedmann-Lemaître-Robertson-Walker (FLRW) spacetime, are locally exactly equivalent to a different FLRW solution, as long as their wavelength is much larger than the sound horizon of all fluid components. This fact is known as the "separate universe" paradigm. However, no such relation is known for anisotropic adiabatic perturbations, which correspond to an FLRW spacetime with large-scale tidal fields. Here, we provide a closed, fully relativistic set of evolutionary equations for the nonlinear evolution of such modes, based on the conformal Fermi coordinate (CFC) frame. We show explicitly that the tidal effects are encoded by the Weyl tensor, and are hence entirely different from an anisotropic Bianchi I spacetime, where the anisotropy is sourced by the Ricci tensor. In order to close the system, certain higher derivative terms have to be dropped. We show that this approximation is equivalent to the local tidal approximation of Hui and Bertschinger [1]. We also show that this very simple set of equations matches the exact evolution of the density field at second order, but fails at third and higher order. This provides a useful, easy-to-use framework for computing the fully relativistic growth of structure at second order.
Supporting large-scale computational science
Musick, R., LLNL
1998-02-19
Business needs have driven the development of commercial database systems since their inception. As a result, there has been a strong focus on supporting many users, minimizing the potential corruption or loss of data, and maximizing performance metrics like transactions per second, or TPC-C and TPC-D results. It turns out that these optimizations have little to do with the needs of the scientific community, and in particular have little impact on improving the management and use of large-scale high-dimensional data. At the same time, there is an unanswered need in the scientific community for many of the benefits offered by a robust DBMS. For example, tying an ad hoc query language such as SQL together with a visualization toolkit would be a powerful enhancement to current capabilities. Unfortunately, there has been little emphasis or discussion in the VLDB community on this mismatch over the last decade. The goal of this paper is to identify the specific issues that need to be resolved before large-scale scientific applications can make use of DBMS products. This topic is addressed in the context of an evaluation of commercial DBMS technology applied to the exploration of data generated by the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). The paper describes the data being generated for ASCI as well as current capabilities for interacting with and exploring this data. The attraction of applying standard DBMS technology to this domain is discussed, as well as the technical and business issues that currently make this an infeasible solution.
Modulation analysis of large-scale discrete vortices.
Cisneros, Luis A; Minzoni, Antonmaria A; Panayotaros, Panayotis; Smyth, Noel F
2008-09-01
The behavior of large-scale vortices governed by the discrete nonlinear Schrödinger equation is studied. Using a discrete version of modulation theory, it is shown how vortices are trapped and stabilized by the self-consistent Peierls-Nabarro potential that they generate in the lattice. Large-scale circular and polygonal vortices are studied away from the anticontinuum limit, which is the limit considered in previous studies. In addition numerical studies are performed on large-scale, straight structures, and it is found that they are stabilized by a nonconstant mean level produced by standing waves generated at the ends of the structure. Finally, numerical evidence is produced for long-lived, localized, quasiperiodic structures.
A relativistic signature in large-scale structure
NASA Astrophysics Data System (ADS)
Bartolo, Nicola; Bertacca, Daniele; Bruni, Marco; Koyama, Kazuya; Maartens, Roy; Matarrese, Sabino; Sasaki, Misao; Verde, Licia; Wands, David
2016-09-01
In General Relativity, the constraint equation relating metric and density perturbations is inherently nonlinear, leading to an effective non-Gaussianity in the dark matter density field on large scales, even if the primordial metric perturbation is Gaussian. Intrinsic non-Gaussianity in the large-scale dark matter overdensity in GR is real and physical. However, the variance smoothed on a local physical scale is not correlated with the large-scale curvature perturbation, so that there is no relativistic signature in the galaxy bias when using the simplest model of bias. It is an open question whether the observable mass proxies such as luminosity or weak lensing correspond directly to the physical mass in the simple halo bias model. If not, there may be observables that encode this relativistic signature.
Optimization under uncertainty of parallel nonlinear energy sinks
NASA Astrophysics Data System (ADS)
Boroson, Ethan; Missoum, Samy; Mattei, Pierre-Olivier; Vergez, Christophe
2017-04-01
Nonlinear Energy Sinks (NESs) are a promising technique for passively reducing the amplitude of vibrations. Through nonlinear stiffness properties, a NES is able to passively and irreversibly absorb energy. Unlike the traditional Tuned Mass Damper (TMD), NESs do not require a specific tuning and absorb energy over a wider range of frequencies. Nevertheless, they are still only efficient over a limited range of excitations. In order to mitigate this limitation and maximize the efficiency range, this work investigates the optimization of multiple NESs configured in parallel. It is well known that the efficiency of a NES is extremely sensitive to small perturbations in loading conditions or design parameters. In fact, the efficiency of a NES has been shown to be nearly discontinuous in the neighborhood of its activation threshold. For this reason, uncertainties must be taken into account in the design optimization of NESs. In addition, the discontinuities require a specific treatment during the optimization process. In this work, the objective of the optimization is to maximize the expected value of the efficiency of NESs in parallel. The optimization algorithm is able to tackle design variables with uncertainty (e.g., nonlinear stiffness coefficients) as well as aleatory variables such as the initial velocity of the main system. The optimal design of several parallel NES configurations for maximum mean efficiency is investigated. Specifically, NES nonlinear stiffness properties, considered random design variables, are optimized for cases with 1, 2, 3, 4, 5, and 10 NESs in parallel. The distributions of efficiency for the optimal parallel configurations are compared to distributions of efficiencies of non-optimized NESs. It is observed that the optimization enables a sharp increase in the mean value of efficiency while reducing the corresponding variance, thus leading to more robust NES designs.
Lagrangian space consistency relation for large scale structure
Horn, Bart; Hui, Lam; Xiao, Xiao
2015-09-01
Consistency relations, which relate the squeezed limit of an (N+1)-point correlation function to an N-point function, are non-perturbative symmetry statements that hold even if the associated high momentum modes are deep in the nonlinear regime and astrophysically complex. Recently, Kehagias and Riotto and Peloso and Pietroni discovered a consistency relation applicable to large scale structure. We show that this can be recast into a simple physical statement in Lagrangian space: that the squeezed correlation function (suitably normalized) vanishes. This holds regardless of whether the correlation observables are at the same time or not, and regardless of whether multiple-streaming is present. The simplicity of this statement suggests that an analytic understanding of large scale structure in the nonlinear regime may be particularly promising in Lagrangian space.
Asynchronous parallel pattern search for nonlinear optimization
P. D. Hough; T. G. Kolda; V. J. Torczon
2000-01-01
Parallel pattern search (PPS) can be quite useful for engineering optimization problems characterized by a small number of variables (say 10-50) and by expensive objective function evaluations, such as complex simulations that take from minutes to hours to run. However, PPS, which was originally designed for execution on homogeneous and tightly coupled parallel machines, is not well suited to the more heterogeneous, loosely coupled, and even fault-prone parallel systems available today. Specifically, PPS is hindered by synchronization penalties and cannot recover in the event of a failure. The authors introduce a new asynchronous and fault-tolerant parallel pattern search (APPS) method and demonstrate its effectiveness on both simple test problems and some engineering optimization problems.
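The synchronous core that the asynchronous method builds on is easy to sketch. Below is a serial compass search; the paper's actual contributions (asynchrony, fault tolerance, distributing the polls across workers) are deliberately omitted.

```python
def compass_search(f, x0, step=1.0, tol=1e-8, max_evals=100000):
    """Serial compass (pattern) search: poll the 2n coordinate
    directions around the incumbent; on an improving poll, move and
    keep the step; if no direction improves, halve the step.  In the
    parallel variants, the polls are independent function evaluations
    farmed out to workers, which is what suits expensive simulations."""
    x = list(x0)
    fx = f(x)
    evals = 1
    while step > tol and evals < max_evals:
        improved = False
        for i in range(len(x)):
            for s in (step, -step):
                y = list(x)
                y[i] += s
                fy = f(y)
                evals += 1
                if fy < fx:
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5
    return x, fx
```

No derivatives are needed, only function values, so a simulation code can be used as a black box.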
Optimal ignition placement using nonlinear adjoint looping
NASA Astrophysics Data System (ADS)
Qadri, Ubaid; Schmid, Peter; Magri, Luca; Ihme, Matthias
2016-11-01
Spark ignition of a turbulent mixture of fuel and oxidizer is a highly sensitive process. Traditionally, a large number of parametric studies are used to determine the effects of different factors on ignition, and this can be quite tedious. In contrast, we treat ignition as an initial value problem and seek to find the initial condition that maximizes a given cost function. We use direct numerical simulation of the low Mach number equations with finite-rate one-step chemistry, and of the corresponding adjoint equations, to study an axisymmetric jet diffusion flame. We find the L2 norm of the temperature field integrated over a short time to be a suitable cost function. We find that the adjoint fields localize around the flame front, identifying the most sensitive region of the flow. The adjoint fields provide gradient information that we use as part of an optimization loop to converge to a locally optimal ignition location. We find that the optimal locations correspond with the stoichiometric surface downstream of the jet inlet plane. The methods and results of this study can be easily applied to more complex flow geometries.
Nonlinear Multidimensional Assignment Problems Efficient Conic Optimization Methods and Applications
2015-06-24
...problems. The size-16 three-dimensional quadratic assignment problem (Q3AP) from wireless communications was solved using a sophisticated approach... [A] combinatorial optimization problem, the Directional Sensor Problem, was solved in two ways: first, heuristically in an engineering fashion, and second, exactly... The sensor problem was solved as a nonlinear MINLP problem; specifically, the information gain obtained was maximized in order to determine the optimal...
Large scale mechanical metamaterials as seismic shields
NASA Astrophysics Data System (ADS)
Miniaci, Marco; Krushynska, Anastasiia; Bosia, Federico; Pugno, Nicola M.
2016-08-01
Earthquakes represent one of the most catastrophic natural events affecting mankind. At present, a universally accepted risk mitigation strategy for seismic events remains to be proposed. Most approaches are based on vibration isolation of structures rather than on the remote shielding of incoming waves. In this work, we propose a novel approach to the problem and discuss the feasibility of a passive isolation strategy for seismic waves based on large-scale mechanical metamaterials, including for the first time numerical analysis of both surface and guided waves, soil dissipation effects, and full 3D simulations. The study focuses on realistic structures that can be effective in frequency ranges of interest for seismic waves, and optimal design criteria are provided, exploring different metamaterial configurations, combining phononic crystals and locally resonant structures, and different ranges of mechanical properties. Dispersion analysis and full-scale 3D transient wave transmission simulations are carried out on finite-size systems to assess the seismic wave amplitude attenuation in realistic conditions. Results reveal that both surface and bulk seismic waves can be considerably attenuated, making this strategy viable for the protection of civil structures against seismic risk. The proposed remote shielding approach could open up new perspectives in the field of seismology and in related areas of low-frequency vibration damping or blast protection.
Optimal state discrimination and unstructured search in nonlinear quantum mechanics
NASA Astrophysics Data System (ADS)
Childs, Andrew M.; Young, Joshua
2016-02-01
Nonlinear variants of quantum mechanics can solve tasks that are impossible in standard quantum theory, such as perfectly distinguishing nonorthogonal states. Here we derive the optimal protocol for distinguishing two states of a qubit using the Gross-Pitaevskii equation, a model of nonlinear quantum mechanics that arises as an effective description of Bose-Einstein condensates. Using this protocol, we present an algorithm for unstructured search in the Gross-Pitaevskii model, obtaining an exponential improvement over a previous algorithm of Meyer and Wong. This result establishes a limitation on the effectiveness of the Gross-Pitaevskii approximation. More generally, we demonstrate similar behavior under a family of related nonlinearities, giving evidence that the ability to quickly discriminate nonorthogonal states and thereby solve unstructured search is a generic feature of nonlinear quantum mechanics.
Optimal Variational Asymptotic Method for Nonlinear Fractional Partial Differential Equations.
Baranwal, Vipul K; Pandey, Ram K; Singh, Om P
2014-01-01
We propose an optimal variational asymptotic method to solve time-fractional nonlinear partial differential equations. In the proposed method, an arbitrary number of auxiliary parameters γ₀, γ₁, γ₂, … and auxiliary functions H₀(x), H₁(x), H₂(x), … are introduced in the correction functional of the standard variational iteration method. The optimal values of these parameters are obtained by minimizing the square residual error. To test the method, we apply it to two important classes of nonlinear partial differential equations: (1) the fractional advection-diffusion equation with a nonlinear source term and (2) the fractional Swift-Hohenberg equation. Only a few iterations are required to achieve fairly accurate solutions to both problems.
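The core idea, choosing auxiliary parameters by minimizing the square residual, can be sketched on a toy integer-order problem (u' + u = 0, u(0) = 1) rather than the paper's fractional equations; the single-parameter correction and the grid search below are illustrative assumptions, not the authors' scheme.

```python
# One variational-iteration-style correction with a single auxiliary
# parameter gamma: starting from u0(t) = 1, take u1(t) = 1 + gamma*t.
# The "optimal" gamma minimizes the square residual
#     R(gamma) = \int_0^1 (u1'(t) + u1(t))^2 dt,
# whose analytic minimizer for this toy case is gamma = -9/14.

def square_residual(gamma, n=400):
    """Trapezoidal rule for \\int_0^1 (gamma + 1 + gamma*t)^2 dt."""
    h = 1.0 / n
    total = 0.0
    for k in range(n + 1):
        t = k * h
        r = gamma + 1.0 + gamma * t          # residual of u1 in u' + u = 0
        w = 0.5 if k in (0, n) else 1.0
        total += w * r * r * h
    return total

def optimal_gamma(lo=-2.0, hi=0.0, n=2000):
    """Simple grid search over gamma; R is convex so this suffices."""
    best = min(range(n + 1),
               key=lambda k: square_residual(lo + (hi - lo) * k / n))
    return lo + (hi - lo) * best / n
```

With the standard variational iteration choice γ = -1 the iterate gives u1(1) = 0, while the residual-optimal γ = -9/14 gives u1(1) = 5/14 ≈ 0.357, much closer to the exact value e⁻¹ ≈ 0.368 after a single correction.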
[Issues of large scale tissue culture of medicinal plant].
Lv, Dong-Mei; Yuan, Yuan; Zhan, Zhi-Lai
2014-09-01
In order to increase the yield and quality of medicinal plants and enhance the competitiveness of the medicinal plant industry in our country, this paper analyzes the status, problems, and countermeasures of large-scale tissue culture of medicinal plants. Although biotechnology is one of the most efficient and promising means of producing medicinal plants, problems remain, such as the stability of the material, the safety of transgenic medicinal plants, and the optimization of culture conditions. Establishing a complete evaluation system according to the characteristics of each medicinal plant is the key measure to ensure the sustainable development of large-scale tissue culture of medicinal plants.
Nonlinear optimization with linear constraints using a projection method
NASA Technical Reports Server (NTRS)
Fox, T.
1982-01-01
Nonlinear optimization problems that are encountered in science and industry are examined. A method of projecting the gradient vector onto a set of linear constraints is developed, and a program that uses this method is presented. The algorithm that generates this projection matrix is based on the Gram-Schmidt method and overcomes some of the objections to the Rosen projection method.
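A hedged sketch of the idea (not the report's program): orthonormalize the constraint normals by Gram-Schmidt, then subtract from the gradient its components along them. What remains lies in the null space of the constraints, so moving along it preserves feasibility of the linear constraints.

```python
import math

def gram_schmidt(rows, eps=1e-12):
    """Orthonormalize the constraint normals (the rows of A)."""
    basis = []
    for a in rows:
        v = list(a)
        for q in basis:
            d = sum(vi * qi for vi, qi in zip(v, q))
            v = [vi - d * qi for vi, qi in zip(v, q)]
        n = math.sqrt(sum(vi * vi for vi in v))
        if n > eps:                      # skip linearly dependent rows
            basis.append([vi / n for vi in v])
    return basis

def project_gradient(g, A):
    """Return (I - A^T (A A^T)^{-1} A) g, computed by subtracting the
    components of g along an orthonormal basis of the row space of A.
    A step along the result keeps the constraints A x = b satisfied."""
    p = list(g)
    for q in gram_schmidt(A):
        d = sum(pi * qi for pi, qi in zip(p, q))
        p = [pi - d * qi for pi, qi in zip(p, q)]
    return p
```

For a single constraint x1 + x2 = const and gradient (1, 0, 0), the projection removes the component along (1, 1, 0)/√2, leaving (0.5, -0.5, 0), which is tangent to the constraint plane.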
NASA Technical Reports Server (NTRS)
Nguyen, Duc T.
1990-01-01
Practical engineering applications can often be formulated as constrained optimization problems. There are several solution algorithms for solving a constrained optimization problem. One approach is to convert a constrained problem into a series of unconstrained problems; furthermore, unconstrained solution algorithms can be used as part of constrained solution algorithms. Structural optimization is an iterative process: one starts with an initial design, then a finite element structural analysis is performed to calculate the response of the system (such as displacements, stresses, eigenvalues, etc.). Based upon the sensitivity information on the objective and constraint functions, an optimizer such as ADS or IDESIGN can be used to find a new, improved design. For the structural analysis phase, the equation solver for the system of simultaneous linear equations plays a key role, since it is needed for static, eigenvalue, and dynamic analysis alike. For practical, large-scale structural analysis-synthesis applications, computational time can be excessively large. Thus, it is necessary to have a new structural analysis-synthesis code which employs new solution algorithms to exploit both the parallel and vector capabilities offered by modern, high-performance computers such as the Convex, Cray-2 and Cray Y-MP. The objective of this research project is, therefore, to incorporate the latest developments in the parallel-vector equation solver PVSOLVE into a widely used finite-element production code, such as SAP-4. Furthermore, several nonlinear unconstrained optimization subroutines have also been developed and tested in a parallel computing environment. The unconstrained optimization subroutines are not only useful in their own right, but they can also be incorporated into a more popular constrained optimization code, such as ADS.
Optimal bipedal interactions with dynamic terrain: synthesis and analysis via nonlinear programming
NASA Astrophysics Data System (ADS)
Hubicki, Christian; Goldman, Daniel; Ames, Aaron
In terrestrial locomotion, gait dynamics and motor control behaviors are tuned to interact efficiently and stably with the dynamics of the terrain (i.e. terradynamics). This controlled interaction must be particularly thoughtful in bipeds, as their reduced contact points render them highly susceptible to falls. While bipedalism under rigid terrain assumptions is well-studied, insights for two-legged locomotion on soft terrain, such as sand and dirt, are comparatively sparse. We seek an understanding of how biological bipeds stably and economically negotiate granular media, with an eye toward imbuing those abilities in bipedal robots. We present a trajectory optimization method for controlled systems subject to granular intrusion. By formulating a large-scale nonlinear program (NLP) with reduced-order resistive force theory (RFT) models and jamming cone dynamics, the optimized motions are informed and shaped by the dynamics of the terrain. Using a variant of direct collocation methods, we can express all optimization objectives and constraints in closed-form, resulting in rapid solving by standard NLP solvers, such as IPOPT. We employ this tool to analyze emergent features of bipedal locomotion in granular media, with an eye toward robotic implementation.
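The large-scale NLP with granular-intrusion dynamics is far beyond a snippet, but the direct-transcription idea can be shown on a toy problem. The assumptions here are mine: a double integrator standing in for the locomotion dynamics, semi-implicit Euler transcription, and a minimum-effort objective; because this toy is linear-quadratic, the transcribed problem is solved in closed form below instead of with an NLP solver such as IPOPT.

```python
def min_effort_transcription(N=50, T=1.0):
    """Transcribe min sum(u_k^2) for x'' = u, moving from rest at x = 0
    to rest at x = 1 in time T, with the discretized dynamics
    v_{k+1} = v_k + dt*u_k and x_{k+1} = x_k + dt*v_{k+1}.
    The two terminal conditions become two linear constraints on the
    control vector u, so the minimum-norm control is a combination of
    the constraint rows, found from a 2x2 normal system."""
    dt = T / N
    c1 = [1.0] * N                            # v_N / dt   = sum u_k
    c2 = [float(N - k) for k in range(N)]     # x_N / dt^2 = sum (N-k) u_k
    b1 = 0.0                                  # target v_N = 0
    b2 = 1.0 / (dt * dt)                      # target x_N = 1
    # Gram matrix of the two constraint rows, solved by Cramer's rule
    g11 = sum(a * a for a in c1)
    g12 = sum(a * b for a, b in zip(c1, c2))
    g22 = sum(a * a for a in c2)
    det = g11 * g22 - g12 * g12
    l1 = (b1 * g22 - b2 * g12) / det
    l2 = (g11 * b2 - g12 * b1) / det
    u = [l1 * a + l2 * b for a, b in zip(c1, c2)]
    # forward simulation to verify the transcription is consistent
    x = v = 0.0
    for uk in u:
        v += dt * uk
        x += dt * v
    return u, x, v
```

The resulting control decreases linearly from positive to negative, the discrete analogue of the continuous optimum u(t) = 6 - 12t; in the paper's setting the same transcription pattern produces a large sparse NLP whose constraints also encode the terrain reaction forces.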
Route Monopoly and Optimal Nonlinear Pricing
NASA Technical Reports Server (NTRS)
Tournut, Jacques
2003-01-01
To cope with air traffic growth and congested airports, two solutions are apparent on the supply side: 1) use larger aircraft in the hub-and-spoke system; or 2) develop new routes through secondary airports. An enlarged route system through secondary airports may increase the proportion of route monopolies in the air transport market. The monopoly optimal nonlinear pricing policy is well known in the case of one dimension (one instrument, one characteristic) but not in the case of several dimensions. This paper explores the robustness of the one-dimensional screening model with respect to increasing the number of instruments and the number of characteristics. The objective of this paper is then to link and fill the gap in both literatures. One of the merits of the screening model has been to show that a great variety of economic questions (nonlinear pricing, product line choice, auction design, income taxation, regulation...) can be handled within the same framework. We study a case of nonlinear pricing (2 instruments (2 routes on which the airline provides customers with services), 2 characteristics (demand for services on these routes), and two values per characteristic (low and high demand for services on these routes)) and we show that none of the conclusions of the one-dimensional analysis remains valid. In particular, the upward incentive compatibility constraint may be binding at the optimum. As a consequence, there may be distortion at the top of the distribution. In addition to this, we show that the optimal solution often requires a form of bundling; we explain the distortions explicitly and show that it is sometimes optimal for the monopolist to produce only one good (instead of two) or to exclude some buyers from the market. In effect, this means that the monopolist cannot fully apply his monopoly power and is better off selling both goods independently. We then define all the possible solutions in the case of a quadratic cost function for a uniform
Optimization of optical nonlinearities in quantum cascade lasers
NASA Astrophysics Data System (ADS)
Bai, Jing
Nonlinearities in quantum cascade lasers (QCLs) have wide applications in wavelength tunability and ultra-short pulse generation. In this thesis, optical nonlinearities in InGaAs/AlInAs-based mid-infrared (MIR) QCLs with quadruple resonant levels are investigated. Design optimization for second-harmonic generation (SHG) in the device is presented. Performance characteristics associated with the third-order nonlinearities are also analyzed. The design optimization for SHG efficiency is obtained utilizing techniques from supersymmetric quantum mechanics (SUSYQM), with both material-dependent effective mass and band nonparabolicity. Current flow and power output of the structure are analyzed by self-consistently solving rate equations for the carriers and photons. Nonunity pumping efficiency from one period of the QCL to the next is taken into account by including all relevant electron-electron (e-e) and longitudinal optical (LO) phonon scattering mechanisms between the injector/collector and active regions. Two-photon absorption processes are analyzed for the resonant cascading triple levels designed for enhancing SHG. Both sequential and simultaneous two-photon absorption processes are included in the rate-equation model. The current output characteristics for both the original and optimized structures are analyzed and compared. Stronger resonant tunneling in the optimized structure is manifested by enhanced negative differential resistance. Current-dependent linear optical output power is derived based on the steady-state photon populations in the active region. The second-harmonic (SH) power is derived from the Maxwell equations with the phase mismatch included. Due to stronger coupling between lasing levels, the optimized structure has both higher linear and nonlinear output powers. Phase mismatch effects are significant for both structures, leading to a substantial reduction of the linear-to-nonlinear conversion efficiency. The optimized structure can be fabricated
Large Scale Metal Additive Techniques Review
Nycz, Andrzej; Adediran, Adeola I; Noakes, Mark W; Love, Lonnie J
2016-01-01
In recent years, additive manufacturing has made long strides toward becoming a mainstream production technology. Particularly strong progress has been made in large-scale polymer deposition. However, large-scale metal additive manufacturing has not yet reached parity with its polymer counterpart. This paper is a review of metal additive techniques in the context of building large structures. Current commercial devices are capable of printing metal parts on the order of several cubic feet, compared to hundreds of cubic feet on the polymer side. In order to follow the polymer progress path, several factors are considered: potential to scale, economy, environmental friendliness, material properties, feedstock availability, robustness of the process, quality and accuracy, potential for defects, and post-processing, as well as potential applications. This paper focuses on the current state of the art of large-scale metal additive technology, with an emphasis on expanding the geometric limits.
Large-scale regions of antimatter
Grobov, A. V. Rubin, S. G.
2015-07-15
A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.
The new discussion of a neutrino mass and issues in the formation of large-scale structure
NASA Technical Reports Server (NTRS)
Melott, Adrian L.
1991-01-01
It is argued that the large-scale structure predicted by cosmological models with neutrino mass (hot dark matter) does not differ drastically from the observed structure. Evidence from the correlation amplitude, nonlinearity and the onset of galaxy formation, large-scale streaming velocities, and the topology of large-scale structure is considered. Hot dark matter models seem to be as accurate predictors of large-scale structure as cold dark matter models.
Design of Life Extending Controls Using Nonlinear Parameter Optimization
NASA Technical Reports Server (NTRS)
Lorenzo, Carl F.; Holmes, Michael S.; Ray, Asok
1998-01-01
This report presents the conceptual development of a life extending control system where the objective is to achieve high performance and structural durability of the plant. A life extending controller is designed for a reusable rocket engine via damage mitigation in both the fuel and oxidizer turbines while achieving high performance for transient responses of the combustion chamber pressure and the O2/H2 mixture ratio. This design approach makes use of a combination of linear and nonlinear controller synthesis techniques and also allows adaptation of the life extending controller module to augment a conventional performance controller of a rocket engine. The nonlinear aspect of the design is achieved using nonlinear parameter optimization of a prescribed control structure.
Large-scale cortical networks and cognition.
Bressler, S L
1995-03-01
The well-known parcellation of the mammalian cerebral cortex into a large number of functionally distinct cytoarchitectonic areas presents a problem for understanding the complex cortical integrative functions that underlie cognition. How do cortical areas having unique individual functional properties cooperate to accomplish these complex operations? Do neurons distributed throughout the cerebral cortex act together in large-scale functional assemblages? This review examines the substantial body of evidence supporting the view that complex integrative functions are carried out by large-scale networks of cortical areas. Pathway tracing studies in non-human primates have revealed widely distributed networks of interconnected cortical areas, providing an anatomical substrate for large-scale parallel processing of information in the cerebral cortex. Functional coactivation of multiple cortical areas has been demonstrated by neurophysiological studies in non-human primates and several different cognitive functions have been shown to depend on multiple distributed areas by human neuropsychological studies. Electrophysiological studies on interareal synchronization have provided evidence that active neurons in different cortical areas may become not only coactive, but also functionally interdependent. The computational advantages of synchronization between cortical areas in large-scale networks have been elucidated by studies using artificial neural network models. Recent observations of time-varying multi-areal cortical synchronization suggest that the functional topology of a large-scale cortical network is dynamically reorganized during visuomotor behavior.
Constrained nonlinear optimization approaches to color-signal separation.
Chang, P R; Hsieh, T H
1995-01-01
Separating a color signal into illumination and surface-reflectance components is a fundamental issue in color reproduction and constancy. This can be carried out by minimizing the error in the least squares (LS) fit of the product of the illumination and the surface spectral reflectance to the actual color signal. When taking into account the physical-realizability constraints on the surface reflectance and illumination, the feasible solutions to the nonlinear LS problem should satisfy a number of linear inequalities. Four distinct novel optimization algorithms are presented that employ these constraints to minimize the nonlinear LS fitting error. The first approach, which is based on Ritter's superlinearly convergent method (Luenberger, 1980), provides a computationally superior algorithm for finding the minimum solution to the nonlinear LS error problem subject to linear inequality constraints. Unfortunately, this gradient-like algorithm may sometimes be trapped at a local minimum or become unstable when the parameters involved in the algorithm are not tuned properly. The remaining three methods are based on the stable and promising global minimizer called simulated annealing. The annealing algorithm can always find the global minimum solution with probability one, but its convergence is slow. To tackle this, a cost-effective variable-separable formulation based on the concept of Golub and Pereyra (1973) is adopted to reduce the nonlinear LS problem to a small-scale one. The computational efficiency can be further improved when the original Boltzmann generating distribution of classical annealing is replaced by the Cauchy distribution.
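The product-model least-squares formulation described above can be sketched numerically. The basis matrices, dimensions, and bounds below are illustrative assumptions, not the paper's: the paper's physical-realizability constraints are general linear inequalities, which the simple box bounds here only approximate.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical low-dimensional linear models for illumination and reflectance
# (basis matrices are invented; real models would come from measured spectra).
wavelengths = np.linspace(400, 700, 31)
u = (wavelengths - 550) / 150
E_basis = np.stack([np.ones_like(u), u, u ** 2], axis=1)        # illumination modes
S_basis = np.stack([np.exp(-((wavelengths - mu) / 60) ** 2)
                    for mu in (450, 550, 650)], axis=1)          # reflectance modes

a_true = np.array([1.0, 0.3, -0.2])    # illumination coefficients
b_true = np.array([0.4, 0.5, 0.3])     # reflectance coefficients
signal = (E_basis @ a_true) * (S_basis @ b_true)   # observed color signal e*s

def residual(x):
    a, b = x[:3], x[3:]
    return (E_basis @ a) * (S_basis @ b) - signal

# Box bounds stand in for the paper's linear inequality constraints
# (illumination nonnegative, 0 <= reflectance <= 1) -- a simplification.
x0 = np.array([1.0, 0.0, 0.0, 0.5, 0.5, 0.5])
fit = least_squares(residual, x0, bounds=(-1.0, 2.0))
print(np.abs(residual(fit.x)).max())   # fitting error, should be near zero
```

Note that the product decomposition is only determined up to a scale factor (e -> c*e, s -> s/c), which is one reason the full problem is ill-conditioned and benefits from the variable-separable reformulation mentioned in the abstract.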
Optimizing Nonlinear Beam Coupling in Low-Symmetry Crystals (Postprint)
2014-10-02
AFRL-RX-WP-JA-2016-0242. Authors: A. Shumelyuk, A. Volkov, and S... Contract number: FA8650-09-D-5434-0011. ... demonstrated experimentally with Sn2P2S6. Subject terms: low-symmetry photorefractive crystals, two-beam coupling, transmission space-charge gratings.
Robust optimization of nonlinear impulsive rendezvous with uncertainty
NASA Astrophysics Data System (ADS)
Luo, YaZhong; Yang, Zhen; Li, HengNian
2014-04-01
The optimal rendezvous trajectory designs in many current research efforts do not incorporate the practical uncertainties into the closed loop of the design. A robust optimization design method for a nonlinear rendezvous trajectory with uncertainty is proposed in this paper. One performance index related to the variances of the terminal state error is termed the robustness performance index, and a two-objective optimization model (including the minimum characteristic velocity and the minimum robustness performance index) is formulated on the basis of the Lambert algorithm. A multi-objective, non-dominated sorting genetic algorithm is employed to obtain the Pareto optimal solution set. It is shown that the proposed approach can be used to quickly obtain several inherent principles of the rendezvous trajectory by taking practical errors into account. Furthermore, this approach can identify the most preferable design space in which a specific solution for the actual application of the rendezvous control should be chosen.
Global nonlinear optimization of spacecraft protective structures design
NASA Technical Reports Server (NTRS)
Mog, R. A.; Lovett, J. N., Jr.; Avans, S. L.
1990-01-01
The global optimization of protective structural designs for spacecraft subject to hypervelocity meteoroid and space debris impacts is presented. This nonlinear problem is first formulated for weight minimization of the space station core module configuration using the Nysmith impact predictor. Next, the equivalence and uniqueness of local and global optima is shown using properties of convexity. This analysis results in a new feasibility condition for this problem. The solution existence is then shown, followed by a comparison of optimization techniques. Finally, a sensitivity analysis is presented to determine the effects of variations in the systemic parameters on optimal design. The results show that global optimization of this problem is unique and may be achieved by a number of methods, provided the feasibility condition is satisfied. Furthermore, module structural design thicknesses and weight increase with increasing projectile velocity and diameter and decrease with increasing separation between bumper and wall for the Nysmith predictor.
Optimal design for nonlinear estimation of the hemodynamic response function.
Maus, Bärbel; van Breukelen, Gerard J P; Goebel, Rainer; Berger, Martijn P F
2012-06-01
Subject-specific hemodynamic response functions (HRFs) have been recommended to capture variation in the form of the hemodynamic response between subjects (Aguirre et al., 1998: NeuroImage 8:360-369). The purpose of this article is to find optimal designs for estimation of subject-specific parameters for the double gamma HRF. As the double gamma function is a nonlinear function of its parameters, optimal design theory for nonlinear models is employed in this article. The double gamma function is linearized by a Taylor approximation and the maximin criterion is used to handle dependency of the D-optimal design on the expansion point of the Taylor approximation. A realistic range of double gamma HRF parameters is used for the expansion point of the Taylor approximation. Furthermore, a genetic algorithm (GA) (Kao et al., 2009: NeuroImage 44:849-856) is applied to find locally optimal designs for the different expansion points and the maximin design chosen from the locally optimal designs is compared to maximin designs obtained by m-sequences, blocked designs, designs with constant interstimulus interval (ISI) and random event-related designs. The maximin design obtained by the GA is most efficient. Random event-related designs chosen from several generated designs and m-sequences have a high efficiency, while blocked designs and designs with a constant ISI have a low efficiency compared to the maximin GA design.
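The local D-optimality computation behind this kind of design search can be sketched as follows. The double-gamma parameter values, scan length, and onset grids are invented for illustration and are not the article's; the Jacobian is taken by finite differences around the Taylor expansion point.

```python
import numpy as np
from scipy.stats import gamma

# Canonical double-gamma HRF; parameter values are common defaults
# (assumptions, not the article's subject-specific estimates).
def hrf(t, p):
    a1, b1, a2, b2, c = p
    return gamma.pdf(t, a1, scale=b1) - c * gamma.pdf(t, a2, scale=b2)

def d_criterion(onsets, p, t_max=60.0, dt=1.0, eps=1e-5):
    """Local D-optimality det(J'J) for a design, where J is the Jacobian of
    the stimulus-convolved response w.r.t. the 5 HRF parameters."""
    t = np.arange(0.0, t_max, dt)
    stim = np.zeros_like(t)
    stim[(np.asarray(onsets) / dt).astype(int)] = 1.0
    def response(q):
        return np.convolve(stim, hrf(np.arange(0.0, 32.0, dt), q))[:t.size]
    J = np.stack([(response(p + eps * e) - response(p - eps * e)) / (2 * eps)
                  for e in np.eye(5)], axis=1)
    return np.linalg.det(J.T @ J)

p0 = np.array([6.0, 1.0, 16.0, 1.0, 1 / 6])  # expansion point for the Taylor step
print(d_criterion([0, 15, 30, 45], p0))       # criterion value for one candidate design
```

A design search (genetic algorithm, m-sequences, etc.) would then compare this criterion across candidate onset sequences, and the maximin step would take the worst case over a grid of expansion points p0.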
Survey on large scale system control methods
NASA Technical Reports Server (NTRS)
Mercadal, Mathieu
1987-01-01
The problems inherent in large-scale systems such as power networks, communication networks, and economic or ecological systems were studied. The increase in size and flexibility of future spacecraft has put those dynamical systems into the category of large-scale systems, and tools specific to this class are being sought to design control systems that can guarantee more stability and better performance. Among several survey papers, reference was found to a thorough investigation of decentralized control methods. Especially helpful was the classification made of the different existing approaches to dealing with large-scale systems. A very similar classification is used here, even though the papers surveyed are somewhat different from the ones reviewed in other papers. Special attention is given to the applicability of the existing methods to controlling large mechanical systems such as large space structures. Some recent developments are added to this survey.
Large-scale nanophotonic phased array.
Sun, Jie; Timurdogan, Erman; Yaacobi, Ami; Hosseini, Ehsan Shah; Watts, Michael R
2013-01-10
Electromagnetic phased arrays at radio frequencies are well known and have enabled applications ranging from communications to radar, broadcasting and astronomy. The ability to generate arbitrary radiation patterns with large-scale phased arrays has long been pursued. Although it is extremely expensive and cumbersome to deploy large-scale radiofrequency phased arrays, optical phased arrays have a unique advantage in that the much shorter optical wavelength holds promise for large-scale integration. However, the short optical wavelength also imposes stringent requirements on fabrication. As a consequence, although optical phased arrays have been studied with various platforms and recently with chip-scale nanophotonics, all of the demonstrations so far are restricted to one-dimensional or small-scale two-dimensional arrays. Here we report the demonstration of a large-scale two-dimensional nanophotonic phased array (NPA), in which 64 × 64 (4,096) optical nanoantennas are densely integrated on a silicon chip within a footprint of 576 μm × 576 μm with all of the nanoantennas precisely balanced in power and aligned in phase to generate a designed, sophisticated radiation pattern in the far field. We also show that active phase tunability can be realized in the proposed NPA by demonstrating dynamic beam steering and shaping with an 8 × 8 array. This work demonstrates that a robust design, together with state-of-the-art complementary metal-oxide-semiconductor technology, allows large-scale NPAs to be implemented on compact and inexpensive nanophotonic chips. In turn, this enables arbitrary radiation pattern generation using NPAs and therefore extends the functionalities of phased arrays beyond conventional beam focusing and steering, opening up possibilities for large-scale deployment in applications such as communication, laser detection and ranging, three-dimensional holography and biomedical sciences, to name just a few.
The large-scale distribution of galaxies
NASA Technical Reports Server (NTRS)
Geller, Margaret J.
1989-01-01
The spatial distribution of galaxies in the universe is characterized on the basis of the six completed strips of the Harvard-Smithsonian Center for Astrophysics redshift-survey extension. The design of the survey is briefly reviewed, and the results are presented graphically. Vast low-density voids similar to the void in Bootes are found, almost completely surrounded by thin sheets of galaxies. Also discussed are the implications of the results for the survey sampling problem, the two-point correlation function of the galaxy distribution, the possibility of detecting large-scale coherent flows, theoretical models of large-scale structure, and the identification of groups and clusters of galaxies.
US National Large-scale City Orthoimage Standard Initiative
Zhou, G.; Song, C.; Benjamin, S.; Schickler, W.
2003-01-01
The early procedures and algorithms for national digital orthophoto generation in the National Digital Orthophoto Program (NDOP) were based on earlier USGS mapping operations, such as field control, aerotriangulation (derived in the early 1920's), the quarter-quadrangle-centered format (3.75 minutes of longitude and latitude in geographic extent), 1:40,000 aerial photographs, and 2.5D digital elevation models. However, large-scale city orthophotos produced with these early procedures have disclosed many shortcomings, e.g., ghost images, occlusion, and shadow. Thus, providing the technical base (algorithms, procedures) and the experience needed for city large-scale digital orthophoto creation is essential for the near-future national large-scale digital orthophoto deployment and the revision of the Standards for National Large-scale City Digital Orthophoto in the NDOP. This paper reports our initial research results as follows: (1) high-precision 3D city DSM generation through LIDAR data processing, (2) spatial object/feature extraction from surface material information and high-accuracy 3D DSM data, (3) 3D city model development, (4) algorithm development for generation of DTM-based and DBM-based orthophotos, (5) true orthophoto generation by merging DBM-based and DTM-based orthophotos, and (6) automatic mosaicking by optimizing and combining imagery from many perspectives.
Moon-based Earth Observation for Large Scale Geoscience Phenomena
NASA Astrophysics Data System (ADS)
Guo, Huadong; Liu, Guang; Ding, Yixing
2016-07-01
The capability of Earth observation for large, global-scale natural phenomena needs to be improved, and new observing platforms are expected. In recent years we have studied the concept of the Moon as an Earth-observation platform. Compared with man-made satellite platforms, Moon-based Earth observation can obtain multi-spherical, full-band, active and passive information, and it offers the following advantages: a large observation range, variable view angles, long-term continuous observation, and an extra-long life cycle, with the characteristics of longevity, consistency, integrity, stability, and uniqueness. Moon-based Earth observation is suitable for monitoring large-scale geoscience phenomena, including large-scale atmosphere change, large-scale ocean change, large-scale land-surface dynamic change, solid-earth dynamic change, etc. For the purpose of establishing a Moon-based Earth observation platform, we plan to study the following five aspects: mechanisms and models of Moon-based observation of macroscopic Earth-science phenomena; sensor parameter optimization and methods of Moon-based Earth observation; site selection and the environment of Moon-based Earth observation; the Moon-based Earth observation platform; and a fundamental scientific framework for Moon-based Earth observation.
Noise and Nonlinear Estimation with Optimal Schemes in DTI
Özcan, Alpay
2010-01-01
In general, the estimation of diffusion properties in diffusion tensor imaging (DTI) experiments is accomplished via least squares estimation (LSE). The technique requires applying the logarithm to the measurements, which causes poor propagation of errors. Moreover, the way noise enters the equations invalidates the least squares estimate as the best linear unbiased estimate. Nonlinear estimation (NE), despite its longer computation time, does not suffer from either of these problems. However, all of the conditions and optimization methods developed in the past are based on the coefficient matrix obtained in an LSE setup. In this manuscript, nonlinear estimation for DTI is analyzed to demonstrate that any result obtained relatively easily in a linear-algebra setup about the coefficient matrix can be applied to the more complicated NE framework. Data obtained earlier using non-optimal and optimized diffusion gradient schemes are processed with NE. In comparison with LSE, the results show significant improvements, especially for the optimization criterion. However, NE does not resolve the existing conflicts and ambiguities displayed with LSE methods. PMID:20655681
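The log-transform issue described above can be illustrated with a mono-exponential toy model; the real tensor case is multivariate, but the contrast between log-linearized LSE and direct nonlinear estimation is the same in spirit. All b-values, signal levels, and noise levels below are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

# Mono-exponential stand-in for the DTI signal model: S = S0 * exp(-b * D).
rng = np.random.default_rng(1)
b = np.array([0.0, 200.0, 400.0, 600.0, 800.0, 1000.0])  # hypothetical b-values, s/mm^2
S0, D = 1000.0, 1.5e-3
clean = S0 * np.exp(-b * D)
noisy = clean + rng.normal(0.0, 20.0, size=(5000, b.size))  # Gaussian noise on S

# Log-linear LSE: after the log, the noise is no longer additive/homoscedastic,
# so ordinary least squares loses its "best linear unbiased" justification.
X = np.stack([np.ones_like(b), -b], axis=1)
beta = np.linalg.lstsq(X, np.log(np.clip(noisy, 1.0, None)).T, rcond=None)[0]
D_lse = beta[1].mean()

# Nonlinear estimation: fit the exponential directly; the noise model stays valid.
D_ne = np.mean([curve_fit(lambda bv, s0, d: s0 * np.exp(-bv * d), b, y,
                          p0=(900.0, 1e-3))[0][1] for y in noisy[:200]])
print(abs(D_lse - D), abs(D_ne - D))   # estimation errors of the two approaches
```

At this noise level both estimators land close to the true D; the gap between them widens as the signal-to-noise ratio drops, which is the regime the manuscript is concerned with.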
Structural Optimization for Reliability Using Nonlinear Goal Programming
NASA Technical Reports Server (NTRS)
El-Sayed, Mohamed E.
1999-01-01
This report details the development of a reliability-based multi-objective design tool for solving structural optimization problems. Based on two different optimization techniques, namely sequential unconstrained minimization and nonlinear goal programming, the developed design method has the capability to take into account the effects of variability on the proposed design through a user-specified reliability design criterion. In its sequential unconstrained minimization mode, the developed design tool uses a composite objective function, in conjunction with weight-ordered design objectives, in order to take into account conflicting and multiple design criteria. Multiple design criteria of interest include structural weight, load-induced stress and deflection, and mechanical reliability. The nonlinear goal programming mode, on the other hand, provides a design method that eliminates the difficulty of having to define an objective function and constraints, while at the same time having the capability of handling rank-ordered design objectives or goals. For simulation purposes the design of a pressure vessel cover plate was undertaken as a test bed for the newly developed design tool. The formulation of this structural optimization problem into sequential unconstrained minimization and goal programming form is presented. The resulting optimization problem was solved using: (i) the linear extended interior penalty function method algorithm; and (ii) Powell's conjugate directions method. Both single- and multi-objective numerical test cases are included, demonstrating the design tool's capabilities as applied to this design problem.
Management of large-scale technology
NASA Technical Reports Server (NTRS)
Levine, A.
1985-01-01
Two major themes are addressed in this assessment of the management of large-scale NASA programs: (1) how a high-technology agency fared during a decade marked by a rapid expansion of funds and manpower in the first half and an almost as rapid contraction in the second; and (2) how NASA combined central planning and control with decentralized project execution.
Evaluating Large-Scale Interactive Radio Programmes
ERIC Educational Resources Information Center
Potter, Charles; Naidoo, Gordon
2009-01-01
This article focuses on the challenges involved in conducting evaluations of interactive radio programmes in South Africa with large numbers of schools, teachers, and learners. It focuses on the role such large-scale evaluation has played during the South African radio learning programme's development stage, as well as during its subsequent…
Solving Large-scale Eigenvalue Problems in SciDACApplications
Yang, Chao
2005-06-29
Large-scale eigenvalue problems arise in a number of DOE applications. This paper provides an overview of recent developments in eigenvalue computation in the context of two SciDAC applications. We emphasize the importance of Krylov subspace methods and point out their limitations. We discuss the value of alternative approaches that are more amenable to the use of preconditioners, and report progress in using multi-level algebraic sub-structuring techniques to speed up eigenvalue calculation. In addition to methods for linear eigenvalue problems, we also examine new approaches to solving two types of nonlinear eigenvalue problems arising from SciDAC applications.
A cooperative strategy for parameter estimation in large scale systems biology models
2012-01-01
Background Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows experimentally verifiable predictions to be made. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. Results A new approach for parameter estimation of large scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs (“threads”) that run in parallel on different processors. Each thread implements a state-of-the-art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and speeds up performance. Two parameter estimation problems involving models related to the central carbon metabolism of E. coli, which include different regulatory levels (metabolic and transcriptional), are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. Conclusions The cooperative CeSS strategy is a general-purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases. The cooperative metaheuristic presented here can be easily extended
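The cooperation pattern described above, parallel searches that periodically share their best solutions through a common memory, can be sketched as follows. The toy local search stands in for the eSS metaheuristic, which is far more sophisticated; the test function and all tuning constants are invented.

```python
import threading
import numpy as np

# Toy stand-in for CeSS: several "threads" run independent stochastic searches
# on the Rastrigin function and periodically exchange their best point.
def rastrigin(x):
    return 10 * x.size + np.sum(x * x - 10 * np.cos(2 * np.pi * x))

shared = {"x": None, "f": np.inf}   # common memory for the global best
lock = threading.Lock()

def worker(seed, iters=4000, dim=5):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, dim)
    fx = rastrigin(x)
    for i in range(iters):
        cand = x + rng.normal(0, 0.3, dim)        # local perturbation step
        fc = rastrigin(cand)
        if fc < fx:
            x, fx = cand, fc
        if i % 500 == 0:
            with lock:                             # cooperation step
                if fx < shared["f"]:
                    shared["x"], shared["f"] = x.copy(), fx
                elif shared["x"] is not None:      # adopt the global best
                    x, fx = shared["x"].copy(), shared["f"]

threads = [threading.Thread(target=worker, args=(s,)) for s in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared["f"])   # best objective value found cooperatively
```

As in CeSS, the point of the shared memory is that information exchange changes the systemic behavior of the ensemble: a thread stuck in a poor basin is pulled toward the best solution found by any of its peers.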
Spin glasses and nonlinear constraints in portfolio optimization
NASA Astrophysics Data System (ADS)
Andrecut, M.
2014-01-01
We discuss the portfolio optimization problem with the obligatory deposits constraint. Recently it has been shown that as a consequence of this nonlinear constraint, the solution consists of an exponentially large number of optimal portfolios, completely different from each other, and extremely sensitive to any changes in the input parameters of the problem, making the concept of rational decision making questionable. Here we reformulate the problem using a quadratic obligatory deposits constraint, and we show that from the physics point of view, finding an optimal portfolio amounts to calculating the mean-field magnetizations of a random Ising model with the constraint of a constant magnetization norm. We show that the model reduces to an eigenproblem, with 2N solutions, where N is the number of assets defining the portfolio. Also, in order to illustrate our results, we present a detailed numerical example of a portfolio of several risky common stocks traded on the Nasdaq Market.
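The reduction to an eigenproblem mentioned above can be illustrated directly: with a quadratic norm constraint, every stationary portfolio satisfies C w = lambda w, and each eigenvector and its negation count separately, giving 2N solutions for N assets. The covariance matrix below is synthetic, not market data.

```python
import numpy as np

# Sketch of the eigenproblem reduction: stationary points of a quadratic
# objective w'Cw under the norm constraint ||w|| = const are eigenvectors of C.
rng = np.random.default_rng(2)
N = 6
A = rng.normal(size=(N, N))
C = A @ A.T / N                       # synthetic "asset covariance" (toy data)

lam, V = np.linalg.eigh(C)            # N eigenpairs
solutions = [s * V[:, k] for k in range(N) for s in (+1, -1)]
print(len(solutions))                 # 2N candidate portfolios

# Each candidate satisfies the stationarity condition C w = lambda w:
k = 3
print(np.allclose(C @ V[:, k], lam[k] * V[:, k]))
```

The multiplicity of completely different stationary solutions is exactly what makes the constrained problem so sensitive to its inputs, as the abstract notes.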
A hybrid nonlinear programming method for design optimization
NASA Technical Reports Server (NTRS)
Rajan, S. D.
1986-01-01
Solutions to engineering design problems formulated as nonlinear programming (NLP) problems usually require the use of more than one optimization technique. Moreover, the interaction between the user (analysis/synthesis) program and the NLP system can lead to interface, scaling, or convergence problems. An NLP solution system is presented that seeks to solve these problems by providing a programming system to ease the user-system interface. A simple set of rules is used to select an optimization technique or to switch from one technique to another in an attempt to detect, diagnose, and solve some potential problems. Numerical examples involving finite element based optimal design of space trusses and rotor bearing systems are used to illustrate the applicability of the proposed methodology.
Minimax Techniques For Optimizing Non-Linear Image Algebra Transforms
NASA Astrophysics Data System (ADS)
Davidson, Jennifer L.
1989-08-01
It has been well established that the Air Force Armament Technical Laboratory (AFATL) image algebra is capable of expressing all linear transformations [7]. The embedding of the linear algebra in the image algebra makes this possible. In this paper we show a relation of the image algebra to another algebraic system called the minimax algebra. This system is used extensively in economics and operations research, but until now has not been investigated for applications to image processing. The relationship is exploited to develop new optimization methods for a class of non-linear image processing transforms. In particular, a general decomposition technique for templates in this non-linear domain is presented. Template decomposition techniques are an important tool in mapping algorithms efficiently to both sequential and massively parallel architectures.
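The minimax-algebra connection can be made concrete with grayscale dilation, the (max, +) counterpart of linear convolution: a larger template can decompose into the sequential application of smaller ones. This one-dimensional sketch uses invented template values and is only meant to show the decomposition property, not the paper's general decomposition technique.

```python
import numpy as np

# Grayscale dilation: g[i] = max_k ( f[i+k-r] + t[k] ), the (max,+) analogue
# of correlation, with -inf padding at the boundary.
def dilate(f, t):
    r = len(t) // 2
    fp = np.pad(f, r, constant_values=-np.inf)
    return np.array([np.max(fp[i:i + len(t)] + t) for i in range(len(f))])

t1 = np.array([0.0, 1.0, 0.0])   # small template 1 (invented values)
t2 = np.array([0.0, 2.0, 0.0])   # small template 2 (invented values)

# The (max,+) convolution of the two small templates gives the equivalent
# 5-element template: big[i] = max_j ( t1[j] + t2[i-j] ).
big = np.array([np.max([t1[j] + t2[i - j] for j in range(3) if 0 <= i - j < 3])
                for i in range(5)])

f = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0])
# Applying the small templates in sequence matches applying the big one.
print(np.allclose(dilate(dilate(f, t1), t2), dilate(f, big)))
```

Decompositions like this are what let a large nonlinear template be mapped onto hardware that only supports small neighborhood operations, sequentially or in parallel.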
Fitting Nonlinear Curves by use of Optimization Techniques
NASA Technical Reports Server (NTRS)
Hill, Scott A.
2005-01-01
MULTIVAR is a FORTRAN 77 computer program that fits one of the members of a set of six multivariable mathematical models (five of which are nonlinear) to a multivariable set of data. The inputs to MULTIVAR include the data for the independent and dependent variables plus the user's choice of one of the models, one of the three optimization engines, and convergence criteria. By use of the chosen optimization engine, MULTIVAR finds values for the parameters of the chosen model so as to minimize the sum of squares of the residuals. One of the optimization engines implements a routine, developed in 1982, that utilizes the Broyden-Fletcher-Goldfarb-Shanno (BFGS) variable-metric method for unconstrained minimization in conjunction with a one-dimensional search technique that finds the minimum of an unconstrained function by polynomial interpolation and extrapolation without first finding bounds on the solution. The second optimization engine is a faster and more robust commercially available code, denoted Design Optimization Tool, that also uses the BFGS method. The third optimization engine is a robust and relatively fast routine that implements the Levenberg-Marquardt algorithm.
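MULTIVAR itself is Fortran 77 and not reproduced here; the following sketch mimics its workflow in Python with SciPy, fitting one invented nonlinear model by minimizing the sum of squared residuals, once with Levenberg-Marquardt and once with the BFGS variable-metric method.

```python
import numpy as np
from scipy.optimize import least_squares, minimize

# Invented model and data: y = p0 * exp(p1 * x) + p2, plus a little noise.
rng = np.random.default_rng(3)
x = np.linspace(0, 2, 40)
y = 2.0 * np.exp(-1.3 * x) + 0.5 + rng.normal(0, 0.01, x.size)

def resid(p):
    return p[0] * np.exp(p[1] * x) + p[2] - y

p0 = np.array([1.0, -1.0, 0.0])

# Engine 1: Levenberg-Marquardt on the residual vector.
p_lm = least_squares(resid, p0, method="lm").x

# Engine 2: BFGS on the scalar sum of squares.
p_bfgs = minimize(lambda p: np.sum(resid(p) ** 2), p0, method="BFGS").x

print(np.allclose(p_lm, p_bfgs, atol=1e-3))   # both engines find the same fit
```

As in MULTIVAR, the model and the engine are independent choices: swapping the residual function changes the model, while the convergence criteria are passed to the chosen optimizer.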
An hp symplectic pseudospectral method for nonlinear optimal control
NASA Astrophysics Data System (ADS)
Peng, Haijun; Wang, Xinwei; Li, Mingwu; Chen, Biaosong
2017-01-01
An adaptive symplectic pseudospectral method based on the dual variational principle is proposed and successfully applied to solving nonlinear optimal control problems in this paper. The proposed method satisfies the first-order necessary conditions of continuous optimal control problems, and the symplectic property of the original continuous Hamiltonian system is preserved. The original optimal control problem is transformed into a set of nonlinear equations which can be solved easily by Newton-Raphson iterations, and the Jacobian matrix is found to be sparse and symmetric. The proposed method, on one hand, exhibits exponential convergence rates when the number of collocation points is increased with a fixed number of sub-intervals; on the other hand, it exhibits linear convergence rates when the number of sub-intervals is increased with a fixed number of collocation points. Furthermore, combined with the hp method based on the residual error of the dynamic constraints, the proposed method can achieve given precision in a few iterations. Five examples highlight the high precision and high computational efficiency of the proposed method.
A Nonlinear Optimal Control Design Using Narrowband Perturbation Feedback for Magnetostrictive Actuators
Oates, William S.; Zrostlik, Rick; Scott...
2010-07-01
Nonlinear optimal and narrowband feedback control designs are developed and experimentally implemented on a magnetostrictive Terfenol-D actuator ... utilizing narrowband feedback. A narrowband filter is implemented by treating the nonlinear and hysteretic magnetostrictive constitutive behavior as
In the fast lane: large-scale bacterial genome engineering.
Fehér, Tamás; Burland, Valerie; Pósfai, György
2012-07-31
The last few years have witnessed rapid progress in bacterial genome engineering. The long-established, standard ways of DNA synthesis, modification, transfer into living cells, and incorporation into genomes have given way to more effective, large-scale, robust genome modification protocols. Expansion of these engineering capabilities is due to several factors. Key advances include: (i) progress in oligonucleotide synthesis and in vitro and in vivo assembly methods, (ii) optimization of recombineering techniques, (iii) introduction of parallel, large-scale, combinatorial, and automated genome modification procedures, and (iv) rapid identification of the modifications by barcode-based analysis and sequencing. Combination of the brute force of these techniques with sophisticated bioinformatic design and modeling opens up new avenues for the analysis of gene functions and cellular network interactions, as well as for engineering more effective producer strains. This review presents a summary of recent technological advances in bacterial genome engineering.
Large-scale Advanced Propfan (LAP) program
NASA Technical Reports Server (NTRS)
Sagerser, D. A.; Ludemann, S. G.
1985-01-01
The propfan is an advanced propeller concept which maintains the high efficiencies traditionally associated with conventional propellers at the higher aircraft cruise speeds associated with jet transports. The large-scale advanced propfan (LAP) program extends the research done on 2 ft diameter propfan models to a 9 ft diameter article. The program includes design, fabrication, and testing of both an eight bladed, 9 ft diameter propfan, designated SR-7L, and a 2 ft diameter aeroelastically scaled model, SR-7A. The LAP program is complemented by the propfan test assessment (PTA) program, which takes the large-scale propfan and mates it with a gas generator and gearbox to form a propfan propulsion system and then flight tests this system on the wing of a Gulfstream 2 testbed aircraft.
Condition Monitoring of Large-Scale Facilities
NASA Technical Reports Server (NTRS)
Hall, David L.
1999-01-01
This document provides a summary of the research conducted for the NASA Ames Research Center under grant NAG2-1182 (Condition-Based Monitoring of Large-Scale Facilities). The information includes copies of view graphs presented at NASA Ames in the final Workshop (held during December of 1998), as well as a copy of a technical report provided to the COTR (Dr. Anne Patterson-Hine) subsequent to the workshop. The material describes the experimental design, collection of data, and analysis results associated with monitoring the health of large-scale facilities. In addition to this material, a copy of the Pennsylvania State University Applied Research Laboratory data fusion visual programming tool kit was also provided to NASA Ames researchers.
Large-Scale Aerosol Modeling and Analysis
2008-09-30
aerosol species up to six days in advance anywhere on the globe. NAAPS and COAMPS are particularly useful for forecasts of dust storms in areas...impact cloud processes globally. With increasing dust storms due to climate change and land use changes in desert regions, the impact of the...bacteria in large-scale dust storms is expected to significantly impact warm ice cloud formation, human health, and ecosystems globally. In Niemi et al
Large-Scale Visual Data Analysis
NASA Astrophysics Data System (ADS)
Johnson, Chris
2014-04-01
Modern high performance computers have speeds measured in petaflops and handle data set sizes measured in terabytes and petabytes. Although these machines offer enormous potential for solving very large-scale realistic computational problems, their effectiveness will hinge upon the ability of human experts to interact with their simulation results and extract useful information. One of the greatest scientific challenges of the 21st century is to effectively understand and make use of the vast amount of information being produced. Visual data analysis will be among our most important tools in helping to understand such large-scale information. Our research at the Scientific Computing and Imaging (SCI) Institute at the University of Utah has focused on innovative, scalable techniques for large-scale 3D visual data analysis. In this talk, I will present state-of-the-art visualization techniques, including scalable visualization algorithms and software, cluster-based visualization methods and innovative visualization techniques applied to problems in computational science, engineering, and medicine. I will conclude with an outline of future high performance visualization research challenges and opportunities.
Large-scale neuromorphic computing systems
NASA Astrophysics Data System (ADS)
Furber, Steve
2016-10-01
Neuromorphic computing covers a diverse range of approaches to information processing all of which demonstrate some degree of neurobiological inspiration that differentiates them from mainstream conventional computing systems. The philosophy behind neuromorphic computing has its origins in the seminal work carried out by Carver Mead at Caltech in the late 1980s. This early work influenced others to carry developments forward, and advances in VLSI technology supported steady growth in the scale and capability of neuromorphic devices. Recently, a number of large-scale neuromorphic projects have emerged, taking the approach to unprecedented scales and capabilities. These large-scale projects are associated with major new funding initiatives for brain-related research, creating a sense that the time and circumstances are right for progress in our understanding of information processing in the brain. In this review we present a brief history of neuromorphic engineering then focus on some of the principal current large-scale projects, their main features, how their approaches are complementary and distinct, their advantages and drawbacks, and highlight the sorts of capabilities that each can deliver to neural modellers.
Optimizing BAO measurements with non-linear transformations of the Lyman-α forest
Wang, Xinkang; Font-Ribera, Andreu; Seljak, Uroš E-mail: afont@lbl.gov
2015-04-01
We explore the effect of applying a non-linear transformation to the Lyman-α forest transmitted flux F = e^(−τ) and the ability of analytic models to predict the resulting clustering amplitude. Both the large-scale bias of the transformed field (signal) and the amplitude of small-scale fluctuations (noise) can be arbitrarily modified, but we were unable to find a transformation that significantly increases the signal-to-noise ratio on large scales using a Taylor expansion up to third order. We do, however, achieve a 33% improvement in signal to noise for the Gaussianized field in the transverse direction. We also explore an analytic model for the large-scale biasing of the Lyα forest, and present an extension of this model to describe the biasing of the transformed fields. Using hydrodynamic simulations we show that the model works best to describe the biasing with respect to velocity gradients, but is less successful in predicting the biasing with respect to large-scale density fluctuations, especially for very nonlinear transformations.
Phase retrieval with transverse translation diversity: a nonlinear optimization approach.
Guizar-Sicairos, Manuel; Fienup, James R
2008-05-12
We develop and test a nonlinear optimization algorithm for solving the problem of phase retrieval with transverse translation diversity, where the diverse far-field intensity measurements are taken after translating the object relative to a known illumination pattern. Analytical expressions for the gradient of a squared-error metric with respect to the object, illumination and translations allow joint optimization of the object and system parameters. This approach achieves superior reconstructions, with respect to a previously reported technique [H. M. L. Faulkner and J. M. Rodenburg, Phys. Rev. Lett. 93, 023903 (2004)], when the system parameters are inaccurately known or in the presence of noise. Applicability of this method for samples that are smaller than the illumination pattern is explored.
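The key ingredient described above, an analytical gradient of a squared-error metric with respect to the object, can be illustrated with a simplified 1-D sketch. This is a hypothetical toy (single far-field magnitude measurement, real-valued object, plain gradient descent), not the authors' joint object/system optimization; the array size and step size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
obj_true = rng.random(n)                 # toy 1-D "object"
meas = np.abs(np.fft.fft(obj_true))      # measured far-field magnitudes

def error_and_grad(f):
    """Squared-error metric E(f) = sum_k (|FFT(f)_k| - m_k)^2 and its gradient."""
    G = np.fft.fft(f)
    mag = np.maximum(np.abs(G), 1e-12)   # guard against division by zero
    resid = np.abs(G) - meas
    err = np.sum(resid**2)
    # Chain rule through the DFT for a real-valued object:
    # dE/df_j = 2 Re( sum_k resid_k * conj(G_k)/|G_k| * e^{-2*pi*i*jk/n} )
    grad = 2.0 * np.real(np.fft.fft(resid * np.conj(G) / mag))
    return err, grad

# Plain gradient descent from a perturbed starting guess
f = obj_true + 0.2 * rng.standard_normal(n)
errs = []
for _ in range(200):
    e, g = error_and_grad(f)
    errs.append(e)
    f -= 1e-3 * g
```

Having a closed-form gradient is what allows the object and the system parameters (illumination, translations) to be optimized jointly with standard nonlinear solvers instead of iterative projection schemes.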
Safe microburst penetration techniques: A deterministic, nonlinear, optimal control approach
NASA Technical Reports Server (NTRS)
Psiaki, Mark L.
1987-01-01
A relatively large amount of computer time was used for the calculation of an optimal trajectory, but it is subject to reduction with moderate effort. The Deterministic, Nonlinear, Optimal Control algorithm yielded excellent aircraft performance in trajectory tracking for the given microburst. It did so by varying the angle of attack to counteract the lift effects of microburst-induced airspeed variations. Throttle saturation and aerodynamic stall limits were not a problem for the case considered, proving that the aircraft's performance capabilities were not violated by the given wind field. All closed-loop control laws previously considered performed very poorly in comparison, and therefore do not come near to taking full advantage of aircraft performance.
A forward method for optimal stochastic nonlinear and adaptive control
NASA Technical Reports Server (NTRS)
Bayard, David S.
1988-01-01
A computational approach is taken to solve the optimal nonlinear stochastic control problem. The approach is to systematically solve the stochastic dynamic programming equations forward in time, using a nested stochastic approximation technique. Although computationally intensive, this provides a straightforward numerical solution for this class of problems and provides an alternative to the usual dimensionality problem associated with solving the dynamic programming equations backward in time. It is shown that the cost degrades monotonically as the complexity of the algorithm is reduced. This provides a strategy for suboptimal control with clear performance/computation tradeoffs. A numerical study focusing on a generic optimal stochastic adaptive control example is included to demonstrate the feasibility of the method.
NASA Astrophysics Data System (ADS)
Bhat, Pallavi; Ebrahimi, Fatima; Blackman, Eric G.
2016-10-01
We study the dynamo generation (exponential growth) of large-scale (planar averaged) fields in unstratified shearing box simulations of the magnetorotational instability (MRI). In contrast to previous studies restricted to horizontal (x-y) averaging, we also demonstrate the presence of large-scale fields when vertical (y-z) averaging is employed instead. By computing space-time planar averaged fields and power spectra, we find large-scale dynamo action in the early MRI growth phase - a previously unidentified feature. Non-axisymmetric linear MRI modes with low horizontal wavenumbers and vertical wavenumbers near that of expected maximal growth, amplify the large-scale fields exponentially before turbulence and high wavenumber fluctuations arise. Thus the large-scale dynamo requires only linear fluctuations but not non-linear turbulence (as defined by mode-mode coupling). Vertical averaging also allows for monitoring the evolution of the large-scale vertical field and we find that a feedback from horizontal low wavenumber MRI modes provides a clue as to why the large-scale vertical field sustains against turbulent diffusion in the non-linear saturation regime. We compute the terms in the mean field equations to identify the individual contributions to large-scale field growth for both types of averaging. The large-scale fields obtained from vertical averaging are found to compare well with global simulations and quasi-linear analytical analysis from a previous study by Ebrahimi & Blackman. We discuss the potential implications of these new results for understanding the large-scale MRI dynamo saturation and turbulence.
Nogaret, Alain; Meliza, C. Daniel; Margoliash, Daniel; Abarbanel, Henry D. I.
2016-01-01
We report on the construction of neuron models by assimilating electrophysiological data with large-scale constrained nonlinear optimization. The method implements interior point line parameter search to determine parameters from the responses to intracellular current injections of zebra finch HVC neurons. We incorporated these parameters into a nine ionic channel conductance model to obtain completed models which we then use to predict the state of the neuron under arbitrary current stimulation. Each model was validated by successfully predicting the dynamics of the membrane potential induced by 20–50 different current protocols. The dispersion of parameters extracted from different assimilation windows was studied. Differences in constraints from current protocols, stochastic variability in neuron output, and noise behave as a residual temperature which broadens the global minimum of the objective function to an ellipsoid domain whose principal axes follow an exponentially decaying distribution. The maximum likelihood expectation of extracted parameters was found to provide an excellent approximation of the global minimum and yields highly consistent kinetics for both neurons studied. Large scale assimilation absorbs the intrinsic variability of electrophysiological data over wide assimilation windows. It builds models in an automatic manner treating all data as equal quantities and requiring minimal additional insight. PMID:27605157
Handling inequality constraints in continuous nonlinear global optimization
Wang, Tao; Wah, B.W.
1996-12-31
In this paper, we present a new method to handle inequality constraints and apply it in NOVEL (Nonlinear Optimization via External Lead), a system we have developed for solving constrained continuous nonlinear optimization problems. In general, in applying Lagrange-multiplier methods to solve these problems, inequality constraints are first converted into equivalent equality constraints. One such conversion method adds a slack variable to each inequality constraint in order to convert it into an equality constraint. The disadvantage of this conversion is that when the search is inside a feasible region, some satisfied constraints may still carry a non-zero weight in the Lagrangian function, leading to possible oscillations and divergence when a local optimum lies on the boundary of a feasible region. We propose a new conversion method called the MaxQ method such that all satisfied constraints in a feasible region always carry zero weight in the Lagrangian function; hence, minimizing the Lagrangian function in a feasible region always leads to local minima of the objective function. We demonstrate that oscillations do not happen in our method. We also propose methods to speed up convergence when a local optimum lies on the boundary of a feasible region. Finally, we show improved experimental results in applying our proposed method in NOVEL on some existing benchmark problems and compare them to those obtained by applying the method based on slack variables.
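The contrast between the two conversions can be sketched on a toy problem. The objective, constraint, and the quadratic exponent below are illustrative assumptions; the paper's exact MaxQ formulation and benchmarks are not reproduced here.

```python
import numpy as np

# Toy problem: minimize f(x) subject to one inequality constraint g(x) <= 0.
f = lambda x: (x[0] - 2.0)**2 + (x[1] - 1.0)**2
g = lambda x: x[0] + x[1] - 2.0              # feasible region: x0 + x1 <= 2

def lagrangian_slack(x, s, lam):
    # Slack-variable conversion: g(x) + s^2 = 0. Inside the feasible region
    # the term lam * (g(x) + s^2) is generally non-zero, which is the source
    # of the oscillations discussed in the abstract.
    return f(x) + lam * (g(x) + s**2)

def lagrangian_maxq(x, lam, q=2):
    # MaxQ-style conversion: max(0, g(x))**q vanishes identically wherever
    # the constraint is satisfied, so a satisfied constraint carries zero
    # weight and minimization inside the feasible region reduces to f alone.
    return f(x) + lam * max(0.0, g(x))**q

x_feasible = np.array([0.5, 0.5])            # strictly feasible: g = -1
```

At `x_feasible` the MaxQ Lagrangian equals the bare objective, while the slack-variable Lagrangian generally does not, which is exactly the property the method exploits.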
Time-optimal quantum control of nonlinear two-level systems
NASA Astrophysics Data System (ADS)
Chen, Xi; Ban, Yue; Hegerfeldt, Gerhard C.
2016-08-01
Nonlinear two-level Landau-Zener type equations for systems with relevance for Bose-Einstein condensates and nonlinear optics are considered and the minimal time Tmin to drive an initial state to a given target state is investigated. Surprisingly, the nonlinearity may be canceled by a time-optimal unconstrained driving and Tmin becomes independent of the nonlinearity. For constrained and unconstrained driving explicit expressions are derived for Tmin, the optimal driving, and the protocol.
Large Scale Bacterial Colony Screening of Diversified FRET Biosensors
Litzlbauer, Julia; Schifferer, Martina; Ng, David; Fabritius, Arne; Thestrup, Thomas; Griesbeck, Oliver
2015-01-01
Biosensors based on Förster Resonance Energy Transfer (FRET) between fluorescent protein mutants have started to revolutionize physiology and biochemistry. However, many types of FRET biosensors show relatively small FRET changes, making measurements with these probes challenging when used under sub-optimal experimental conditions. Thus, a major effort in the field currently lies in designing new optimization strategies for these types of sensors. Here we describe procedures for optimizing FRET changes by large scale screening of mutant biosensor libraries in bacterial colonies. We describe optimization of biosensor expression, permeabilization of bacteria, software tools for analysis, and screening conditions. The procedures reported here may help in improving FRET changes in multiple suitable classes of biosensors. PMID:26061878
Reliability assessment for components of large scale photovoltaic systems
NASA Astrophysics Data System (ADS)
Ahadi, Amir; Ghadimi, Noradin; Mirabbasi, Davar
2014-10-01
Photovoltaic (PV) systems have significantly shifted from independent power generation systems to large-scale grid-connected generation systems in recent years. The power output of PV systems is affected by the reliability of various components in the system. This study proposes an analytical approach to evaluate the reliability of large-scale, grid-connected PV systems. The fault tree method with an exponential probability distribution function is used to analyze the components of large-scale PV systems. The system is considered in the various sequential and parallel fault combinations in order to find all realistic ways in which the top or undesired events can occur. Additionally, it can identify areas that planned maintenance should focus on. By monitoring the critical components of a PV system, it is possible not only to improve the reliability of the system, but also to optimize the maintenance costs. The latter is achieved by informing the operators about the status of system components. The flexibility of the approach in monitoring applications also allows it to be used to ensure secure operation of the system. The implementation demonstrates that the proposed method is effective and efficient and can conveniently incorporate more system maintenance plans and diagnostic strategies.
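The building blocks of such an analysis, exponential component reliability combined through series (any failure fails the system) and parallel (redundant) structures, can be sketched as follows. The subsystem layout and failure rates below are hypothetical illustrations, not values from the study.

```python
import math

def reliability(rate, t):
    """Exponential reliability R(t) = exp(-lambda * t) for failure rate lambda."""
    return math.exp(-rate * t)

def series(rs):
    """Series structure: every component must survive (OR gate on failures)."""
    out = 1.0
    for r in rs:
        out *= r
    return out

def parallel(rs):
    """Parallel (redundant) structure: fails only if all components fail (AND gate)."""
    out = 1.0
    for r in rs:
        out *= (1.0 - r)
    return 1.0 - out

# Hypothetical PV subsystem: two redundant inverters in series with a transformer.
t = 8760.0                        # one year, in hours
inv = reliability(5e-5, t)        # assumed inverter failure rate, per hour
xfmr = reliability(1e-5, t)       # assumed transformer failure rate, per hour
r_sys = series([parallel([inv, inv]), xfmr])
```

Redundancy raises the inverter-stage reliability above that of a single inverter, while the series transformer caps the overall system reliability, which is the kind of structural insight the fault tree exposes.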
Experimental Simulations of Large-Scale Collisions
NASA Technical Reports Server (NTRS)
Housen, Kevin R.
2002-01-01
This report summarizes research on the effects of target porosity on the mechanics of impact cratering. Impact experiments conducted on a centrifuge provide direct simulations of large-scale cratering on porous asteroids. The experiments show that large craters in porous materials form mostly by compaction, with essentially no deposition of material into the ejecta blanket that is a signature of cratering in less-porous materials. The ratio of ejecta mass to crater mass is shown to decrease with increasing crater size or target porosity. These results are consistent with the observation that large closely-packed craters on asteroid Mathilde appear to have formed without degradation to earlier craters.
Large-Scale PV Integration Study
Lu, Shuai; Etingov, Pavel V.; Diao, Ruisheng; Ma, Jian; Samaan, Nader A.; Makarov, Yuri V.; Guo, Xinxin; Hafen, Ryan P.; Jin, Chunlian; Kirkham, Harold; Shlatz, Eugene; Frantzis, Lisa; McClive, Timothy; Karlson, Gregory; Acharya, Dhruv; Ellis, Abraham; Stein, Joshua; Hansen, Clifford; Chadliev, Vladimir; Smart, Michael; Salgo, Richard; Sorensen, Rahn; Allen, Barbara; Idelchik, Boris
2011-07-29
This research effort evaluates the impact of large-scale photovoltaic (PV) and distributed generation (DG) output on NV Energy’s electric grid system in southern Nevada. It analyzes the ability of NV Energy’s generation to accommodate increasing amounts of utility-scale PV and DG, and the resulting cost of integrating variable renewable resources. The study was jointly funded by the United States Department of Energy and NV Energy, and conducted by a project team comprised of industry experts and research scientists from Navigant Consulting Inc., Sandia National Laboratories, Pacific Northwest National Laboratory and NV Energy.
NASA Astrophysics Data System (ADS)
Nigro, G.; Pongkitiwanichakul, P.; Cattaneo, F.; Tobias, S. M.
2017-01-01
We consider kinematic dynamo action in a sheared helical flow at moderate to high values of the magnetic Reynolds number (Rm). We find exponentially growing solutions which, for large enough shear, take the form of a coherent part embedded in incoherent fluctuations. We argue that at large Rm large-scale dynamo action should be identified by the presence of structures coherent in time, rather than those at large spatial scales. We further argue that although the growth rate is determined by small-scale processes, the period of the coherent structures is set by mean-field considerations.
Neutrinos and large-scale structure
Eisenstein, Daniel J.
2015-07-15
I review the use of cosmological large-scale structure to measure properties of neutrinos and other relic populations of light relativistic particles. With experiments to measure the anisotropies of the cosmic microwave background and the clustering of matter at low redshift, we now have securely measured a relativistic background with density appropriate to the cosmic neutrino background. Our limits on the mass of the neutrino continue to shrink. Experiments coming in the next decade will greatly improve the available precision on searches for the energy density of novel relativistic backgrounds and the mass of neutrinos.
Large-scale planar lightwave circuits
NASA Astrophysics Data System (ADS)
Bidnyk, Serge; Zhang, Hua; Pearson, Matt; Balakrishnan, Ashok
2011-01-01
By leveraging advanced wafer processing and flip-chip bonding techniques, we have succeeded in hybrid integrating a myriad of active optical components, including photodetectors and laser diodes, with our planar lightwave circuit (PLC) platform. We have combined hybrid integration of active components with monolithic integration of other critical functions, such as diffraction gratings, on-chip mirrors, mode-converters, and thermo-optic elements. Further process development has led to the integration of polarization controlling functionality. Most recently, all these technological advancements have been combined to create large-scale planar lightwave circuits that comprise hundreds of optical elements integrated on chips less than a square inch in size.
Large scale phononic metamaterials for seismic isolation
Aravantinos-Zafiris, N.; Sigalas, M. M.
2015-08-14
In this work, we numerically examine structures that could be characterized as large-scale phononic metamaterials. These novel structures could have band gaps in the frequency spectrum of seismic waves when their dimensions are chosen appropriately, suggesting that they are serious candidates for seismic isolation structures. Different, easy-to-fabricate structures were examined, made from construction materials such as concrete and steel. The well-known finite difference time domain method is used in our calculations in order to calculate the band structures of the proposed metamaterials.
Large-scale Heterogeneous Network Data Analysis
2012-07-31
Data for Multi-Player Influence Maximization on Social Networks.” KDD 2012 (Demo). Po-Tzu Chang , Yen-Chieh Huang, Cheng-Lun Yang, Shou-De Lin, Pu...Jen Cheng. “Learning-Based Time-Sensitive Re-Ranking for Web Search.” SIGIR 2012 (poster) Hung -Che Lai, Cheng-Te Li, Yi-Chen Lo, and Shou-De Lin...Exploiting and Evaluating MapReduce for Large-Scale Graph Mining.” ASONAM 2012 (Full, 16% acceptance ratio). Hsun-Ping Hsieh , Cheng-Te Li, and Shou
Local gravity and large-scale structure
NASA Technical Reports Server (NTRS)
Juszkiewicz, Roman; Vittorio, Nicola; Wyse, Rosemary F. G.
1990-01-01
The magnitude and direction of the observed dipole anisotropy of the galaxy distribution can in principle constrain the amount of large-scale power present in the spectrum of primordial density fluctuations. This paper confronts the data, provided by a recent redshift survey of galaxies detected by the IRAS satellite, with the predictions of two cosmological models with very different levels of large-scale power: the biased Cold Dark Matter dominated model (CDM) and a baryon-dominated model (BDM) with isocurvature initial conditions. Model predictions are investigated for the Local Group peculiar velocity, v(R), induced by mass inhomogeneities distributed out to a given radius, R, for R less than about 10,000 km/s. Several convergence measures for v(R) are developed, which can become powerful cosmological tests when deep enough samples become available. For the present data sets, the CDM and BDM predictions are indistinguishable at the 2 sigma level and both are consistent with observations. A promising discriminant between cosmological models is the misalignment angle between v(R) and the apex of the dipole anisotropy of the microwave background.
Large-scale Globally Propagating Coronal Waves.
Warmuth, Alexander
Large-scale, globally propagating wave-like disturbances have been observed in the solar chromosphere and by inference in the corona since the 1960s. However, detailed analysis of these phenomena has only been conducted since the late 1990s. This was prompted by the availability of high-cadence coronal imaging data from numerous space-based instruments, which routinely show spectacular globally propagating bright fronts. Coronal waves, as these perturbations are usually referred to, have now been observed in a wide range of spectral channels, yielding a wealth of information. Many findings have supported the "classical" interpretation of the disturbances: fast-mode MHD waves or shocks that are propagating in the solar corona. However, observations that seemed inconsistent with this picture have stimulated the development of alternative models in which "pseudo waves" are generated by magnetic reconfiguration in the framework of an expanding coronal mass ejection. This has resulted in a vigorous debate on the physical nature of these disturbances. This review focuses on demonstrating how the numerous observational findings of the last one and a half decades can be used to constrain our models of large-scale coronal waves, and how a coherent physical understanding of these disturbances is finally emerging.
Optimal spatiotemporal reduced order modeling for nonlinear dynamical systems
NASA Astrophysics Data System (ADS)
LaBryer, Allen
Proposed in this dissertation is a novel reduced order modeling (ROM) framework called optimal spatiotemporal reduced order modeling (OPSTROM) for nonlinear dynamical systems. The OPSTROM approach is a data-driven methodology for the synthesis of multiscale reduced order models (ROMs) which can be used to enhance the efficiency and reliability of under-resolved simulations for nonlinear dynamical systems. In the context of nonlinear continuum dynamics, the OPSTROM approach relies on the concept of embedding subgrid-scale models into the governing equations in order to account for the effects due to unresolved spatial and temporal scales. Traditional ROMs neglect these effects, whereas most other multiscale ROMs account for these effects in ways that are inconsistent with the underlying spatiotemporal statistical structure of the nonlinear dynamical system. The OPSTROM framework presented in this dissertation begins with a general system of partial differential equations, which are modified for an under-resolved simulation in space and time with an arbitrary discretization scheme. Basic filtering concepts are used to demonstrate the manner in which residual terms, representing subgrid-scale dynamics, arise with a coarse computational grid. Models for these residual terms are then developed by accounting for the underlying spatiotemporal statistical structure in a consistent manner. These subgrid-scale models are designed to provide closure by accounting for the dynamic interactions between spatiotemporal macroscales and microscales which are otherwise neglected in a ROM. For a given resolution, the predictions obtained with the modified system of equations are optimal (in a mean-square sense) as the subgrid-scale models are based upon principles of mean-square error minimization, conditional expectations and stochastic estimation. Methods are suggested for efficient model construction, appraisal, error measure, and implementation with a couple of well-known time
Nonlinear Burn Control and Operating Point Optimization in ITER
NASA Astrophysics Data System (ADS)
Boyer, Mark; Schuster, Eugenio
2013-10-01
Control of the fusion power through regulation of the plasma density and temperature will be essential for achieving and maintaining desired operating points in fusion reactors and burning plasma experiments like ITER. In this work, a volume averaged model for the evolution of the density of energy, deuterium and tritium fuel ions, alpha-particles, and impurity ions is used to synthesize a multi-input multi-output nonlinear feedback controller for stabilizing and modulating the burn condition. Adaptive control techniques are used to account for uncertainty in model parameters, including particle confinement times and recycling rates. The control approach makes use of the different possible methods for altering the fusion power, including adjusting the temperature through auxiliary heating, modulating the density and isotopic mix through fueling, and altering the impurity density through impurity injection. Furthermore, a model-based optimization scheme is proposed to drive the system as close as possible to desired fusion power and temperature references. Constraints are considered in the optimization scheme to ensure that, for example, density and beta limits are avoided, and that optimal operation is achieved even when actuators reach saturation. Supported by the NSF CAREER award program (ECCS-0645086).
Statistical analysis of large-scale neuronal recording data
Reed, Jamie L.; Kaas, Jon H.
2010-01-01
Relating stimulus properties to the response properties of individual neurons and neuronal networks is a major goal of sensory research. Many investigators implant electrode arrays in multiple brain areas and record from chronically implanted electrodes over time to answer a variety of questions. Technical challenges related to analyzing large-scale neuronal recording data are not trivial. Several analysis methods traditionally used by neurophysiologists do not account for dependencies in the data that are inherent in multi-electrode recordings. In addition, when neurophysiological data are not best modeled by the normal distribution and when the variables of interest may not be linearly related, extensions of the linear modeling techniques are recommended. A variety of methods exist to analyze correlated data, even when data are not normally distributed and the relationships are nonlinear. Here we review expansions of the Generalized Linear Model designed to address these data properties. Such methods are used in other research fields, and the application to large-scale neuronal recording data will enable investigators to determine the variable properties that convincingly contribute to the variances in the observed neuronal measures. Standard measures of neuron properties such as response magnitudes can be analyzed using these methods, and measures of neuronal network activity such as spike timing correlations can be analyzed as well. We have done just that in recordings from 100-electrode arrays implanted in the primary somatosensory cortex of owl monkeys. Here we illustrate how one example method, Generalized Estimating Equations analysis, is a useful method to apply to large-scale neuronal recordings. PMID:20472395
NASA Technical Reports Server (NTRS)
Storaasli, Olaf O. (Editor); Carmona, Edward A. (Editor)
1991-01-01
Recent advances in parallel methods and algorithms integrated into large-scale codes are presented. Consideration is given to problem decomposition (substructuring), efficient matrix solution algorithms for shared memory architectures, dynamic and transient analysis algorithms for shared memory architectures, and algorithms for distributed and massively parallel architectures. Particular attention is given to partitioning of unstructured problems for parallel processing, parallel-vector computation for linear structural analysis and nonlinear unconstrained optimization problems, a parallel-vector equation solver for unsymmetric matrices on supercomputers, parallel nonlinear finite element dynamic response, multigrid algorithms for solving structural mechanics problems on supercomputers, structural analysis on massively parallel computers, explicit finite element methods with contact-impact on SIMD computers, and the impact of mapping and sparsity on parallelized finite element method modules.
Optimal operating points of oscillators using nonlinear resonators.
Kenig, Eyal; Cross, M C; Villanueva, L G; Karabalin, R B; Matheny, M H; Lifshitz, Ron; Roukes, M L
2012-11-01
We demonstrate an analytical method for calculating the phase sensitivity of a class of oscillators whose phase does not affect the time evolution of the other dynamic variables. We show that such oscillators possess the possibility for complete phase noise elimination. We apply the method to a feedback oscillator which employs a high Q weakly nonlinear resonator and provide explicit parameter values for which the feedback phase noise is completely eliminated and others for which there is no amplitude-phase noise conversion. We then establish an operational mode of the oscillator which optimizes its performance by diminishing the feedback noise in both quadratures, thermal noise, and quality factor fluctuations. We also study the spectrum of the oscillator and provide specific results for the case of 1/f noise sources.
Engineering management of large scale systems
NASA Technical Reports Server (NTRS)
Sanders, Serita; Gill, Tepper L.; Paul, Arthur S.
1989-01-01
The organization of high-technology and engineering problem solving has given rise to an emerging concept. Reasoning principles for integrating traditional engineering problem solving with system theory, management sciences, behavioral decision theory, and planning and design approaches can be incorporated into a methodological approach to solving problems with a long range perspective. Long range planning has a great potential to improve productivity by using a systematic and organized approach. Thus, efficiency and cost effectiveness are the driving forces in promoting the organization of engineering problems. Aspects of systems engineering that provide an understanding of management of large scale systems are broadly covered here. Due to the focus and application of the research, other significant factors (e.g., human behavior, decision making, etc.) are not emphasized but are considered.
Large-scale parametric survival analysis.
Mittal, Sushil; Madigan, David; Cheng, Jerry Q; Burd, Randall S
2013-10-15
Survival analysis has been a topic of active statistical research in the past few decades, with applications spread across several areas. Traditional applications usually consider data with only a small number of predictors and a few hundred or thousand observations. Recent advances in data acquisition techniques and computation power have led to considerable interest in analyzing very-high-dimensional data where the number of predictor variables and the number of observations range between 10^4 and 10^6. In this paper, we present a tool for performing large-scale regularized parametric survival analysis using a variant of the cyclic coordinate descent method. Through our experiments on two real data sets, we show that application of regularized models to high-dimensional data avoids overfitting and can provide improved predictive performance and calibration over corresponding low-dimensional models.
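As a sketch of the core idea, the following toy fits an L2-regularized exponential (parametric) survival model by cyclic coordinate descent, taking one Newton step per coordinate per sweep. This is an illustrative stand-in under simplified assumptions (exponential hazards, two covariates, simulated censoring), not the authors' tool:

```python
import math
import random

random.seed(1)

# Toy data: intercept plus one binary covariate; true coefficients (-1, 1).
n = 400
X = [[1.0, float(random.random() < 0.5)] for _ in range(n)]
true_beta = [-1.0, 1.0]
times, events = [], []
for xi in X:
    rate = math.exp(sum(b * v for b, v in zip(true_beta, xi)))
    t = random.expovariate(rate)
    c = random.expovariate(0.1)            # light random censoring
    times.append(min(t, c))
    events.append(1.0 if t <= c else 0.0)

def fit_exponential_l2(X, times, events, alpha=0.01, sweeps=200):
    """Cyclic coordinate descent for an L2-regularized exponential
    survival model: log-lik = sum(d*eta - t*exp(eta)) - (alpha/2)|beta|^2."""
    p = len(X[0])
    beta = [0.0] * p
    for _ in range(sweeps):
        for j in range(p):
            lam = [math.exp(sum(b * v for b, v in zip(beta, xi))) for xi in X]
            g = sum((d - t * l) * xi[j]
                    for xi, t, d, l in zip(X, times, events, lam)) - alpha * beta[j]
            h = sum(t * l * xi[j] ** 2 for xi, t, l in zip(X, times, lam)) + alpha
            beta[j] += g / h   # one Newton step in coordinate j
    return beta

beta = fit_exponential_l2(X, times, events)
print(beta)   # should be near (-1, 1)
```

Each coordinate update touches only one parameter at a time, which is what lets the approach scale to very high dimensions with sparse data, as the paper exploits.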
Primer design for large scale sequencing.
Haas, S; Vingron, M; Poustka, A; Wiemann, S
1998-06-15
We have developed PRIDE, a primer design program that automatically designs primers in single contigs or whole sequencing projects to extend the already known sequence and to double strand single-stranded regions. The program is fully integrated into the Staden package (GAP4) and accessible with a graphical user interface. PRIDE uses a fuzzy logic-based system to calculate primer qualities. The computational performance of PRIDE is enhanced by using suffix trees to store the huge amount of data being produced. A test set of 110 sequencing primers and 11 PCR primer pairs has been designed on genomic templates, cDNAs and sequences containing repetitive elements to analyze PRIDE's success rate. The high performance of PRIDE, combined with its minimal requirement of user interaction and its fast algorithm, make this program useful for the large scale design of primers, especially in large sequencing projects.
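PRIDE's fuzzy-logic rules are not given in the abstract; purely as an illustration of primer quality scoring, a toy score built from the Wallace melting-temperature rule and GC content might look like this (the target Tm and GC window are arbitrary assumptions, not PRIDE's values):

```python
def wallace_tm(primer: str) -> int:
    """Wallace rule: Tm = 2(A+T) + 4(G+C), a rough estimate for short oligos."""
    p = primer.upper()
    return 2 * (p.count("A") + p.count("T")) + 4 * (p.count("G") + p.count("C"))

def gc_fraction(primer: str) -> float:
    p = primer.upper()
    return (p.count("G") + p.count("C")) / len(p)

def primer_quality(primer: str, tm_target=60, gc_low=0.4, gc_high=0.6) -> float:
    """Toy [0, 1] score: penalize Tm distance from a target and reject GC
    content outside 40-60%. (A stand-in for fuzzy scoring, not PRIDE's rules.)"""
    tm_score = max(0.0, 1.0 - abs(wallace_tm(primer) - tm_target) / 20.0)
    gc = gc_fraction(primer)
    gc_score = 1.0 if gc_low <= gc <= gc_high else 0.0
    return tm_score * gc_score

print(wallace_tm("ATGC"))                        # 2*2 + 4*2 = 12
print(primer_quality("ACGTGCTAGCTAGGCTAACG"))    # 0.9
```

A fuzzy-logic system like PRIDE's would replace the hard GC cutoff with graded membership functions and combine many more criteria (self-complementarity, 3' stability, template uniqueness).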
Large scale preparation of pure phycobiliproteins.
Padgett, M P; Krogmann, D W
1987-01-01
This paper describes simple procedures for the purification of large amounts of phycocyanin and allophycocyanin from the cyanobacterium Microcystis aeruginosa. A homogeneous natural bloom of this organism provided hundreds of kilograms of cells. Large samples of cells were broken by freezing and thawing. Repeated extraction of the broken cells with distilled water released phycocyanin first, then allophycocyanin, providing supporting evidence for the current models of phycobilisome structure. The very low ionic strength of the aqueous extracts allowed allophycocyanin release in a particulate form so that this protein could be easily concentrated by centrifugation. Other proteins in the extract were enriched and concentrated by large scale membrane filtration. The biliproteins were purified to homogeneity by chromatography on DEAE cellulose. Purity was established by HPLC and by N-terminal amino acid sequence analysis. The proteins were examined for stability at various pHs and exposures to visible light.
Large-Scale Organization of Glycosylation Networks
NASA Astrophysics Data System (ADS)
Kim, Pan-Jun; Lee, Dong-Yup; Jeong, Hawoong
2009-03-01
Glycosylation is a highly complex process to produce a diverse repertoire of cellular glycans that are frequently attached to proteins and lipids. Glycans participate in fundamental biological processes including molecular trafficking and clearance, cell proliferation and apoptosis, developmental biology, immune response, and pathogenesis. N-linked glycans found on proteins are formed by sequential attachments of monosaccharides with the help of a relatively small number of enzymes. Many of these enzymes can accept multiple N-linked glycans as substrates, thus generating a large number of glycan intermediates and their intermingled pathways. Motivated by the quantitative methods developed in complex network research, we investigate the large-scale organization of such N-glycosylation pathways in a mammalian cell. The uncovered results give experimentally testable predictions for the glycosylation process, and can be applied to the engineering of therapeutic glycoproteins.
Efficient, large scale separation of coal macerals
Dyrkacz, G.R.; Bloomquist, C.A.A.
1988-01-01
The authors believe that the separation of macerals by continuous flow centrifugation offers a simple technique for the large scale separation of macerals. With relatively little cost (approximately $10K), it provides an opportunity for obtaining quite pure maceral fractions. Although they have not completely worked out all the nuances of this separation system, they believe that the problems they have indicated can be minimized to pose only minor inconvenience. It cannot be said that this system completely bypasses the disagreeable tedium or time involved in separating macerals, nor will it by itself overcome the mental inertia required to make maceral separation an accepted necessary fact in fundamental coal science. However, they find their particular brand of continuous flow centrifugation is considerably faster than sink/float separation, can provide a good quality product with even one separation cycle, and permits the handling of more material than a conventional sink/float centrifuge separation.
Large scale cryogenic fluid systems testing
NASA Technical Reports Server (NTRS)
1992-01-01
NASA Lewis Research Center's Cryogenic Fluid Systems Branch (CFSB) within the Space Propulsion Technology Division (SPTD) has the ultimate goal of enabling the long term storage and in-space fueling/resupply operations for spacecraft and reusable vehicles in support of space exploration. Using analytical modeling, ground based testing, and on-orbit experimentation, the CFSB is studying three primary categories of fluid technology: storage, supply, and transfer. The CFSB is also investigating fluid handling, advanced instrumentation, and tank structures and materials. Ground based testing of large-scale systems is done using liquid hydrogen as a test fluid at the Cryogenic Propellant Tank Facility (K-site) at Lewis' Plum Brook Station in Sandusky, Ohio. A general overview of tests involving liquid transfer, thermal control, pressure control, and pressurization is given.
Large Scale Quantum Simulations of Nuclear Pasta
NASA Astrophysics Data System (ADS)
Fattoyev, Farrukh J.; Horowitz, Charles J.; Schuetrumpf, Bastian
2016-03-01
Complex and exotic nuclear geometries collectively referred to as "nuclear pasta" are expected to naturally exist in the crust of neutron stars and in supernovae matter. Using a set of self-consistent microscopic nuclear energy density functionals we present the first results of large scale quantum simulations of pasta phases at baryon densities 0.03 < ρ < 0.10 fm-3, proton fractions 0.05
Large scale study of tooth enamel
Bodart, F.; Deconninck, G.; Martin, M.Th.
1981-04-01
Human tooth enamel contains traces of foreign elements. The presence of these elements is related to the history and the environment of the human body and can be considered as the signature of perturbations which occur during the growth of a tooth. A map of the distribution of these traces on a large scale sample of the population will constitute a reference for further investigations of environmental effects. One hundred eighty samples of teeth were first analysed using PIXE, backscattering and nuclear reaction techniques. The results were analysed using statistical methods. Correlations between O, F, Na, P, Ca, Mn, Fe, Cu, Zn, Pb and Sr were observed and cluster analysis was in progress. The techniques described in the present work have been developed in order to establish a method for the exploration of very large samples of the Belgian population.
Modeling the Internet's large-scale topology
Yook, Soon-Hyung; Jeong, Hawoong; Barabási, Albert-László
2002-01-01
Network generators that capture the Internet's large-scale topology are crucial for the development of efficient routing protocols and modeling Internet traffic. Our ability to design realistic generators is limited by the incomplete understanding of the fundamental driving forces that affect the Internet's evolution. By combining several independent databases capturing the time evolution, topology, and physical layout of the Internet, we identify the universal mechanisms that shape the Internet's router and autonomous system level topology. We find that the physical layout of nodes forms a fractal set, determined by population density patterns around the globe. The placement of links is driven by competition between preferential attachment and linear distance dependence, a marked departure from the currently used exponential laws. The universal parameters that we extract significantly restrict the class of potentially correct Internet models and indicate that the networks created by all available topology generators are fundamentally different from the current Internet. PMID:12368484
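The competition the authors describe, preferential attachment versus linear distance dependence, can be sketched with a toy generator in which each new node links to an existing node with probability proportional to degree/distance. This is a schematic illustration with arbitrary node placement, not the paper's calibrated model:

```python
import math
import random

random.seed(2)

def grow_network(n):
    """Toy growth model: nodes land at random in the unit square; each new
    node attaches to an existing node with probability proportional to
    (degree + 1) / distance, so well-connected *and* nearby nodes win."""
    pos = [(random.random(), random.random())]
    degree = [0]
    edges = []
    for i in range(1, n):
        pos.append((random.random(), random.random()))
        weights = []
        for j in range(i):
            d = math.dist(pos[i], pos[j]) + 1e-9
            weights.append((degree[j] + 1) / d)   # +1 so isolated nodes attract
        target = random.choices(range(i), weights=weights)[0]
        edges.append((i, target))
        degree.append(1)
        degree[target] += 1
    return edges, degree

edges, degree = grow_network(200)
print(len(edges), max(degree))   # tree with 199 edges; hubs emerge
```

Replacing the 1/d factor with exp(-d/d0) would recover the exponential distance laws the paper argues against; the linear form above is the departure it reports.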
Large Scale Deformation Monitoring and Atmospheric Removal in Mexico City
NASA Astrophysics Data System (ADS)
McCardle, Adrian; McCardel, Jim; Ramos, Fernanda Ledo G.
2010-03-01
Large scale, accurate measurement of non-linear ground movement is required for monitoring applications pertaining to groundwater extraction, oil and gas production, and carbon capture and storage. Mexico City experiences severe subsidence as high as 35 centimeters per year due to continued exploitation of groundwater. Such extreme ground deformation has caused damage to infrastructure and many areas of the city are now subjected to periodic flooding. Furthermore, subsidence rates change seasonally creating a non-linear deformation signature manifesting over an area larger than 30 x 30 kilometers. The geographical location and climate of Mexico City, coupled with aforementioned subsidence characteristics create unique challenges for repeat-pass InSAR processing: Firstly, Mexico City is a tropical highland and experiences an oceanic climate that leads to significant temporal de-correlation. Secondly, the large magnitude subsidence leads to phase aliasing over coherent targets, particularly for interferograms with large temporal separation. Lastly, the expansive deformation is spatially correlated on scales similar to the long-range atmosphere, complicating the separation of the two signals. This paper discusses the results from the application of traditional DInSAR techniques combined with Multi-temporal InSAR Network Analysis processing algorithms to accurately identify and measure displacement, specifically in light of the challenges peculiar to Mexico City. Multi-temporal InSAR Network Analysis techniques are used to identify non-linear displacement and remove atmospheric noise from 38 ENVISAT images that were acquired over Mexico City from 2002 to 2007.
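The phase-aliasing challenge can be made concrete with back-of-envelope numbers (assumed values: ENVISAT's C-band wavelength of about 5.6 cm and its nominal 35-day repeat cycle; deformation beyond roughly λ/4 between acquisitions aliases the interferometric phase):

```python
# Assumed instrument parameters (not stated in the abstract).
wavelength_cm = 5.6       # ENVISAT C-band radar wavelength, ~5.6 cm
repeat_days = 35.0        # nominal ENVISAT repeat cycle

# Phase is ambiguous modulo 2*pi, i.e. lambda/2 of line-of-sight motion;
# aliasing sets in when motion per interval exceeds ~lambda/4.
max_unaliased_cm_per_pass = wavelength_cm / 4.0
max_unaliased_cm_per_year = max_unaliased_cm_per_pass * 365.25 / repeat_days

print(max_unaliased_cm_per_pass, round(max_unaliased_cm_per_year, 1))
# -> 1.4 cm per 35-day pass, ~14.6 cm/yr. Mexico City's ~35 cm/yr
# subsidence far exceeds this, hence the aliasing over coherent targets
# that the paper describes for long-interval interferograms.
```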
Voids in the Large-Scale Structure
NASA Astrophysics Data System (ADS)
El-Ad, Hagai; Piran, Tsvi
1997-12-01
Voids are the most prominent feature of the large-scale structure of the universe. Still, their incorporation into quantitative analyses has been relatively recent, owing essentially to the lack of an objective tool to identify the voids and to quantify them. To overcome this, we present here the VOID FINDER algorithm, a novel tool for objectively quantifying voids in the galaxy distribution. The algorithm first classifies galaxies as either wall galaxies or field galaxies. Then, it identifies voids in the wall-galaxy distribution. Voids are defined as continuous volumes that do not contain any wall galaxies. The voids must be thicker than an adjustable limit, which is refined in successive iterations. In this way, we identify the same regions that would be recognized as voids by the eye. Small breaches in the walls are ignored, avoiding artificial connections between neighboring voids. We test the algorithm using Voronoi tessellations. By appropriate scaling of the parameters with the selection function, we apply it to two redshift surveys, the dense SSRS2 and the full-sky IRAS 1.2 Jy. Both surveys show similar properties: ~50% of the volume is filled by voids. The voids have a scale of at least 40 h-1 Mpc and an average underdensity of -0.9. Faint galaxies do not fill the voids, but they do populate them more than bright ones. These results suggest that both optically and IRAS-selected galaxies delineate the same large-scale structure. Comparison with the recovered mass distribution further suggests that the observed voids in the galaxy distribution correspond well to underdense regions in the mass distribution. This confirms the gravitational origin of the voids.
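A toy version of the algorithm's first steps, wall/field classification by distance to the third nearest neighbor, followed by a crude empty-cell estimate of void volume, can be sketched in 2D as follows (the threshold and grid are arbitrary; the real VOID FINDER grows and refines void volumes iteratively in 3D):

```python
import math
import random

random.seed(3)

# Toy 2D galaxy sample: a dense "wall" strip plus sparse field galaxies,
# leaving a large empty region that should register as void volume.
galaxies = [(random.random(), random.uniform(0.0, 0.2)) for _ in range(150)]
galaxies += [(random.random(), random.random()) for _ in range(15)]

def third_nn_distance(p, points):
    d = sorted(math.dist(p, q) for q in points if q is not p)
    return d[2]

# Step 1: wall galaxies have a small third-nearest-neighbor distance;
# the remainder are field galaxies and are ignored when finding voids.
threshold = 0.05
walls = [p for p in galaxies if third_nn_distance(p, galaxies) < threshold]

# Step 2 (crude): the void fraction is the fraction of grid cells that
# contain no wall galaxy at all.
ncell = 10
occupied = {(int(x * ncell), int(y * ncell)) for x, y in walls}
void_fraction = 1.0 - len(occupied) / ncell**2
print(len(walls), round(void_fraction, 2))
```

Most of the strip galaxies classify as wall galaxies and the upper ~80% of the square comes out as void, mirroring how the published algorithm ignores isolated field galaxies inside voids.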
Supporting large-scale computational science
Musick, R
1998-10-01
A study has been carried out to determine the feasibility of using commercial database management systems (DBMSs) to support large-scale computational science. Conventional wisdom in the past has been that DBMSs are too slow for such data. Several events over the past few years have muddied the clarity of this mindset: (1) several commercial DBMS systems have demonstrated storage of, and ad-hoc query access to, terabyte data sets; (2) several large-scale science teams, such as EOSDIS [NAS91], high energy physics [MM97] and human genome [Kin93], have adopted (or make frequent use of) commercial DBMS systems as the central part of their data management schemes; (3) several major DBMS vendors have introduced their first object-relational products (ORDBMSs), which have the potential to support large, array-oriented data; and (4) in some cases performance is a moot issue, in particular when the performance of legacy applications is not reduced while new, albeit slow, capabilities are added to the system. The basic assessment is still that DBMSs do not scale to large computational data. However, many of the reasons have changed, and there is an expiration date attached to that prognosis. This document expands on this conclusion, identifies the advantages and disadvantages of various commercial approaches, and describes the studies carried out in exploring this area. The document is meant to be brief, technical and informative, rather than a motivational pitch. The conclusions within are very likely to become outdated within the next 5-7 years, as market forces will have a significant impact on the state of the art in scientific data management over the next decade.
Improving Recent Large-Scale Pulsar Surveys
NASA Astrophysics Data System (ADS)
Cardoso, Rogerio Fernando; Ransom, S.
2011-01-01
Pulsars are unique in that they act as celestial laboratories for precise tests of gravity and other extreme physics (Kramer 2004). There are approximately 2000 known pulsars today, which is less than ten percent of the pulsars in the Milky Way according to theoretical models (Lorimer 2004). Of these 2000 known pulsars, approximately ten percent are known millisecond pulsars, objects used for their period stability for detailed physics tests and searches for gravitational radiation (Lorimer 2008). As the field and instrumentation progress, pulsar astronomers attempt to overcome observational biases and detect new pulsars, consequently discovering new millisecond pulsars. We attempt to improve large scale pulsar surveys by examining three recent pulsar surveys. The first, the Green Bank Telescope 350 MHz Drift Scan, a low frequency isotropic survey of the northern sky, has yielded a large number of candidates that were visually inspected and identified, resulting in over 34,000 candidates viewed, dozens of detections of known pulsars, and the discovery of a new low-flux pulsar, PSR J1911+22. The second, the PALFA survey, is a high frequency survey of the galactic plane with the Arecibo telescope. We created a processing pipeline for the PALFA survey at the National Radio Astronomy Observatory in Charlottesville, VA, in addition to making needed modifications upon advice from the PALFA consortium. The third survey examined is a new GBT 820 MHz survey devoted to finding new millisecond pulsars by observing the target-rich environment of unidentified sources in the Fermi LAT catalogue. By approaching these three pulsar surveys at different stages, we seek to improve the success rates of large scale surveys, and hence the possibility for ground-breaking work in both basic physics and astrophysics.
Introducing Large-Scale Innovation in Schools
NASA Astrophysics Data System (ADS)
Sotiriou, Sofoklis; Riviou, Katherina; Cherouvis, Stephanos; Chelioti, Eleni; Bogner, Franz X.
2016-08-01
Education reform initiatives tend to promise higher effectiveness in classrooms especially when emphasis is given to e-learning and digital resources. Practical changes in classroom realities or school organization, however, are lacking. A major European initiative entitled Open Discovery Space (ODS) examined the challenge of modernizing school education via a large-scale implementation of an open-scale methodology in using technology-supported innovation. The present paper describes this innovation scheme which involved schools and teachers all over Europe, embedded technology-enhanced learning into wider school environments and provided training to teachers. Our implementation scheme consisted of three phases: (1) stimulating interest, (2) incorporating the innovation into school settings and (3) accelerating the implementation of the innovation. The scheme's impact was monitored for a school year using five indicators: leadership and vision building, ICT in the curriculum, development of ICT culture, professional development support, and school resources and infrastructure. Based on about 400 schools, our study produced four results: (1) The growth in digital maturity was substantial, even for previously high scoring schools. This was even more important for indicators such as "vision and leadership" and "professional development." (2) The evolution of networking is presented graphically, showing the gradual growth of connections achieved. (3) These communities became core nodes, involving numerous teachers in sharing educational content and experiences: One out of three registered users (36 %) has shared his/her educational resources in at least one community. (4) Satisfaction scores ranged from 76 % (offer of useful support through teacher academies) to 87 % (good environment to exchange best practices). Initiatives such as ODS add substantial value to schools on a large scale.
Large-scale structure non-Gaussianities with modal methods
NASA Astrophysics Data System (ADS)
Schmittfull, Marcel
2016-10-01
Relying on a separable modal expansion of the bispectrum, the implementation of a fast estimator for the full bispectrum of a 3d particle distribution is presented. The computational cost of accurate bispectrum estimation is negligible relative to simulation evolution, so the bispectrum can be used as a standard diagnostic whenever the power spectrum is evaluated. As an application, the time evolution of gravitational and primordial dark matter bispectra was measured in a large suite of N-body simulations. The bispectrum shape changes characteristically when the cosmic web becomes dominated by filaments and halos, therefore providing a quantitative probe of 3d structure formation. Our measured bispectra are determined by ~ 50 coefficients, which can be used as fitting formulae in the nonlinear regime and for non-Gaussian initial conditions. We also compare the measured bispectra with predictions from the Effective Field Theory of Large Scale Structures (EFTofLSS).
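For orientation, the quantity being estimated is the bispectrum B(k1, k2) = Re[δ(k1) δ(k2) δ*(k1+k2)]. A naive 1D stand-in with an O(n^2) DFT (nothing like the paper's fast separable modal estimator, which avoids exactly this cost) illustrates the definition:

```python
import cmath
import math
import random

random.seed(4)

def dft(values):
    """Naive discrete Fourier transform, O(n^2); fine for tiny examples."""
    n = len(values)
    return [sum(v * cmath.exp(-2j * math.pi * k * x / n)
                for x, v in enumerate(values)) for k in range(n)]

def bispectrum(values, k1, k2):
    """B(k1, k2) = Re[ d(k1) d(k2) conj(d(k1+k2)) ] for a 1D field."""
    d = dft(values)
    n = len(values)
    return (d[k1] * d[k2] * d[(k1 + k2) % n].conjugate()).real

# A quadratic (hence non-Gaussian) field has a generically non-zero
# bispectrum; a Gaussian field's bispectrum vanishes on average.
n = 64
g = [random.gauss(0, 1) for _ in range(n)]
field = [v + 0.5 * (v * v - 1.0) for v in g]

b12 = bispectrum(field, 1, 2)
b21 = bispectrum(field, 2, 1)
print(b12, b21)   # equal: B is symmetric under k1 <-> k2
```

In 3D at simulation resolution this brute-force triple product is hopeless, which is why the modal expansion (a separable basis with ~50 coefficients) is the enabling step the abstract emphasizes.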
The dynamics of large-scale arrays of coupled resonators
NASA Astrophysics Data System (ADS)
Borra, Chaitanya; Pyles, Conor S.; Wetherton, Blake A.; Quinn, D. Dane; Rhoads, Jeffrey F.
2017-03-01
This work describes an analytical framework suitable for the analysis of large-scale arrays of coupled resonators, including those which feature amplitude and phase dynamics, inherent element-level parameter variation, nonlinearity, and/or noise. In particular, this analysis allows for the consideration of coupled systems in which the number of individual resonators is large, extending as far as the continuum limit corresponding to an infinite number of resonators. Moreover, this framework permits analytical predictions for the amplitude and phase dynamics of such systems. The utility of this analytical methodology is explored through the analysis of a system of N non-identical resonators with global coupling, including both reactive and dissipative components, physically motivated by an electromagnetically-transduced microresonator array. In addition to the amplitude and phase dynamics, the behavior of the system as the number of resonators varies is investigated and the convergence of the discrete system to the infinite-N limit is characterized.
Statistics of Caustics in Large-Scale Structure Formation
NASA Astrophysics Data System (ADS)
Feldbrugge, Job L.; Hidding, Johan; van de Weygaert, Rien
2016-10-01
The cosmic web is a complex spatial pattern of walls, filaments, cluster nodes and underdense void regions. It emerged through gravitational amplification from the Gaussian primordial density field. Here we infer analytical expressions for the spatial statistics of caustics in the evolving large-scale mass distribution. In our analysis, following the quasi-linear Zel'dovich formalism and confined to the 1D and 2D situation, we compute number density and correlation properties of caustics in cosmic density fields that evolve from Gaussian primordial conditions. The analysis can be straightforwardly extended to the 3D situation. Moreover, we are currently extending the approach to the non-linear regime of structure formation by including higher order Lagrangian approximations and Lagrangian effective field theory.
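The 1D Zel'dovich setup the authors build on can be illustrated directly: particles move as x(q, D) = q + D Ψ(q), and a caustic forms wherever the Jacobian 1 + D Ψ'(q) vanishes. A sketch with an assumed sinusoidal displacement field (so the first caustic appears at growth factor D = 1/A):

```python
import math

# 1D Zel'dovich approximation: x(q, D) = q + D * Psi(q). Shell crossing
# (a caustic) occurs where dx/dq = 1 + D * Psi'(q) changes sign.
# For the assumed Psi(q) = A*sin(q), Psi'(q) = A*cos(q), so caustics
# first appear once D exceeds 1/A.
A = 0.5

def caustic_points(D, n=2000):
    """Count sign changes of the Jacobian dx/dq on a Lagrangian grid."""
    qs = [2 * math.pi * i / n for i in range(n)]
    jac = [1.0 + D * A * math.cos(q) for q in qs]
    return sum(1 for a, b in zip(jac, jac[1:]) if a * b < 0)

print(caustic_points(1.0))   # D < 1/A = 2: Jacobian stays positive, 0 caustics
print(caustic_points(3.0))   # D > 2: the Jacobian crosses zero twice
```

The paper's contribution is the statistics of such zero crossings when Ψ is a Gaussian random field rather than this single assumed mode.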
NASA Astrophysics Data System (ADS)
Cai, Lanlan; Li, Peng; Luo, Qi; Zhai, Pengcheng; Zhang, Qingjie
2017-01-01
As no single thermoelectric material has presented a high figure-of-merit (ZT) over a very wide temperature range, segmented thermoelectric generators (STEGs), where the p- and n-legs are formed of different thermoelectric material segments joined in series, have been developed to improve the performance of thermoelectric generators. A crucial but difficult problem in a STEG design is to determine the optimal values of the geometrical parameters, like the relative lengths of each segment and the cross-sectional area ratio of the n- and p-legs. Herein, a multi-parameter and nonlinear optimization method, based on the Improved Powell Algorithm in conjunction with the discrete numerical model, was implemented to solve the STEG's geometrical optimization problem. The multi-parameter optimal results were validated by comparison with the optimal outcomes obtained from the single-parameter optimization method. Finally, the effect of the hot- and cold-junction temperatures on the geometry optimization was investigated. Results show that the optimal geometry parameters for maximizing the specific output power of a STEG are different from those for maximizing the conversion efficiency. Data also suggest that the optimal geometry parameters and the interfacial temperatures of the adjacent segments optimized for maximum specific output power or conversion efficiency vary with changing hot- and cold-junction temperatures. Through the geometry optimization, the CoSb3/Bi2Te3-based STEG can obtain a maximum specific output power up to 1725.3 W/kg and a maximum efficiency of 13.4% when operating at a hot-junction temperature of 823 K and a cold-junction temperature of 298 K.
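The paper's Improved Powell Algorithm is not specified in the abstract; a simplified Powell-style search, cycling 1D golden-section minimizations over the parameters, conveys the flavor on a hypothetical smooth two-parameter surrogate (standing in for segment-length fraction and leg-area ratio; the real objective comes from the discrete STEG model):

```python
def golden_min(f, lo, hi, tol=1e-6):
    """Golden-section search for a 1D minimum of a unimodal f on [lo, hi]."""
    phi = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    c, d = b - phi * (b - a), a + phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - phi * (b - a)
        else:
            a, c = c, d
            d = a + phi * (b - a)
    return (a + b) / 2

def coordinate_powell(f, x0, bounds, sweeps=20):
    """Simplified Powell-style search: cycle through the parameters, doing
    a 1D golden-section minimization in each while holding the others
    fixed. (Full Powell also updates the direction set; omitted here.)"""
    x = list(x0)
    for _ in range(sweeps):
        for j, (lo, hi) in enumerate(bounds):
            x[j] = golden_min(lambda v: f(x[:j] + [v] + x[j + 1:]), lo, hi)
    return x

# Hypothetical smooth surrogate objective with optimum at (0.4, 1.2).
f = lambda p: (p[0] - 0.4) ** 2 + 2.0 * (p[1] - 1.2) ** 2
opt = coordinate_powell(f, [0.5, 1.0], [(0.0, 1.0), (0.5, 2.0)])
print([round(v, 3) for v in opt])   # close to [0.4, 1.2]
```

Derivative-free searches of this kind suit the paper's setting because each objective evaluation is itself a numerical solve of the coupled thermoelectric equations.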
A Nonlinear Fuel Optimal Reaction Jet Control Law
Breitfeller, E.; Ng, L.C.
2002-06-30
We derive a nonlinear fuel optimal attitude control system (ACS) that drives the final state to the desired state according to a cost function that weights the final state angular error relative to the angular rate error. Control is achieved by allowing the pulse-width-modulated (PWM) commands to begin and end anywhere within a control cycle, achieving a pulse width pulse time (PWPT) control. We show through a MATLAB® Simulink model that this steady-state condition may be accomplished, in the absence of sensor noise or model uncertainties, with the theoretical minimum number of actuator cycles. The ability to analytically achieve near-zero drift rates is particularly important in applications such as station-keeping and sensor imaging. Consideration is also given to the fact that, for relatively small sensor and model errors, the controller requires significantly fewer actuator cycles to reach the final state error than a traditional proportional-integral-derivative (PID) controller. The optimal PWPT attitude controller may be applicable for a high performance kinetic energy kill vehicle.
Large-scale Ising spin network based on degenerate optical parametric oscillators
NASA Astrophysics Data System (ADS)
Inagaki, Takahiro; Inaba, Kensuke; Hamerly, Ryan; Inoue, Kyo; Yamamoto, Yoshihisa; Takesue, Hiroki
2016-06-01
Solving combinatorial optimization problems is becoming increasingly important in modern society, where the analysis and optimization of unprecedentedly complex systems are required. Many such problems can be mapped onto the ground-state-search problem of the Ising Hamiltonian, and simulating the Ising spins with physical systems is now emerging as a promising approach for tackling such problems. Here, we report a large-scale network of artificial spins based on degenerate optical parametric oscillators (DOPOs), paving the way towards a photonic Ising machine capable of solving difficult combinatorial optimization problems. We generate >10,000 time-division-multiplexed DOPOs using dual-pump four-wave mixing in a highly nonlinear fibre placed in a cavity. Using those DOPOs, a one-dimensional Ising model is simulated by introducing nearest-neighbour optical coupling. We observe the formation of spin domains and find that the domain size diverges near the DOPO threshold, which suggests that the DOPO network can simulate the behaviour of low-temperature Ising spins.
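The ground-state search being emulated can be illustrated with a conventional Metropolis simulation of a 1D nearest-neighbor Ising chain; at low temperature the chain orders into the spin domains the experiment observes. This is standard Monte Carlo, not a model of the DOPO dynamics (the optical system reaches low-energy configurations through oscillation thresholds, not thermal sampling):

```python
import math
import random

random.seed(5)

def ising_1d(n=200, beta=2.0, sweeps=400):
    """Metropolis dynamics for a 1D nearest-neighbor ferromagnetic
    Ising ring, H = -sum_i s_i s_{i+1}, at inverse temperature beta."""
    s = [random.choice((-1, 1)) for _ in range(n)]
    for _ in range(sweeps):
        for _ in range(n):
            i = random.randrange(n)
            dE = 2 * s[i] * (s[(i - 1) % n] + s[(i + 1) % n])
            if dE <= 0 or random.random() < math.exp(-beta * dE):
                s[i] = -s[i]
    return s

def energy(s):
    return -sum(s[i] * s[(i + 1) % len(s)] for i in range(len(s)))

def domains(s):
    """Number of domain walls (sign changes around the ring)."""
    return sum(1 for i in range(len(s)) if s[i] != s[(i + 1) % len(s)])

spins = ising_1d()
print(energy(spins), domains(spins))   # low energy, few large domains
```

The analogue of the paper's observation is that the typical domain size here grows as beta increases, just as DOPO domain size diverges near threshold.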
Reconstructing Information in Large-Scale Structure via Logarithmic Mapping
NASA Astrophysics Data System (ADS)
Szapudi, Istvan
We propose to develop a new method to extract information from large-scale structure data combining two-point statistics and non-linear transformations; before, this information was available only with substantially more complex higher-order statistical methods. Initially, most of the cosmological information in large-scale structure lies in two-point statistics. With non-linear evolution, some of that useful information leaks into higher-order statistics. The PI and group have shown in a series of theoretical investigations how that leakage occurs, and explained the Fisher information plateau at smaller scales. This plateau means that even as more modes are added to the measurement of the power spectrum, the total cumulative information (loosely speaking, the inverse error bar) is not increasing. Recently we have shown in Neyrinck et al. (2009, 2010) that a logarithmic (and a related Gaussianization or Box-Cox) transformation on the non-linear dark matter or galaxy field reconstructs a surprisingly large fraction of this missing Fisher information of the initial conditions. This was predicted by the earlier wave mechanical formulation of gravitational dynamics by Szapudi & Kaiser (2003). The present proposal is focused on working out the theoretical underpinning of the method to a point that it can be used in practice to analyze data. In particular, one needs to deal with the usual real-life issues of galaxy surveys, such as complex geometry, discrete sampling (Poisson or sub-Poisson noise), bias (linear or non-linear, deterministic or stochastic), redshift distortions, projection effects for 2D samples, and the effects of photometric redshift errors. We will develop methods for weak lensing and Sunyaev-Zeldovich power spectra as well, the latter specifically targeting Planck. In addition, we plan to investigate the question of residual higher-order information after the non-linear mapping, and possible applications for cosmology. Our aim will be to work out
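The core of the proposed transformation is simple enough to demonstrate numerically. In the sketch below a toy lognormal field stands in for the non-linearly evolved density (numpy assumed); applying log(1 + δ) removes most of the skewness that evolution puts into the one-point distribution, which is the mechanism behind the Fisher-information recovery described above.

```python
import numpy as np

def log_density(delta):
    """Logarithmic mapping of an overdensity field delta = rho/rho_bar - 1.

    The transform log(1 + delta) pulls in the long positive tail of a
    non-linear (roughly lognormal) field, re-Gaussianizing it.
    """
    return np.log1p(delta)

def skewness(x):
    x = x - x.mean()
    return (x**3).mean() / (x**2).mean() ** 1.5

rng = np.random.default_rng(42)
# toy "evolved" field: lognormal with unit mean, so delta > -1
field = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)
delta = field / field.mean() - 1.0

print(abs(skewness(log_density(delta))) < abs(skewness(delta)))  # → True
```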
Ridzal, Danis
2007-03-01
Aristos is a Trilinos package for nonlinear continuous optimization, based on full-space sequential quadratic programming (SQP) methods. Aristos is specifically designed for the solution of large-scale constrained optimization problems in which the linearized constraint equations require iterative (i.e. inexact) linear solver techniques. Aristos' unique feature is an efficient handling of inexactness in linear system solves. Aristos currently supports the solution of equality-constrained convex and nonconvex optimization problems. It has been used successfully in the area of PDE-constrained optimization, for the solution of nonlinear optimal control, optimal design, and inverse problems.
Application of a Nonlinear Optimal Control Algorithm to Spacecraft and Airship Control
NASA Astrophysics Data System (ADS)
Fujii, Hironori A.; Kusagaya, Tairo; Watanabe, Takeo; An, Andrew
This paper presents a synthetic method based on both the algorithm of geometric nonlinear feedback and nonlinear-system optimal control by hierarchical differential feedback regulation. This method enables us to solve optimal feedback control problems without solving the Riccati equations or adjoint vectors. The method also takes into consideration the avoidance of conjugate points, which is an important aspect of research in optimal control of nonlinear systems. The present method is applied to two examples: one is a nonlinear attitude maneuver of a spacecraft, and the other is airship optimal feedback tracking control. These applications have been studied numerically in order to show the performance of the present method applied to nonlinear optimal control for aerospace applications.
NASA Astrophysics Data System (ADS)
Keselman, J. A.; Nusser, A.
2017-01-01
NoAM, for "No Action Method", is a framework for reconstructing the past orbits of observed tracers of the large-scale mass density field. It seeks exact solutions of the equations of motion (EoM), satisfying initial homogeneity and the final observed particle (tracer) positions. The solutions are found iteratively, reaching a specified tolerance defined as the RMS of the distance between reconstructed and observed positions. Starting from a guess for the initial conditions, NoAM advances particles using standard N-body techniques for solving the EoM. Alternatively, the EoM can be replaced by any approximation such as the Zel'dovich approximation and second-order perturbation theory (2LPT). NoAM is suitable for billions of particles and can easily handle non-regular volumes, redshift space, and other constraints. We implement NoAM to systematically compare Zel'dovich, 2LPT, and N-body dynamics over diverse configurations ranging from an idealized high-resolution periodic simulation box to realistic galaxy mocks. Our findings are: (i) non-linear reconstructions with Zel'dovich, 2LPT, and full dynamics perform better than linear theory only for idealized catalogs in real space. For realistic catalogs, linear theory is the optimal choice for reconstructing velocity fields smoothed on scales ≳ 5 h⁻¹ Mpc. (ii) All non-linear back-in-time reconstructions tested here produce comparable enhancement of the baryonic oscillation signal in the correlation function.
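The iteration NoAM performs can be illustrated with a one-dimensional toy problem. Here the "dynamics" is a fixed Zel'dovich-like displacement map (a hypothetical stand-in for the N-body or 2LPT integrator), and the initial positions are corrected by the current mismatch until the RMS distance to the observed positions drops below tolerance, as the abstract describes.

```python
import math

def advance(q, growth):
    """Toy dynamics: a Zel'dovich-like map x = q + D * psi(q) with a
    fixed displacement field psi (illustrative, not an N-body code)."""
    return [qi + growth * 0.3 * math.sin(qi) for qi in q]

def reconstruct(x_obs, growth, tol=1e-10, max_iter=200):
    """NoAM-style iteration: adjust initial positions until the evolved
    positions match the observed ones to within `tol` (RMS)."""
    q = list(x_obs)  # initial guess: no displacement
    rms = float("inf")
    for _ in range(max_iter):
        x = advance(q, growth)
        resid = [xo - xi for xo, xi in zip(x_obs, x)]
        rms = math.sqrt(sum(r * r for r in resid) / len(resid))
        if rms < tol:
            break
        q = [qi + r for qi, r in zip(q, resid)]
    return q, rms
```

The fixed-point update converges here because the toy displacement is a contraction; NoAM's appeal is that the same outer loop works with any integrator substituted for `advance`.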
Zhang, Songchuan; Xia, Youshen
2016-12-28
Much research has been devoted to complex-variable optimization problems due to their engineering applications. However, complex-valued optimization methods for solving such problems remain an active research area. This paper proposes two efficient complex-valued optimization methods for solving constrained nonlinear optimization problems of real functions in complex variables. One solves the complex-valued nonlinear programming problem with linear equality constraints. The other solves the complex-valued nonlinear programming problem with both linear equality constraints and an ℓ₁-norm constraint. Theoretically, we prove the global convergence of the two proposed complex-valued optimization algorithms under mild conditions. The two algorithms solve the complex-valued optimization problem entirely in the complex domain and significantly extend existing complex-valued optimization algorithms. Numerical results further show that the two proposed algorithms are faster than several conventional real-valued optimization algorithms.
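As a concrete (and much simplified) instance of working directly in the complex domain, the sketch below minimizes a real-valued quadratic of a complex vector under a linear equality constraint, using the Wirtinger gradient and projection onto the constraint set. The problem, step size, and solver are illustrative assumptions; the paper's two algorithms are not reproduced here.

```python
import numpy as np

def complex_pgd(A, b, C, d, step=0.1, iters=2000):
    """Projected gradient descent for  min ||A z - b||^2  s.t.  C z = d,
    carried out in the complex domain (Wirtinger gradient A^H (A z - b))."""
    CH = C.conj().T
    M = np.linalg.inv(C @ CH)          # for projecting onto {z : C z = d}

    def project(z):
        return z - CH @ (M @ (C @ z - d))

    z = project(np.zeros(A.shape[1], dtype=complex))
    for _ in range(iters):
        z = project(z - step * (A.conj().T @ (A @ z - b)))
    return z
```

Because the projection is applied every step, the equality constraint holds exactly throughout; for this convex problem the iterates approach the KKT solution.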
NASA Astrophysics Data System (ADS)
Swaidan, Waleeda; Hussin, Amran
2015-10-01
Most direct methods solve finite-time-horizon optimal control problems with a nonlinear programming solver. In this paper, we propose a numerical method for solving nonlinear optimal control problems with state and control inequality constraints. The method uses a quasilinearization technique and the Haar wavelet operational matrix to convert the nonlinear optimal control problem into a quadratic programming problem. The linear inequality constraints on the trajectory variables are converted to quadratic programming constraints by the Haar wavelet collocation method. The proposed method has been applied to solve the optimal control of a multi-item inventory model. The accuracy of the states, controls, and cost can be improved by increasing the Haar wavelet resolution.
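The "convert the control problem to a quadratic program" step can be made concrete on a scalar linear-quadratic example. The sketch below discretizes x' = a x + b u with explicit Euler, stacks states and controls into one vector, and solves the resulting equality-constrained QP through its KKT system (numpy assumed). The Haar-wavelet basis and the inequality constraints of the paper are deliberately left out.

```python
import numpy as np

def lq_collocation_qp(a=-1.0, b=1.0, h=0.1, N=20, x0=1.0):
    """Discretize a scalar LQ optimal control problem (cost sum of
    x_k^2 + u_k^2, dynamics x_{k+1} = (1 + h a) x_k + h b u_k) and
    solve the equality-constrained QP via its KKT system.
    Variables: w = [x_1..x_N, u_0..u_{N-1}]."""
    n = 2 * N
    H = np.eye(n)                          # quadratic cost
    A = np.zeros((N, n)); c = np.zeros(N)  # dynamics constraints A w = c
    for k in range(N):
        A[k, k] = 1.0                      # coefficient of x_{k+1}
        if k > 0:
            A[k, k - 1] = -(1 + h * a)
        else:
            c[0] = (1 + h * a) * x0        # x_0 is fixed data
        A[k, N + k] = -h * b
    K = np.block([[H, A.T], [A, np.zeros((N, N))]])
    rhs = np.concatenate([np.zeros(n), c])
    w = np.linalg.solve(K, rhs)[:n]
    return w[:N], w[N:]                    # states, controls
```

In the paper the QP variables would instead be Haar wavelet coefficients, but the structure (quadratic cost, linear collocation constraints) is the same.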
Large-scale wind turbine structures
NASA Technical Reports Server (NTRS)
Spera, David A.
1988-01-01
The purpose of this presentation is to show how structural technology was applied in the design of modern wind turbines, which were recently brought to an advanced stage of development as sources of renewable power. Wind turbine structures present many difficult problems because they are relatively slender and flexible; subject to vibration and aeroelastic instabilities; acted upon by loads which are often nondeterministic; operated continuously with little maintenance in all weather; and dominated by life-cycle cost considerations. Progress in horizontal-axis wind turbine (HAWT) development was paced by progress in the understanding of structural loads, modeling of structural dynamic response, and the design of innovative structures. During the past 15 years a series of large HAWTs was developed, culminating in the recent completion of the world's largest operating wind turbine, the 3.2 MW Mod-5B power plant installed on the island of Oahu, Hawaii. Some of the applications of structures technology to wind turbines will be illustrated by referring to the Mod-5B design. First, a video overview will be presented to provide familiarization with the Mod-5B project and the important components of the wind turbine system. Next, the structural requirements for large-scale wind turbines will be discussed, emphasizing the difficult fatigue-life requirements. Finally, the procedures used to design the structure will be presented, including the use of the fracture mechanics approach for determining allowable fatigue stresses.
Large-scale autostereoscopic outdoor display
NASA Astrophysics Data System (ADS)
Reitterer, Jörg; Fidler, Franz; Saint Julien-Wallsee, Ferdinand; Schmid, Gerhard; Gartner, Wolfgang; Leeb, Walter; Schmid, Ulrich
2013-03-01
State-of-the-art autostereoscopic displays are often limited in size, effective brightness, number of 3D viewing zones, and maximum 3D viewing distances, all of which are mandatory requirements for large-scale outdoor displays. Conventional autostereoscopic indoor concepts like lenticular lenses or parallax barriers cannot simply be adapted for these screens due to the inherent loss of effective resolution and brightness, which would reduce both image quality and sunlight readability. We have developed a modular autostereoscopic multi-view laser display concept with sunlight readable effective brightness, theoretically up to several thousand 3D viewing zones, and maximum 3D viewing distances of up to 60 meters. For proof-of-concept purposes a prototype display with two pixels was realized. Due to various manufacturing tolerances each individual pixel has slightly different optical properties, and hence the 3D image quality of the display has to be calculated stochastically. In this paper we present the corresponding stochastic model, we evaluate the simulation and measurement results of the prototype display, and we calculate the achievable autostereoscopic image quality to be expected for our concept.
Large scale digital atlases in neuroscience
NASA Astrophysics Data System (ADS)
Hawrylycz, M.; Feng, D.; Lau, C.; Kuan, C.; Miller, J.; Dang, C.; Ng, L.
2014-03-01
Imaging in neuroscience has revolutionized our current understanding of brain structure, architecture and increasingly its function. Many characteristics of morphology, cell type, and neuronal circuitry have been elucidated through methods of neuroimaging. Combining this data in a meaningful, standardized, and accessible manner is the scope and goal of the digital brain atlas. Digital brain atlases are used today in neuroscience to characterize the spatial organization of neuronal structures, for planning and guidance during neurosurgery, and as a reference for interpreting other data modalities such as gene expression and connectivity data. The field of digital atlases is extensive and in addition to atlases of the human includes high quality brain atlases of the mouse, rat, rhesus macaque, and other model organisms. Using techniques based on histology, structural and functional magnetic resonance imaging as well as gene expression data, modern digital atlases use probabilistic and multimodal techniques, as well as sophisticated visualization software to form an integrated product. Toward this goal, brain atlases form a common coordinate framework for summarizing, accessing, and organizing this knowledge and will undoubtedly remain a key technology in neuroscience in the future. Since the development of its flagship project of a genome wide image-based atlas of the mouse brain, the Allen Institute for Brain Science has used imaging as a primary data modality for many of its large scale atlas projects. We present an overview of Allen Institute digital atlases in neuroscience, with a focus on the challenges and opportunities for image processing and computation.
Food appropriation through large scale land acquisitions
NASA Astrophysics Data System (ADS)
Rulli, Maria Cristina; D'Odorico, Paolo
2014-05-01
The increasing demand for agricultural products and the uncertainty of international food markets have recently drawn the attention of governments and agribusiness firms toward investments in productive agricultural land, mostly in the developing world. The targeted countries are typically located in regions that have remained only marginally utilized because of a lack of modern technology. It is expected that in the long run large-scale land acquisitions (LSLAs) for commercial farming will bring the technology required to close existing crop yield gaps. While the extent of the acquired land and the associated appropriation of freshwater resources have been investigated in detail, the amount of food this land can produce and the number of people it could feed still need to be quantified. Here we use a unique dataset of land deals to provide a global quantitative assessment of the rates of crop and food appropriation potentially associated with LSLAs. We show that up to 300-550 million people could be fed by crops grown on the acquired land, should these investments in agriculture improve crop production and close the yield gap. In contrast, about 190-370 million people could be supported by this land without closing the yield gap. These numbers raise some concern because the food produced on the acquired land is typically exported to other regions, while the target countries exhibit high levels of malnourishment. Conversely, if used for domestic consumption, the crops harvested on the acquired land could ensure food security for the local populations.
Large-scale carbon fiber tests
NASA Technical Reports Server (NTRS)
Pride, R. A.
1980-01-01
A realistic release of carbon fibers was established by burning a minimum of 45 kg of carbon fiber composite aircraft structural components in each of five large scale, outdoor aviation jet fuel fire tests. This release was quantified by several independent assessments with various instruments developed specifically for these tests. The most likely values for the mass of single carbon fibers released ranged from 0.2 percent of the initial mass of carbon fiber for the source tests (zero wind velocity) to a maximum of 0.6 percent of the initial carbon fiber mass for dissemination tests (5 to 6 m/s wind velocity). Mean fiber lengths for fibers greater than 1 mm in length ranged from 2.5 to 3.5 mm. Mean diameters ranged from 3.6 to 5.3 micrometers which was indicative of significant oxidation. Footprints of downwind dissemination of the fire released fibers were measured to 19.1 km from the fire.
Maestro: An Orchestration Framework for Large-Scale WSN Simulations
Riliskis, Laurynas; Osipov, Evgeny
2014-01-01
Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation. PMID:24647123
Wall turbulence manipulation by large-scale streamwise vortices
NASA Astrophysics Data System (ADS)
Iuso, Gaetano; Onorato, Michele; Spazzini, Pier Giorgio; di Cicca, Gaetano Maria
2002-12-01
This paper describes an experimental study of the manipulation of a fully developed turbulent channel flow through large-scale streamwise vortices originated by vortex generator jets distributed along the wall in the spanwise direction. Apart from the interest in flow management itself, an important aim of the research is to observe the response of the flow to external perturbations as a technique for investigating the structure of turbulence. Considerable mean and fluctuating skin friction reductions, locally as high as 30% and 50% respectively, were measured for an optimal forcing flow intensity. Mean and fluctuating velocity profiles are also greatly modified by the manipulating large-scale vortices; in particular, attenuation of the turbulence intensity was measured. Moreover, the flow manipulation caused an increase in longitudinal coherence of the wall organized motions, accompanied by a reduced frequency of burst events, demonstrated by a reduction of the velocity time derivative PDFs and by a higher intermittency. A strong transversal periodic organization of the flow field was observed, including some typical behaviours in each of the periodic boxes originated by the interaction of the vortex pairs. Results are interpreted and discussed in terms of management of the near-wall turbulent structures and with reference to the wall turbulence regeneration mechanisms suggested in the literature.
Large Scale Flame Spread Environmental Characterization Testing
NASA Technical Reports Server (NTRS)
Clayman, Lauren K.; Olson, Sandra L.; Gokoghi, Suleyman A.; Brooker, John E.; Ferkul, Paul V.; Kacher, Henry F.
2013-01-01
Under the Advanced Exploration Systems (AES) Spacecraft Fire Safety Demonstration Project (SFSDP), as a risk mitigation activity in support of the development of a large-scale fire demonstration experiment in microgravity, flame-spread tests were conducted in normal gravity on thin, cellulose-based fuels in a sealed chamber. The primary objective of the tests was to measure pressure rise in a chamber as sample material, burning direction (upward/downward), total heat release, heat release rate, and heat loss mechanisms were varied between tests. A Design of Experiments (DOE) method was imposed to produce an array of tests from a fixed set of constraints, and a coupled response model was developed. Supplementary tests were run without experimental design to additionally vary select parameters such as initial chamber pressure. The starting chamber pressure for each test was set below atmospheric to prevent chamber overpressure. Bottom ignition, or upward propagating burns, produced rapid acceleratory turbulent flame spread. Pressure rise in the chamber increases as the amount of fuel burned increases, mainly because of the larger amount of heat generation and, to a much smaller extent, due to the increase in the number of moles of gas. Top ignition, or downward propagating burns, produced a steady flame spread with a very small flat flame across the burning edge. Steady-state pressure is achieved during downward flame spread as the pressure rises and plateaus. This indicates that the heat generation by the flame matches the heat loss to surroundings during the longer, slower downward burns. One heat loss mechanism included mounting a heat exchanger directly above the burning sample in the path of the plume to act as a heat sink and more efficiently dissipate the heat due to the combustion event. This proved an effective means for chamber overpressure mitigation for those tests producing the most total heat release and thus was determined to be a feasible mitigation
Synchronization of coupled large-scale Boolean networks
NASA Astrophysics Data System (ADS)
Li, Fangfei
2014-03-01
This paper investigates the complete synchronization and partial synchronization of two large-scale Boolean networks. First, the aggregation algorithm towards large-scale Boolean network is reviewed. Second, the aggregation algorithm is applied to study the complete synchronization and partial synchronization of large-scale Boolean networks. Finally, an illustrative example is presented to show the efficiency of the proposed results.
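For intuition, complete synchronization can be checked exhaustively on a small drive-response pair (feasible here only because the state space is tiny; for large-scale networks the abstract's aggregation algorithm replaces this brute force). The two-node update rules below are hypothetical, chosen so that the response copies the drive after a short transient.

```python
from itertools import product

def step(x, y):
    """One synchronous update of a drive network x = (x1, x2) and a
    response network y = (y1, y2) coupled to the drive's state."""
    x1, x2 = x
    y1, y2 = y
    x_next = (x2, x1 and x2)
    y_next = (x2, x2 and y1)    # coupling: the response reads x2
    return x_next, y_next

def synchronizes(steps=8):
    """Complete synchronization: x(t) == y(t) for every pair of initial
    states after a finite transient."""
    states = list(product((0, 1), repeat=2))
    for x0 in states:
        for y0 in states:
            x, y = x0, y0
            for _ in range(steps):
                x, y = step(x, y)
            if x != y:
                return False
    return True

print(synchronizes())  # → True
```

For this pair the transient is two steps: after one update the first components agree, and after a second the coupled term forces the second components to agree as well.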
NASA Technical Reports Server (NTRS)
Pavarini, C.
1974-01-01
Work in two somewhat distinct areas is presented. First, the optimal system design problem for a Mars-roving vehicle is attacked by creating static system models and a system evaluation function and optimizing via nonlinear programming techniques. The second area concerns the problem of perturbed-optimal solutions. Given an initial perturbation in an element of the solution to a nonlinear programming problem, a linear method is determined to approximate the optimal readjustments of the other elements of the solution. Then, the sensitivity of the Mars rover designs is described by application of this method.
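The "linear method for optimal readjustments" in the second part has a clean modern statement: differentiate the KKT conditions of the program at the solution. For a quadratic model (a hypothetical stand-in for the rover design NLP, with numpy assumed) the linear prediction is exact, which the sketch below verifies.

```python
import numpy as np

def kkt_solve(Q, c, A, b):
    """Solve  min 1/2 z^T Q z - c^T z  s.t.  A z = b  via the KKT system."""
    n, m = Q.shape[0], A.shape[0]
    K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([c, b]))
    return sol[:n], sol[n:], K   # primal, multipliers, KKT matrix

# Hypothetical two-variable problem standing in for the design NLP.
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
c = np.array([1.0, 0.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

z0, lam0, K = kkt_solve(Q, c, A, b)

# Perturb the constraint right-hand side and predict the optimal
# readjustment linearly from the (already factored) KKT matrix.
db = np.array([0.1])
dz = np.linalg.solve(K, np.concatenate([np.zeros(2), db]))[:2]
z_pred = z0 + dz
z_new, _, _ = kkt_solve(Q, c, A, b + db)
print(np.allclose(z_pred, z_new))  # → True (exact for quadratic models)
```

For a genuinely nonlinear program the same linearized-KKT step gives a first-order approximation rather than the exact readjustment, which is the sensitivity idea applied to the rover designs.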
Simulating the large-scale structure of HI intensity maps
Seehars, Sebastian; Paranjape, Aseem; Witzemann, Amadeus; Refregier, Alexandre; Amara, Adam; Akeret, Joel
2016-03-01
Intensity mapping of neutral hydrogen (HI) is a promising observational probe of cosmology and large-scale structure. We present wide-field simulations of HI intensity maps based on N-body simulations of a 2.6 Gpc/h box with 2048³ particles (particle mass 1.6 × 10¹¹ M⊙/h). Using a conditional mass function to populate the simulated dark matter density field with halos below the mass resolution of the simulation (10⁸ M⊙/h < M_halo < 10¹³ M⊙/h), we assign HI to those halos according to a phenomenological halo-to-HI mass relation. The simulations span a redshift range of 0.35 ≲ z ≲ 0.9 in redshift bins of width Δz ≈ 0.05 and cover a quarter of the sky at an angular resolution of about 7'. We use the simulated intensity maps to study the impact of non-linear effects and redshift space distortions on the angular clustering of HI. Focusing on the autocorrelations of the maps, we apply and compare several estimators for the angular power spectrum and its covariance. We verify that these estimators agree with analytic predictions on large scales and study the validity of approximations based on Gaussian random fields, particularly in the context of the covariance. We discuss how our results and the simulated maps can be useful for planning and interpreting future HI intensity mapping surveys.
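The estimator-versus-truth comparison described above can be prototyped in one dimension with numpy: draw a Gaussian random field with a known spectrum, estimate the spectrum with a binned periodogram, and check the agreement. This is a toy analogue of the map-level angular power spectrum estimators, with all survey complications (beam, mask, redshift bins) omitted.

```python
import numpy as np

def estimate_power(field, nbins=8):
    """Binned periodogram estimator P_hat(k) = |FFT(field)|^2 / N."""
    n = field.size
    pk = np.abs(np.fft.rfft(field)) ** 2 / n
    k = np.arange(pk.size)
    bins = np.array_split(np.arange(1, pk.size), nbins)  # drop the k=0 mean
    return (np.array([k[b].mean() for b in bins]),
            np.array([pk[b].mean() for b in bins]))

# Gaussian random field with known input spectrum P(k) = 1 / (1 + (k/50)^2)
rng = np.random.default_rng(3)
n = 8192
noise = rng.standard_normal(n)
k = np.arange(n // 2 + 1)
p_in = 1.0 / (1.0 + (k / 50.0) ** 2)
field = np.fft.irfft(np.fft.rfft(noise) * np.sqrt(p_in), n)

k_bin, p_hat = estimate_power(field)   # tracks p_in within sample scatter
```

Each bin averages hundreds of modes, so the estimate scatters around the input spectrum at the few-percent level, the same mode-counting logic that sets the Gaussian part of the covariance discussed in the abstract.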
Large-scale quantum photonic circuits in silicon
NASA Astrophysics Data System (ADS)
Harris, Nicholas C.; Bunandar, Darius; Pant, Mihir; Steinbrecher, Greg R.; Mower, Jacob; Prabhu, Mihika; Baehr-Jones, Tom; Hochberg, Michael; Englund, Dirk
2016-08-01
Quantum information science offers inherently more powerful methods for communication, computation, and precision measurement that take advantage of quantum superposition and entanglement. In recent years, theoretical and experimental advances in quantum computing and simulation with photons have spurred great interest in developing large photonic entangled states that challenge today's classical computers. As experiments have increased in complexity, there has been an increasing need to transition bulk optics experiments to integrated photonics platforms to control more spatial modes with higher fidelity and phase stability. The silicon-on-insulator (SOI) nanophotonics platform offers new possibilities for quantum optics, including the integration of bright, nonclassical light sources, based on the large third-order nonlinearity (χ(3)) of silicon, alongside quantum state manipulation circuits with thousands of optical elements, all on a single phase-stable chip. How large do these photonic systems need to be? Recent theoretical work on Boson Sampling suggests that even the problem of sampling from ∼30 identical photons, having passed through an interferometer of hundreds of modes, becomes challenging for classical computers. While experiments of this size are still challenging, the SOI platform has the required component density to enable low-loss and programmable interferometers for manipulating hundreds of spatial modes. Here, we discuss the SOI nanophotonics platform for quantum photonic circuits with hundreds-to-thousands of optical elements and the associated challenges. We compare SOI to competing technologies in terms of requirements for quantum optical systems. We review recent results on large-scale quantum state evolution circuits and strategies for realizing high-fidelity heralded gates with imperfect, practical systems. Next, we review recent results on silicon photonics-based photon-pair sources and device architectures, and we discuss a path towards
Sheltering in buildings from large-scale outdoor releases
Chan, W.R.; Price, P.N.; Gadgil, A.J.
2004-06-01
Intentional or accidental large-scale airborne toxic releases (e.g. terrorist attacks or industrial accidents) can cause severe harm to nearby communities. Under these circumstances, taking shelter in buildings can be an effective emergency response strategy. Some examples where shelter-in-place was successful at preventing injuries and casualties have been documented [1, 2]. As public education and preparedness are vital to ensure the success of an emergency response, many agencies have prepared documents advising the public on what to do during and after sheltering [3, 4, 5]. In this document, we will focus on the role buildings play in providing protection to occupants. The conclusions of this article are: (1) Under most circumstances, shelter-in-place is an effective response against large-scale outdoor releases. This is particularly true for releases of short duration (a few hours or less) and chemicals that exhibit non-linear dose-response characteristics. (2) The building envelope not only restricts the outdoor-indoor air exchange, but can also filter some biological or even chemical agents. Once indoors, the toxic materials can deposit or sorb onto indoor surfaces. All these processes contribute to the effectiveness of shelter-in-place. (3) Tightening of the building envelope and improved filtration can enhance the protection offered by buildings. Common mechanical ventilation systems present in most commercial buildings, however, should be turned off and dampers closed when sheltering from an outdoor release. (4) After the passing of the outdoor plume, some residuals will remain indoors. It is therefore important to terminate shelter-in-place to minimize exposure to the toxic materials.
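The envelope's role can be quantified with the standard single-zone box model (a textbook sketch, not the authors' model). With air-exchange rate `ach` (h⁻¹) and an indoor deposition/sorption rate `k_dep` (h⁻¹), the indoor concentration obeys dC_in/dt = ach·(C_out − C_in) − k_dep·C_in; for a square outdoor pulse the peak indoor level has a closed form, which shows why tight envelopes and short releases make shelter-in-place effective.

```python
import math

def indoor_peak_fraction(ach, k_dep, duration_h):
    """Peak indoor/outdoor concentration ratio for a square outdoor
    pulse of the given duration (hours), from the single-zone box model
        dC_in/dt = ach * (C_out - C_in) - k_dep * C_in
    with C_out = 1 during the pulse and C_in(0) = 0."""
    lam = ach + k_dep
    return (ach / lam) * (1.0 - math.exp(-lam * duration_h))
```

For example, a tight envelope (0.3 air changes per hour) with modest deposition holds the 2-hour peak below 40% of the outdoor level, while a leaky one (2 h⁻¹) lets it approach 90%.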
Scalable analysis of nonlinear systems using convex optimization
NASA Astrophysics Data System (ADS)
Papachristodoulou, Antonis
In this thesis, we investigate how convex optimization can be used to analyze different classes of nonlinear systems at various scales algorithmically. The methodology is based on the construction of appropriate Lyapunov-type certificates using sum of squares techniques. After a brief introduction on the mathematical tools that we will be using, we turn our attention to robust stability and performance analysis of systems described by Ordinary Differential Equations. A general framework for constrained systems analysis is developed, under which stability of systems with polynomial and non-polynomial vector fields and switching systems, as well as estimating the region of attraction and the L2 gain, can be treated in a unified manner. We apply our results to examples from biology and aerospace. We then consider systems described by Functional Differential Equations (FDEs), i.e., time-delay systems. Their main characteristic is that they are infinite dimensional, which complicates their analysis. We first show how the complete Lyapunov-Krasovskii functional can be constructed algorithmically for linear time-delay systems. Then, we concentrate on delay-independent and delay-dependent stability analysis of nonlinear FDEs using sum of squares techniques. An example from ecology is given. The scalable stability analysis of congestion control algorithms for the Internet is investigated next. The models we use result in an arbitrary interconnection of FDE subsystems, for which we require that stability holds for arbitrary delays, network topologies and link capacities. Through a constructive proof, we develop a Lyapunov functional for FAST---a recently developed network congestion control scheme---so that the Lyapunov stability properties scale with the system size. We also show how other network congestion control schemes can be analyzed in the same way. Finally, we concentrate on systems described by Partial Differential Equations. We show that axially constant perturbations of
Modelling large-scale halo bias using the bispectrum
NASA Astrophysics Data System (ADS)
Pollack, Jennifer E.; Smith, Robert E.; Porciani, Cristiano
2012-03-01
We study the relation between the density distribution of tracers for large-scale structure and the underlying matter distribution - commonly termed bias - in the Λ cold dark matter framework. In particular, we examine the validity of the local model of biasing at quadratic order in the matter density. This model is characterized by parameters b1 and b2. Using an ensemble of N-body simulations, we apply several statistical methods to estimate the parameters. We measure halo and matter fluctuations smoothed on various scales. We find that, whilst the fits are reasonably good, the parameters vary with smoothing scale. We argue that, for real-space measurements, owing to the mixing of wavemodes, no smoothing scale can be found for which the parameters are independent of smoothing. However, this is not the case in Fourier space. We measure halo and halo-mass power spectra and from these construct estimates of the effective large-scale bias as a guide for b1. We measure the configuration dependence of the halo bispectra B_hhh and reduced bispectra Q_hhh for very large-scale k-space triangles. From these data, we constrain b1 and b2, taking into account the full bispectrum covariance matrix. Using the lowest order perturbation theory, we find that for B_hhh the best-fitting parameters are in reasonable agreement with one another as the triangle scale is varied, although the fits become poor as smaller scales are included. The same is true for Q_hhh. The best-fitting values were found to depend on the discreteness correction. This led us to consider halo-mass cross-bispectra. The results from these statistics supported our earlier findings. We then developed a test to explore whether the inconsistency in the recovered bias parameters could be attributed to missing higher order corrections in the models. We prove that low-order expansions are not sufficiently accurate to model the data, even on scales k1 ∼ 0.04 h Mpc⁻¹. If robust inferences concerning bias are to be drawn
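The local quadratic bias model itself is easy to exercise on mock data. The sketch below (numpy assumed; the mock fields and noise level are illustrative, not the paper's simulations) fits δ_h = b1 δ + (b2/2) δ² + const by least squares, the real-space analogue of the parameter estimation whose scale dependence the abstract examines.

```python
import numpy as np

def fit_local_bias(delta_m, delta_h):
    """Least-squares fit of the local quadratic bias model
        delta_h = b1 * delta_m + (b2 / 2) * delta_m^2 + const.
    Returns (b1, b2)."""
    X = np.column_stack([delta_m, 0.5 * delta_m**2, np.ones_like(delta_m)])
    coef, *_ = np.linalg.lstsq(X, delta_h, rcond=None)
    return coef[0], coef[1]

# Mock smoothed fields with known bias and a little stochastic noise
rng = np.random.default_rng(7)
dm = rng.normal(0.0, 0.3, size=50_000)
dh = 1.5 * dm + 0.4 * 0.5 * dm**2 + rng.normal(0.0, 0.05, size=dm.size)

b1, b2 = fit_local_bias(dm, dh)   # recovers roughly b1 ≈ 1.5, b2 ≈ 0.4
```

When the mock truth is exactly the quadratic model, the fit is stable; the paper's point is that for real halo fields the recovered (b1, b2) drift with smoothing scale because the model is incomplete.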
Large scale structure from viscous dark matter
Blas, Diego; Floerchinger, Stefan; Garny, Mathias; Tetradis, Nikolaos; Wiedemann, Urs Achim E-mail: stefan.floerchinger@cern.ch E-mail: ntetrad@phys.uoa.gr
2015-11-01
Cosmological perturbations of sufficiently long wavelength admit a fluid dynamic description. We consider modes with wavevectors below a scale k_m for which the dynamics is only mildly non-linear. The leading effect of modes above that scale can be accounted for by effective non-equilibrium viscosity and pressure terms. For mildly non-linear scales, these mainly arise from momentum transport within the ideal and cold but inhomogeneous fluid, while momentum transport due to more microscopic degrees of freedom is suppressed. As a consequence, concrete expressions with no free parameters, except the matching scale k_m, can be derived from matching evolution equations to standard cosmological perturbation theory. Two-loop calculations of the matter power spectrum in the viscous theory lead to excellent agreement with N-body simulations up to scales k=0.2 h/Mpc. The convergence properties in the ultraviolet are better than for standard perturbation theory and the results are robust with respect to variations of the matching scale.
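Schematically, the fluid description referred to above augments the pressureless ideal-fluid equations with effective viscosity and pressure terms. A textbook-style form (the notation p_eff, eta_eff, zeta_eff is assumed here for illustration, not taken from the paper) is:

```latex
\begin{align}
\dot{\delta} + \nabla \cdot \left[(1+\delta)\,\mathbf{v}\right] &= 0, \\
\dot{\mathbf{v}} + H\mathbf{v} + (\mathbf{v}\cdot\nabla)\mathbf{v} &=
  -\nabla\Phi - \frac{\nabla p_{\mathrm{eff}}}{\rho}
  + \frac{\eta_{\mathrm{eff}}}{\rho}\,\Delta\mathbf{v}
  + \frac{\zeta_{\mathrm{eff}} + \eta_{\mathrm{eff}}/3}{\rho}\,
    \nabla\left(\nabla\cdot\mathbf{v}\right).
\end{align}
```

The point of the paper is that the effective coefficients are not free parameters: they are fixed by matching to standard perturbation theory at the scale k_m.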
Large-Scale Spacecraft Fire Safety Tests
NASA Technical Reports Server (NTRS)
Urban, David; Ruff, Gary A.; Ferkul, Paul V.; Olson, Sandra; Fernandez-Pello, A. Carlos; T'ien, James S.; Torero, Jose L.; Cowlard, Adam J.; Rouvreau, Sebastien; Minster, Olivier; Toth, Balazs; Legros, Guillaume; Eigenbrod, Christian; Smirnov, Nickolay; Fujita, Osamu; Jomaas, Grunde
2014-01-01
An international collaborative program is underway to address open issues in spacecraft fire safety. Because of limited access to long-term low-gravity conditions and the small volume generally allotted for these experiments, there have been relatively few experiments that directly study spacecraft fire safety under low-gravity conditions. Furthermore, none of these experiments have studied sample sizes and environment conditions typical of those expected in a spacecraft fire. The major constraint has been the size of the sample, with prior experiments limited to samples of the order of 10 cm in length and width or smaller. This lack of experimental data forces spacecraft designers to base their designs and safety precautions on 1-g understanding of flame spread, fire detection, and suppression. However, low-gravity combustion research has demonstrated substantial differences in flame behavior in low-gravity. This, combined with the differences caused by the confined spacecraft environment, necessitates practical-scale spacecraft fire safety research to mitigate risks for future space missions. To address this issue, a large-scale spacecraft fire experiment is under development by NASA and an international team of investigators. This poster presents the objectives, status, and concept of this collaborative international project (Saffire). The project plan is to conduct fire safety experiments on three sequential flights of an unmanned ISS re-supply spacecraft (the Orbital Cygnus vehicle) after they have completed their delivery of cargo to the ISS and have begun their return journeys to Earth. On two flights (Saffire-1 and Saffire-3), the experiment will consist of a flame spread test involving a meter-scale sample ignited in the pressurized volume of the spacecraft and allowed to burn to completion while measurements are made. On one of the flights (Saffire-2), 9 smaller (5 x 30 cm) samples will be tested to evaluate NASA's material flammability screening tests
Atypical Behavior Identification in Large Scale Network Traffic
Best, Daniel M.; Hafen, Ryan P.; Olsen, Bryan K.; Pike, William A.
2011-10-23
Cyber analysts are faced with the daunting challenge of identifying exploits and threats within potentially billions of daily records of network traffic. Enterprise-wide cyber traffic involves hundreds of millions of distinct IP addresses and results in data sets ranging from terabytes to petabytes of raw data. Creating behavioral models and identifying trends based on those models requires data intensive architectures and techniques that can scale as data volume increases. Analysts need scalable visualization methods that foster interactive exploration of data and enable identification of behavioral anomalies. Developers must carefully consider application design, storage, processing, and display to provide usability and interactivity with large-scale data. We present an application that highlights atypical behavior in enterprise network flow records. This is accomplished by utilizing data intensive architectures to store the data, aggregation techniques to optimize data access, statistical techniques to characterize behavior, and a visual analytic environment to render the behavioral trends, highlight atypical activity, and allow for exploration.
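A toy illustration of the statistical step described above: characterize each host's typical behavior from its own history and flag hours that deviate strongly, using a robust z-score. The data, aggregation level, and threshold are invented for the example; the real system operates on far larger flow-record stores:

```python
import numpy as np

rng = np.random.default_rng(1)

# Aggregated flow counts per (host, hour): 200 hosts x 24 hours of history
counts = rng.poisson(lam=50, size=(200, 24)).astype(float)

# Inject one host with an atypical spike in the last hour
counts[7, -1] = 400

# Robust per-host z-score: median and median absolute deviation (MAD)
med = np.median(counts, axis=1, keepdims=True)
mad = np.median(np.abs(counts - med), axis=1, keepdims=True) + 1e-9
z = 0.6745 * (counts - med) / mad

# Hosts with at least one strongly atypical hour
atypical_hosts = np.unique(np.argwhere(np.abs(z) > 6.0)[:, 0])
print(atypical_hosts)
```

The robust statistics keep a single anomalous hour from contaminating the baseline it is judged against, which matters when the anomaly is exactly what you are trying to detect.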
A mini review: photobioreactors for large scale algal cultivation.
Gupta, Prabuddha L; Lee, Seung-Mok; Choi, Hee-Jeong
2015-09-01
Microalgae cultivation has gained much interest in terms of the production of foods, biofuels, and bioactive compounds and offers a great potential option for cleaning the environment through CO2 sequestration and wastewater treatment. Although open pond cultivation is the most affordable option, it tends to offer insufficient control over growth conditions and carries a risk of contamination. In contrast, while providing minimal risk of contamination, closed photobioreactors offer better control over culture conditions, such as CO2 supply, water supply, optimal temperatures, efficient exposure to light, culture density, pH levels, and mixing rates. For large-scale production of biomass, efficient photobioreactors are required. This review paper describes general design considerations pertaining to photobioreactor systems, in order to cultivate microalgae for biomass production. It also discusses the current challenges in the design of photobioreactors for the production of low-cost biomass.
Gálvez, Akemi; Iglesias, Andrés
2013-01-01
Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently. PMID:24376380
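Once the data parameterization (the quantity the firefly algorithm searches over) and the knot vector are fixed, the remaining fit is linear in the control points and can be solved via the SVD, as the abstract describes. A sketch using a cubic Bernstein basis as a simple one-segment stand-in for a B-spline basis (all data synthetic):

```python
import numpy as np

rng = np.random.default_rng(2)

# Noisy samples from a planar curve; t is the (assumed given) parameterization,
# standing in for the values the metaheuristic would optimize.
t = np.sort(rng.uniform(0.0, 1.0, 60))
curve = np.column_stack([np.cos(2 * t), np.sin(2 * t)])
data = curve + rng.normal(0.0, 0.01, curve.shape)

# Cubic Bernstein basis evaluated at the parameter values (60 x 4 matrix)
B = np.column_stack([(1 - t) ** 3,
                     3 * t * (1 - t) ** 2,
                     3 * t ** 2 * (1 - t),
                     t ** 3])

# The fit is now a convex least-squares problem, solved via the SVD
# (pseudo-inverse); ctrl holds 4 control points in the plane.
ctrl = np.linalg.pinv(B) @ data
resid = np.linalg.norm(B @ ctrl - data)
print(ctrl.shape, resid)
```

This is the "convexification" the paper exploits: the hard nonlinearity lives entirely in the parameterization and knots, and everything downstream of them is linear algebra.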
Optimal design of a class of nonlinear networks.
NASA Technical Reports Server (NTRS)
Peikari, B.
1972-01-01
The problem of synthesizing nth order nonlinear nonautonomous networks with a prescribed small signal behavior is considered. It is shown that, in the absence of coupling elements, the solution of this problem reduces to synthesizing a set of first order nonlinear characteristics. These characteristics can then be determined using a recently developed generalized steepest descent criterion.
"Cosmological Parameters from Large Scale Structure"
NASA Technical Reports Server (NTRS)
Hamilton, A. J. S.
2005-01-01
This grant has provided primary support for graduate student Mark Neyrinck, and some support for the PI and for colleague Nick Gnedin, who helped co-supervise Neyrinck. This award had two major goals. First, to continue to develop and apply methods for measuring galaxy power spectra on large, linear scales, with a view to constraining cosmological parameters. And second, to begin to try to understand galaxy clustering at smaller, nonlinear scales well enough to constrain cosmology from those scales also. Under this grant, the PI and collaborators, notably Max Tegmark, continued to improve their technology for measuring power spectra from galaxy surveys at large, linear scales, and to apply the technology to surveys as the data become available. We believe that our methods are the best in the world. These measurements become the foundation from which we and other groups measure cosmological parameters.
Large-scale sparse singular value computations
NASA Technical Reports Server (NTRS)
Berry, Michael W.
1992-01-01
Four numerical methods for computing the singular value decomposition (SVD) of large sparse matrices on a multiprocessor architecture are presented. Lanczos and subspace iteration-based methods for determining several of the largest singular triplets (singular values and corresponding left and right-singular vectors) for sparse matrices arising from two practical applications: information retrieval and seismic reflection tomography are emphasized. The target architectures for implementations are the CRAY-2S/4-128 and Alliant FX/80. The sparse SVD problem is well motivated by recent information-retrieval techniques in which dominant singular values and their corresponding singular vectors of large sparse term-document matrices are desired, and by nonlinear inverse problems from seismic tomography applications which require approximate pseudo-inverses of large sparse Jacobian matrices.
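The pattern of computing only a few of the largest singular triplets of a large sparse matrix can be sketched with SciPy's Lanczos-type `svds` routine; a random sparse matrix stands in here for the term-document or Jacobian matrices discussed above:

```python
import numpy as np
from scipy.sparse import random as sprandom
from scipy.sparse.linalg import svds

# Sparse "term-document"-like matrix (5000 x 2000, ~0.1% nonzeros)
A = sprandom(5000, 2000, density=0.001, random_state=42, format="csr")

# Iterative solver for the k largest singular triplets only;
# a dense SVD of A would be vastly more expensive in time and memory.
k = 6
U, s, Vt = svds(A, k=k)

# svds returns singular values in ascending order; reorder descending
order = np.argsort(s)[::-1]
s, U, Vt = s[order], U[:, order], Vt[order, :]
print(s)
```

The dominant triplets (s, U, Vt) are exactly what latent-semantic-style retrieval and approximate pseudo-inverses need, which is why truncated methods suffice.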
A Nonlinear Physics-Based Optimal Control Method for Magnetostrictive Actuators
NASA Technical Reports Server (NTRS)
Smith, Ralph C.
1998-01-01
This paper addresses the development of a nonlinear optimal control methodology for magnetostrictive actuators. At moderate to high drive levels, the output from these actuators is highly nonlinear and contains significant magnetic and magnetomechanical hysteresis. These dynamics must be accommodated by models and control laws to utilize the full capabilities of the actuators. A characterization based upon ferromagnetic mean field theory provides a model which accurately quantifies both transient and steady state actuator dynamics under a variety of operating conditions. The control method consists of a linear perturbation feedback law used in combination with an optimal open loop nonlinear control. The nonlinear control incorporates the hysteresis and nonlinearities inherent to the transducer and can be computed offline. The feedback control is constructed through linearization of the perturbed system about the optimal system and is efficient for online implementation. As demonstrated through numerical examples, the combined hybrid control is robust and can be readily implemented in linear PDE-based structural models.
Dynamics of large-scale instabilities in conductors electrically exploded in strong magnetic fields
NASA Astrophysics Data System (ADS)
Datsko, I. M.; Chaikovsky, S. A.; Labetskaya, N. A.; Oreshkin, V. I.; Ratakhin, N. A.
2014-11-01
The growth of large-scale instabilities during the propagation of a nonlinear magnetic diffusion wave through a conductor was studied experimentally. The experiment was carried out using the MIG terawatt pulsed power generator at a peak current up to 2.5 MA with 100 ns rise time. It was observed that instabilities with a wavelength of 150 μm developed on the surface of the hollow part of the conductor within 160 ns after the onset of current flow, whereas the surface of the solid rod remained almost unperturbed. A system of equations describing the propagation of a nonlinear diffusion wave through a conductor and the growth of thermal instabilities has been solved numerically. It has been revealed that the development of large-scale instabilities is closely related to the propagation of a nonlinear magnetic diffusion wave.
Manifestations of dynamo driven large-scale magnetic field in accretion disks of compact objects
NASA Technical Reports Server (NTRS)
Chagelishvili, G. D.; Chanishvili, R. G.; Lominadze, J. G.; Sokhadze, Z. A.
1991-01-01
A nonlinear turbulent-dynamo theory was developed which shows that, in accretion disks of compact objects, the generated large-scale magnetic field (when generation takes place) has a practically toroidal configuration. Its energy density can be much higher than the energy density of turbulent pulsations, and it becomes comparable with the thermal energy density of the medium. On this basis, the manifestations to which the large-scale magnetic field can lead in accretion onto black holes and gravimagnetic rotators, respectively, are presented.
Population generation for large-scale simulation
NASA Astrophysics Data System (ADS)
Hannon, Andrew C.; King, Gary; Morrison, Clayton; Galstyan, Aram; Cohen, Paul
2005-05-01
Computer simulation is used to research phenomena ranging from the structure of the space-time continuum to population genetics and future combat [1-3]. Multi-agent simulations in particular are now commonplace in many fields [4, 5]. By modeling populations whose complex behavior emerges from individual interactions, these simulations help to answer questions about effects where closed-form solutions are difficult or impossible to derive [6]. To be useful, simulations must accurately model the relevant aspects of the underlying domain. In multi-agent simulation, this means that the modeling must include both the agents and their relationships. Typically, each agent can be modeled as a set of attributes drawn from various distributions (e.g., height, morale, intelligence and so forth). Though these can interact - for example, agent height is related to agent weight - they are usually independent. Modeling relations between agents, on the other hand, adds a new layer of complexity, and tools from graph theory and social network analysis are finding increasing application [7, 8]. Recognizing the role and proper use of these techniques, however, remains the subject of ongoing research. We recently encountered these complexities while building large scale social simulations [9-11]. One of these, the Hats Simulator, is designed to be a lightweight proxy for intelligence analysis problems. Hats models a "society in a box" consisting of many simple agents, called hats. Hats gets its name from the classic spaghetti western, in which the heroes and villains are known by the color of the hats they wear. The Hats society also has its heroes and villains, but the challenge is to identify which color hat they should be wearing based on how they behave. There are three types of hats: benign hats, known terrorists, and covert terrorists. Covert terrorists look just like benign hats but act like terrorists. Population structure can make covert hat identification significantly more
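The attribute-sampling layer described above, independent draws, a few correlated attributes, and a relational graph on top, can be sketched as follows. All distributions, parameters, and the acquaintance probability are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000

# Independent per-agent attributes drawn from assumed distributions
height = rng.normal(175.0, 8.0, n)      # cm
morale = rng.beta(2.0, 2.0, n)          # bounded in [0, 1]

# A correlated attribute: weight depends on height plus noise
weight = 0.9 * (height - 100.0) + rng.normal(0.0, 5.0, n)

# Relational layer: a sparse random "acquaintance" graph stored as an
# adjacency list (only the first 100 agents here, to keep the demo small)
p = 0.001
adj = {i: np.flatnonzero(rng.random(n) < p) for i in range(100)}
print(height.mean(), len(adj))
```

Real population generators replace the random graph with structured social-network models, which is the hard part the paper is concerned with.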
Large-scale assembly of colloidal particles
NASA Astrophysics Data System (ADS)
Yang, Hongta
This study reports a simple, roll-to-roll compatible coating technology for producing three-dimensional highly ordered colloidal crystal-polymer composites, colloidal crystals, and macroporous polymer membranes. A vertically beveled doctor blade is utilized to shear align silica microsphere-monomer suspensions to form large-area composites in a single step. The polymer matrix and the silica microspheres can be selectively removed to create colloidal crystals and self-standing macroporous polymer membranes. The thickness of the shear-aligned crystal is correlated with the viscosity of the colloidal suspension and the coating speed, and the correlations can be qualitatively explained by adapting the mechanisms developed for conventional doctor blade coating. Five important research topics related to the application of large-scale three-dimensional highly ordered macroporous films by doctor blade coating are covered in this study. The first topic describes an invention in large-area, low-cost color reflective displays. This invention is inspired by the heat pipe technology. The self-standing macroporous polymer films exhibit brilliant colors which originate from the Bragg diffraction of visible light from the three-dimensional highly ordered air cavities. The colors can be easily changed by tuning the size of the air cavities to cover the whole visible spectrum. When the air cavities are filled with a solvent which has the same refractive index as that of the polymer, the macroporous polymer films become completely transparent due to the index matching. When the solvent trapped in the cavities is evaporated by in-situ heating, the sample color changes back to brilliant color. This process is highly reversible and reproducible for thousands of cycles. The second topic reports the achievement of rapid and reversible vapor detection by using 3-D macroporous photonic crystals. Capillary condensation of a condensable vapor in the interconnected macropores leads to the
SDI Large-Scale System Technology Study.
2007-11-02
...problem is illustrated in Figures 7-3 through 7-7. The problem chosen for this example is the post-interceptor-launch midcourse-guidance problem, which...optimal interconnection of the blocks. The possible relevance to the post-interceptor-launch midcourse-guidance problem of the candidate architecture
Geospatial optimization of siting large-scale solar projects
Macknick, Jordan; Quinby, Ted; Caulfield, Emmet; Gerritsen, Margot; Diffendorfer, James E.; Haines, Seth S.
2014-01-01
guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.
A Large Scale Virtual Gas Sensor Array
NASA Astrophysics Data System (ADS)
Ziyatdinov, Andrey; Fernández-Diaz, Eduard; Chaudry, A.; Marco, Santiago; Persaud, Krishna; Perera, Alexandre
2011-09-01
This paper depicts a virtual sensor array that allows the user to generate synthetic gas sensor data while controlling a wide variety of the characteristics of the sensor array response: an arbitrary number of sensors, support for multi-component gas mixtures, and full control of noise in the system such as sensor drift or sensor aging. The artificial sensor array response is inspired by the response of 17 polymeric sensors to three analytes over 7 months. The main trends in the synthetic gas sensor array, such as sensitivity, diversity, drift and sensor noise, are user controlled. Sensor sensitivity is modeled by an optionally linear or nonlinear (spline-based) method. The data-generation toolbox is implemented in the open source R language for statistical computing and can be freely accessed as an educational resource or benchmarking reference. The software package permits the design of scenarios with a very large number of sensors (over 10000 sensels), which are employed in the test and benchmarking of neuromorphic models in the Bio-ICT European project NEUROCHEM.
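A minimal sketch of such a synthetic array, written in Python rather than the toolbox's R: per-sensor nonlinear sensitivity curves (a power law here, standing in for the spline-based model), plus user-controlled drift and noise. All parameter ranges are invented:

```python
import numpy as np

rng = np.random.default_rng(4)

n_sensors, n_samples = 17, 200
conc = rng.uniform(0.1, 1.0, n_samples)     # single-analyte concentration

# Per-sensor nonlinear sensitivity: response = gain * conc**power
gain = rng.uniform(0.5, 2.0, n_sensors)
power = rng.uniform(0.7, 1.3, n_sensors)
response = gain[None, :] * conc[:, None] ** power[None, :]

# User-controlled imperfections: slow additive drift plus white noise
drift = 0.002 * np.arange(n_samples)[:, None] * rng.uniform(0.5, 1.5, n_sensors)
noise = rng.normal(0.0, 0.01, response.shape)
array_out = response + drift + noise        # (samples x sensors) data matrix
print(array_out.shape)
```

Because every imperfection is injected explicitly, the ground truth is known, which is what makes such synthetic arrays useful as benchmarking references.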
Optimal control of a satellite-robot system using direct collocation with non-linear programming
NASA Astrophysics Data System (ADS)
Coverstone-Carroll, V. L.; Wilkey, N. M.
1995-08-01
The non-holonomic behavior of a satellite-robot system is used to develop the system's equations of motion. The resulting non-linear differential equations are transformed into a non-linear programming problem using direct collocation. The link rates of the robot are minimized along optimal reorientations. Optimal solutions to several maneuvers are obtained and the results are interpreted to gain an understanding of the satellite-robot dynamics.
Large-Scale Sequential Quadratic Programming Algorithms
1992-09-01
KKT conditions for the QP subproblem shows that pk(gk) = pk(∇L). To see why, note that the KKT necessary and sufficient conditions ...number of active functional constraints at x*). Let Z* be a basis for the null space of A* (so that A*Z* = 0). The KKT necessary conditions for (x*, λ*... optimal (i.e. when the working set has identified the active set). Let g* = Fk p* + gk and omit the subscript k. Consider the first-order KKT conditions for
Multitree Algorithms for Large-Scale Astrostatistics
NASA Astrophysics Data System (ADS)
March, William B.; Ozakin, Arkadas; Lee, Dongryeol; Riegel, Ryan; Gray, Alexander G.
2012-03-01
this number every week, resulting in billions of objects. At such scales, even linear-time analysis operations present challenges, particularly since statistical analyses are inherently interactive processes, requiring that computations complete within some reasonable human attention span. The quadratic (or worse) runtimes of straightforward implementations become quickly unbearable. Examples of applications. These analysis subroutines occur ubiquitously in astrostatistical work. We list just a few examples. The need to cross-match objects across different catalogs has led to various algorithms, which at some point perform an AllNN computation. 2-point and higher-order spatial correlations form the basis of spatial statistics, and are utilized in astronomy to compare the spatial structures of two datasets, such as an observed sample and a theoretical sample, forming the basis for two-sample hypothesis testing. Friends-of-friends clustering is often used to identify halos in data from astrophysical simulations. Minimum spanning tree properties have also been proposed as statistics of large-scale structure. Comparison of the distributions of different kinds of objects requires accurate density estimation, for which KDE is the overall statistical method of choice. The prediction of redshifts from optical data requires accurate regression, for which kernel regression is a powerful method. The identification of objects of various types in astronomy, such as stars versus galaxies, requires accurate classification, for which KDA is a powerful method. Overview. In this chapter, we will briefly sketch the main ideas behind recent fast algorithms which achieve, for example, linear runtimes for pairwise-distance problems, or similarly dramatic reductions in computational growth.
In some cases, the runtime orders for these algorithms are mathematically provable statements, while in others we have only conjectures backed by experimental observations for the time being
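The AllNN primitive mentioned above can be sketched with a single kd-tree query between two mock catalogs; a dual-tree implementation would additionally share work across the query points, but the spatial-tree idea is the same (uniform random points, purely illustrative):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(6)

# Two mock "catalogs" of 2-D positions to cross-match
cat_a = rng.uniform(0.0, 1.0, size=(5000, 2))
cat_b = rng.uniform(0.0, 1.0, size=(5000, 2))

# Build a kd-tree on one catalog, then find the nearest b-object
# for every a-object; far cheaper than the O(N^2) brute-force scan.
tree = cKDTree(cat_b)
dist, idx = tree.query(cat_a, k=1)
print(dist.mean(), idx.shape)
```

The same tree structures underlie the fast n-point correlation, KDE, and friends-of-friends routines listed above; what changes is the per-node pruning rule.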
Li, Hancao; Haddad, Wassim M.
2012-01-01
We develop optimal respiratory airflow patterns using a nonlinear multicompartment model for a lung mechanics system. Specifically, we use classical calculus of variations minimization techniques to derive an optimal airflow pattern for inspiratory and expiratory breathing cycles. The physiological interpretation of the optimality criteria used involves the minimization of work of breathing and lung volume acceleration for the inspiratory phase, and the minimization of the elastic potential energy and rapid airflow rate changes for the expiratory phase. Finally, we numerically integrate the resulting nonlinear two-point boundary value problems to determine the optimal airflow patterns over the inspiratory and expiratory breathing cycles. PMID:22719793
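The flavor of the inspiratory problem can be sketched with a discretized quadratic surrogate: penalize airflow (a surrogate for work of breathing) and volume acceleration, with the start and end lung volumes fixed. The criterion, weights, and single-compartment setting are illustrative, not the paper's multicompartment model:

```python
import numpy as np

N, T, VT = 201, 1.0, 0.5                 # grid points, phase length [s], tidal volume [l]
h = T / (N - 1)

# First- and second-difference operators (flow and volume acceleration)
Id = np.eye(N)
D1 = (Id[1:] - Id[:-1]) / h              # (N-1) x N
D2 = (Id[2:] - 2 * Id[1:-1] + Id[:-2]) / h**2   # (N-2) x N

# J(V) = |flow|^2 + w * |acceleration|^2, a quadratic form in the volumes
w = 1e-4
H = D1.T @ D1 + w * (D2.T @ D2)

# Fix boundary volumes, solve the normal equations on the interior points
V = np.zeros(N)
V[-1] = VT
free = np.arange(1, N - 1)
rhs = -(H[np.ix_(free, [0, N - 1])] @ V[[0, -1]])
V[free] = np.linalg.solve(H[np.ix_(free, free)], rhs)
flow = D1 @ V                            # resulting optimal airflow pattern
print(flow[:3])
```

For this surrogate the optimum is a constant-flow (linear volume) profile; the paper's nonlinear multicompartment criteria yield richer patterns via the Euler-Lagrange boundary value problems it integrates numerically.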
Distributed Optimization for a Class of Nonlinear Multiagent Systems With Disturbance Rejection.
Wang, Xinghu; Hong, Yiguang; Ji, Haibo
2016-07-01
The paper studies the distributed optimization problem for a class of nonlinear multiagent systems in the presence of external disturbances. To solve the problem, we need to achieve optimal multiagent consensus based on local cost function information and neighboring information, while rejecting local disturbance signals modeled by an exogenous system. With convex analysis and the internal model approach, we propose a distributed optimization controller for heterogeneous and nonlinear agents in the form of continuous-time minimum-phase systems with unity relative degree. We prove that the proposed design solves the exact optimization problem while rejecting disturbances.
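The problem setup, each agent descending its own local cost while mixing estimates with its neighbors, can be sketched in discrete time. This is plain consensus-plus-gradient on quadratic costs, not the paper's continuous-time internal-model controller, and all numbers are illustrative:

```python
import numpy as np

# Each of n agents holds a private quadratic cost f_i(x) = (x - a_i)^2 / 2;
# the team optimum of sum_i f_i is the mean of the a_i (here, 2.0).
a = np.array([1.0, 3.0, -2.0, 6.0])
n = a.size

# Ring-graph mixing matrix (doubly stochastic): average with two neighbors
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = np.zeros(n)                    # each agent's local estimate
for k in range(2000):
    step = 1.0 / (k + 10)          # diminishing step size
    x = W @ x - step * (x - a)     # mix with neighbors, descend local gradient
print(x)
```

Each agent only ever sees its own gradient and its neighbors' estimates, yet all estimates approach the global optimizer.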
Aircraft design for mission performance using nonlinear multiobjective optimization methods
NASA Technical Reports Server (NTRS)
Dovi, Augustine R.; Wrenn, Gregory A.
1990-01-01
A new technique which converts a constrained optimization problem to an unconstrained one where conflicting figures of merit may be simultaneously considered was combined with a complex mission analysis system. The method is compared with existing single and multiobjective optimization methods. A primary benefit from this new method for multiobjective optimization is the elimination of separate optimizations for each objective, which is required by some optimization methods. A typical wide body transport aircraft is used for the comparative studies.
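One standard way to fold several figures of merit into a single smooth unconstrained function is the Kreisselmeier-Steinhauser (KS) envelope; whether this matches the paper's exact formulation is not confirmed here, and the toy objectives below are invented:

```python
import numpy as np

def ks(g, rho=50.0):
    # KS envelope: a smooth, overflow-safe upper bound on max(g)
    gmax = np.max(g)
    return gmax + np.log(np.sum(np.exp(rho * (g - gmax)))) / rho

def objectives(x):
    # Two conflicting figures of merit in one design variable x
    return np.array([(x - 1.0) ** 2, (x + 1.0) ** 2])

# Minimize the single envelope function by 1-D golden-section search
lo, hi = -2.0, 2.0
phi = (np.sqrt(5.0) - 1.0) / 2.0
for _ in range(80):
    m1, m2 = hi - phi * (hi - lo), lo + phi * (hi - lo)
    if ks(objectives(m1)) < ks(objectives(m2)):
        hi = m2
    else:
        lo = m1
x_star = 0.5 * (lo + hi)
print(x_star)
```

Both conflicting objectives are considered simultaneously through the single envelope, so no per-objective optimization passes are needed, which is the benefit the abstract highlights.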
Embedding based on function approximation for large scale image search.
Do, Thanh-Toan; Cheung, Ngai-Man
2017-03-23
The objective of this paper is to design an embedding method that maps local features describing an image (e.g. SIFT) to a higher dimensional representation useful for the image retrieval problem. First, motivated by the relationship between the linear approximation of a nonlinear function in high dimensional space and the state-of-the-art feature representation used in image retrieval, i.e., VLAD, we propose a new approach for the approximation. The embedded vectors resulting from the function approximation process are then aggregated to form a single representation for image retrieval. Second, in order to make the proposed embedding method applicable to large scale problems, we further derive its fast version in which the embedded vectors can be efficiently computed, i.e., in closed form. We compare the proposed embedding methods with the state of the art in the context of image search under various settings: when the images are represented by medium length vectors, short vectors, or binary vectors. The experimental results show that the proposed embedding methods outperform the existing state of the art on standard public image retrieval benchmarks.
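For context, the VLAD representation the paper builds on can be sketched as nearest-codeword assignment followed by residual pooling; the paper's contribution generalizes this via a function-approximation view. Codebook and descriptors here are random, purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

d, K, n = 16, 8, 300
codebook = rng.normal(size=(K, d))    # K "visual words"
desc = rng.normal(size=(n, d))        # n local descriptors of one image

# Hard-assign each descriptor to its nearest codeword
d2 = ((desc[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
assign = d2.argmin(axis=1)

# Aggregate residuals per codeword into one K*d image vector
vlad = np.zeros((K, d))
for i in range(n):
    vlad[assign[i]] += desc[i] - codebook[assign[i]]
vlad = vlad.ravel()
vlad /= np.linalg.norm(vlad) + 1e-12  # L2 normalization
print(vlad.shape)
```

The residual sum per codeword is exactly a piecewise-linear approximation step, which is the connection to nonlinear function approximation the abstract invokes.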
Large scale electromechanical transistor with application in mass sensing
Jin, Leisheng; Li, Lijie
2014-12-07
Nanomechanical transistor (NMT) has evolved from the single electron transistor, a device that operates by shuttling electrons with a self-excited central conductor. The unfavoured aspects of the NMT are the complexity of the fabrication process and its signal processing unit, which could potentially be overcome by designing much larger devices. This paper reports a new design of large scale electromechanical transistor (LSEMT), still taking advantage of the principle of shuttling electrons. However, because of the large size, nonlinear electrostatic forces induced by the transistor itself are not sufficient to drive the mechanical member into vibration—an external force has to be used. In this paper, a LSEMT device is modelled, and its new application in mass sensing is postulated using two coupled mechanical cantilevers, with one of them being embedded in the transistor. The sensor is capable of detecting added mass using the eigenstate-shift method by reading the change of electrical current from the transistor, which has much higher sensitivity than the conventional eigenfrequency-shift approach used in classical cantilever-based mass sensors. Numerical simulations are conducted to investigate the performance of the mass sensor.
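The underlying coupled-cantilever physics can be illustrated with a two-degree-of-freedom spring-mass model: a small added mass on one cantilever shifts the eigenvalues (and, more sensitively, localizes the eigenstates). All parameter values are illustrative only:

```python
import numpy as np

def eigenfreqs(m1, m2, k=1.0, kc=0.05):
    # Two cantilevers (stiffness k each) joined by a weak coupling spring kc
    K = np.array([[k + kc, -kc],
                  [-kc, k + kc]])
    M = np.diag([m1, m2])
    w2 = np.linalg.eigvals(np.linalg.inv(M) @ K)   # generalized eigenvalues
    return np.sort(np.sqrt(w2.real))               # angular eigenfrequencies

base = eigenfreqs(1.0, 1.0)
loaded = eigenfreqs(1.0, 1.0 + 1e-3)   # tiny added mass on cantilever 2
shift = base - loaded                   # both eigenfrequencies drop slightly
print(base, loaded, shift)
```

In the identical-mass case the eigenstates are symmetric/antisymmetric; the added mass breaks that symmetry, and it is this eigenstate change, read out electrically, that the proposed sensor exploits.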
Bias in the effective field theory of large scale structures
Senatore, Leonardo
2015-11-05
We study how to describe collapsed objects, such as galaxies, in the context of the Effective Field Theory of Large Scale Structures. The overdensity of galaxies at a given location and time is determined by the initial tidal tensor, velocity gradients and spatial derivatives of the regions of dark matter that, during the evolution of the universe, ended up at that given location. Similarly to what was recently done for dark matter, we show how this Lagrangian space description can be recovered by upgrading simpler Eulerian calculations. We describe the Eulerian theory. We show that it is perturbatively local in space, but non-local in time, and we explain the observational consequences of this fact. We give an argument for why to a certain degree of accuracy the theory can be considered as quasi time-local and explain what the operator structure is in this case. Furthermore, we describe renormalization of the bias coefficients so that, after this and after upgrading the Eulerian calculation to a Lagrangian one, the perturbative series for galaxies correlation functions results in a manifestly convergent expansion in powers of k/k_{NL} and k/k_{M}, where k is the wavenumber of interest, k_{NL} is the wavenumber associated to the non-linear scale, and k_{M} is the comoving wavenumber enclosing the mass of a galaxy.
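Schematically, the time non-locality described above means the galaxy overdensity is an integral over past times of operators evaluated along the fluid trajectory (notation simplified; the kernels c and the operator list are placeholders for the paper's full structure):

```latex
\delta_g(\vec{x},t) \;\simeq\; \int^{t}\! dt'\,
  \Big[\, c_{\delta}(t,t')\,\delta\big(\vec{x}_{\mathrm{fl}}(t'),t'\big)
  \;+\; c_{s^2}(t,t')\,s^2\big(\vec{x}_{\mathrm{fl}}(t'),t'\big)
  \;+\; \dots \Big],
```

where x_fl is the fluid trajectory ending at x at time t. Renormalizing the bias coefficients then yields the convergent expansion in k/k_NL and k/k_M quoted above.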
Bias in the effective field theory of large scale structures
Senatore, Leonardo
2015-11-05
We study how to describe collapsed objects, such as galaxies, in the context of the Effective Field Theory of Large Scale Structures. The overdensity of galaxies at a given location and time is determined by the initial tidal tensor, velocity gradients and spatial derivatives of the regions of dark matter that, during the evolution of the universe, ended up at that given location. Similarly to what was recently done for dark matter, we show how this Lagrangian space description can be recovered by upgrading simpler Eulerian calculations. We describe the Eulerian theory. We show that it is perturbatively local inmore » space, but non-local in time, and we explain the observational consequences of this fact. We give an argument for why to a certain degree of accuracy the theory can be considered as quasi time-local and explain what the operator structure is in this case. Furthermore, we describe renormalization of the bias coefficients so that, after this and after upgrading the Eulerian calculation to a Lagrangian one, the perturbative series for galaxies correlation functions results in a manifestly convergent expansion in powers of k/kNL and k/kM, where k is the wavenumber of interest, kNL is the wavenumber associated to the non-linear scale, and kM is the comoving wavenumber enclosing the mass of a galaxy.« less
Decentralization, stabilization, and estimation of large-scale linear systems
NASA Technical Reports Server (NTRS)
Siljak, D. D.; Vukcevic, M. B.
1976-01-01
In this short paper we consider three closely related aspects of large-scale systems: decentralization, stabilization, and estimation. A method is proposed to decompose a large linear system into a number of interconnected subsystems with decentralized (scalar) inputs or outputs. The procedure is preliminary to the hierarchic stabilization and estimation of linear systems and is performed on the subsystem level. A multilevel control scheme based upon the decomposition-aggregation method is developed for stabilization of input-decentralized linear systems. Local linear feedback controllers are used to stabilize each decoupled subsystem, while global linear feedback controllers are utilized to minimize the coupling effect among the subsystems. Systems stabilized by the method have a tolerance to a wide class of nonlinearities in subsystem coupling and high reliability with respect to structural perturbations. The proposed output-decentralization and stabilization schemes can be used directly to construct asymptotic state estimators for large linear systems on the subsystem level. The problem of dimensionality is resolved by constructing a number of low-order estimators, thus avoiding a design of a single estimator for the overall system.
Statistics of density maxima and the large-scale matter distribution
NASA Technical Reports Server (NTRS)
Kaiser, N.
1986-01-01
High peaks in Gaussian noise display enhanced clustering. The enhancement takes two forms: on large scales one obtains a linear amplification of the correlation function which is independent of scale. On smaller scales, but larger than the mass scale of the peaks themselves, a nonlinear (exponential) enhancement of the number density of high peaks in overdense regions arises. The large-scale correlations of Abell's rich clusters can be understood as a manifestation of this phenomenon. If the formation of bright galaxies favors the high overdensity peaks then the number of galaxies (per unit mass) in clusters and groups may be considerably enhanced. Consequences of these ideas for the density parameter and the large-scale matter distribution are discussed.
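The two enhancement regimes described above can be summarized in the standard high-peak formulas (a sketch from peak statistics with threshold δ_c = νσ, so the linear bias is b = ν/σ; conventions follow the usual threshold-clipping treatment and need not match the paper's exact notation):

```latex
% Large-scale (linear) amplification of the correlation function:
\xi_{\rm pk}(r) \;\simeq\; \left(\frac{\nu}{\sigma}\right)^{2}\,\xi(r),
\qquad \xi \ll 1,
% and the nonlinear (exponential) enhancement on smaller scales:
1 + \xi_{\rm pk}(r) \;\simeq\; \exp\!\left[\left(\frac{\nu}{\sigma}\right)^{2}\,\xi(r)\right].
```

The second expression reduces to the first when the exponent is small, which is why the amplification looks scale-independent on large scales.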
Sufficient observables for large-scale structure in galaxy surveys
NASA Astrophysics Data System (ADS)
Carron, J.; Szapudi, I.
2014-03-01
Beyond the linear regime, the power spectrum and higher order moments of the matter field no longer capture all cosmological information encoded in density fluctuations. While non-linear transforms have been proposed to extract this information lost to traditional methods, up to now, the way to generalize these techniques to discrete processes was unclear; ad hoc extensions had some success. We pointed out in Carron and Szapudi's paper that the logarithmic transform approximates extremely well the optimal `sufficient statistics', observables that extract all information from the (continuous) matter field. Building on these results, we generalize optimal transforms to discrete galaxy fields. We focus our calculations on the Poisson sampling of an underlying lognormal density field. We solve and test the one-point case in detail, and sketch out the sufficient observables for the multipoint case. Moreover, we present an accurate approximation to the sufficient observables in terms of the mean and spectrum of a non-linearly transformed field. We find that the corresponding optimal non-linear transformation is directly related to the maximum a posteriori Bayesian reconstruction of the underlying continuous field with a lognormal prior as put forward in the paper of Kitaura et al.. Thus, simple recipes for realizing the sufficient observables can be built on previously proposed algorithms that have been successfully implemented and tested in simulations.
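The role of the logarithmic transform is easy to see at the one-point level: for a lognormal overdensity, ln(1+δ) is exactly Gaussian, so the log field carries its information in Gaussian-friendly statistics. A minimal sketch (pure Python; the field here is uncorrelated samples, not a real density map, and the numbers are illustrative):

```python
import math
import random

random.seed(42)

# Draw a lognormal "density": 1 + delta = exp(g - s^2/2) with g ~ N(0, s^2),
# so that delta has zero mean.  The log transform A = ln(1 + delta) then
# recovers the underlying Gaussian variable up to a constant shift.
s = 1.0
delta = [math.exp(random.gauss(0.0, s) - 0.5 * s * s) - 1.0 for _ in range(20000)]
log_field = [math.log(1.0 + d) for d in delta]

def skewness(xs):
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / n
    return sum((x - m) ** 3 for x in xs) / n / var ** 1.5

# The raw overdensity is strongly skewed; its log is (nearly) symmetric.
print(round(skewness(delta), 2), round(skewness(log_field), 2))
```

For a Poisson-sampled (discrete) field, as in the paper, the optimal transform is no longer exactly the logarithm, which is the generalization the abstract addresses.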
NASA Astrophysics Data System (ADS)
Farano, Mirko; Cherubini, Stefania; Robinet, Jean-Christophe; De Palma, Pietro
2016-12-01
Subcritical transition in plane Poiseuille flow is investigated by means of a Lagrange-multiplier direct-adjoint optimization procedure with the aim of finding localized three-dimensional perturbations optimally growing in a given time interval (target time). Space localization of these optimal perturbations (OPs) is achieved by choosing as objective function either a p-norm (with p ≫ 1) of the perturbation energy density in a linear framework, or the classical (1-norm) perturbation energy, including nonlinear effects. This work aims at analyzing the structure of linear and nonlinear localized OPs for Poiseuille flow, and comparing their transition thresholds and scenarios. The nonlinear optimization approach provides three types of solutions: a weakly nonlinear, a hairpin-like and a highly nonlinear optimal perturbation, depending on the value of the initial energy and the target time. The weakly nonlinear solution shows localization only in the wall-normal direction, whereas the highly nonlinear one appears much more localized and breaks the spanwise symmetry found at lower target times. Both solutions show spanwise inclined vortices and large values of the streamwise component of velocity already at the initial time. On the other hand, p-norm optimal perturbations, although being strongly localized in space, keep a shape similar to linear 1-norm optimal perturbations, showing streamwise-aligned vortices characterized by low values of the streamwise velocity component. When used for initializing direct numerical simulations, in most of the cases nonlinear OPs provide the most efficient route to transition in terms of time to transition and initial energy, even when they are less localized in space than the p-norm OP. The p-norm OP follows a transition path similar to the oblique transition scenario, with slightly oscillating streaks which saturate and eventually experience secondary instability. On the other hand, the nonlinear OP rapidly forms large-amplitude bent streaks and skips the phases

Xia, Youshen; Feng, Gang; Wang, Jun
2008-08-01
This paper presents a novel recurrent neural network for solving nonlinear optimization problems with inequality constraints. Under the condition that the Hessian matrix of the associated Lagrangian function is positive semidefinite, it is shown that the proposed neural network is stable at a Karush-Kuhn-Tucker point in the sense of Lyapunov and its output trajectory is globally convergent to a minimum solution. Compared with variety of the existing projection neural networks, including their extensions and modification, for solving such nonlinearly constrained optimization problems, it is shown that the proposed neural network can solve constrained convex optimization problems and a class of constrained nonconvex optimization problems and there is no restriction on the initial point. Simulation results show the effectiveness of the proposed neural network in solving nonlinearly constrained optimization problems.
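The continuous-time dynamics of such projection-type networks can be sketched with a simple Euler discretization (this is a generic projection network for a box-constrained convex problem, not the paper's specific architecture; the objective and constraint set are illustrative):

```python
# Euler discretization of a projection-type network dx/dt = P(x - grad f(x)) - x
# for min f(x) = (x1 - 3)^2 + (x2 + 2)^2 over the box [0, 1] x [0, 1].
# The unique minimizer is the box projection of (3, -2), namely (1, 0),
# which is a KKT point of the constrained problem.

def grad(x):
    return [2.0 * (x[0] - 3.0), 2.0 * (x[1] + 2.0)]

def project(x):  # projection onto the box [0, 1]^2
    return [min(1.0, max(0.0, v)) for v in x]

x = [0.5, 0.5]   # the network is insensitive to the initial point
dt = 0.1
for _ in range(500):
    g = grad(x)
    p = project([x[i] - g[i] for i in range(2)])
    x = [x[i] + dt * (p[i] - x[i]) for i in range(2)]

print([round(v, 4) for v in x])  # → [1.0, 0.0]
```

At equilibrium x = P(x − ∇f(x)), which is exactly the variational characterization of a KKT point for this constraint set.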
NASA Astrophysics Data System (ADS)
Okou, Francis A.; Akhrif, Ouassima; Dessaint, Louis A.; Bouchard, Derrick
2013-05-01
This paper introduces a decentralized multivariable robust adaptive voltage and frequency regulator to ensure the stability of large-scale interconnected generators. Interconnection parameters (i.e. load, line and transformer parameters) are assumed to be unknown. The proposed design approach requires the reformulation of conventional power system models into a multivariable model with generator terminal voltages as state variables, and excitation and turbine valve inputs as control signals. This model, while suitable for the application of modern control methods, introduces problems with regard to current design techniques for large-scale systems. Interconnection terms, which are treated as perturbations, do not meet the common matching condition assumption. A new adaptive method for a certain class of large-scale systems is therefore introduced that does not require the matching condition. The proposed controller consists of nonlinear inputs that cancel some nonlinearities of the model. Auxiliary controls with linear and nonlinear components are used to stabilize the system. They compensate for unknown parameters of the model by updating both the nonlinear component gains and excitation parameters. The adaptation algorithms involve the sigma-modification approach for auxiliary control gains, and the projection approach for excitation parameters to prevent estimation drift. The computation of the matrix gain of the controller's linear component requires the solution of an algebraic Riccati equation and helps to solve the perturbation-mismatching problem. A realistic power system is used to assess the proposed controller's performance. The results show that both stability and transient performance are considerably improved following a severe contingency.
NASA Astrophysics Data System (ADS)
Royston, T. J.; Singh, R.
1996-07-01
While significant non-linear behavior has been observed in many vibration mounting applications, most design studies are typically based on the concept of linear system theory in terms of force or motion transmissibility. In this paper, an improved analytical strategy is presented for the design optimization of complex, active or passive, non-linear mounting systems. This strategy is built upon the computational Galerkin method of weighted residuals, and incorporates order reduction and numerical continuation in an iterative optimization scheme. The overall dynamic characteristics of the mounting system are considered and vibratory power transmission is minimized via adjustment of mount parameters by using both passive and active means. The method is first applied through a computational example case to the optimization of basic passive and active, non-linear isolation configurations. It is found that either active control or intentionally introduced non-linearity can improve the mount's performance; but a combination of both produces the greatest benefit. Next, a novel experimental, active, non-linear isolation system is studied. The effect of non-linearity on vibratory power transmission and active control is assessed via experimental measurements and the enhanced Galerkin method. Results show how harmonic excitation can result in multiharmonic vibratory power transmission. The proposed optimization strategy offers designers some flexibility in utilizing both passive and active means in combination with linear and non-linear components for improved vibration mounts.
SALSA - a Sectional Aerosol module for Large Scale Applications
NASA Astrophysics Data System (ADS)
Kokkola, H.; Korhonen, H.; Lehtinen, K. E. J.; Makkonen, R.; Asmi, A.; Järvenoja, S.; Anttila, T.; Partanen, A.-I.; Kulmala, M.; Järvinen, H.; Laaksonen, A.; Kerminen, V.-M.
2007-12-01
The sectional aerosol module SALSA is introduced. The model has been designed to be implemented in large scale climate models, which require both accuracy and computational efficiency. We have used multiple methods to reduce the computational burden of different aerosol processes to optimize the model performance without losing physical features relevant to problems of climate importance. The optimizations include limiting the chemical compounds and physical processes available in different size sections of aerosol particles; dividing the size distribution into size sections of variable width depending on the sensitivity of microphysical processing to the particle sizes; keeping the total number of size sections used to describe the size distribution to a minimum; and, furthermore, calculating only the relevant microphysical processes affecting each size section. The ability of the module to describe different microphysical processes was evaluated against explicit microphysical models and several microphysical models used in air quality models. The results from the current module show good consistency when compared to more explicit models. Also, the module was used to simulate a new particle formation event typical in highly polluted conditions, with results comparable to a more explicit model setup.
Large-scale structural monitoring systems
NASA Astrophysics Data System (ADS)
Solomon, Ian; Cunnane, James; Stevenson, Paul
2000-06-01
Extensive structural health instrumentation systems have been installed on three long-span cable-supported bridges in Hong Kong. The quantities measured include environment and applied loads (such as wind, temperature, seismic and traffic loads) and the bridge response to these loadings (accelerations, displacements, and strains). Measurements from over 1000 individual sensors are transmitted to central computing facilities via local data acquisition stations and a fault-tolerant fiber-optic network, and are acquired and processed continuously. The data from the systems is used to provide information on structural load and response characteristics, comparison with design, optimization of inspection, and assurance of continued bridge health. Automated data processing and analysis provides information on important structural and operational parameters. Abnormal events are noted and logged automatically. Information of interest is automatically archived for post-processing. Novel aspects of the instrumentation system include a fluid-based high-accuracy long-span Level Sensing System to measure bridge deck profile and tower settlement. This paper provides an outline of the design and implementation of the instrumentation system. A description of the design and implementation of the data acquisition and processing procedures is also given. Examples of the use of similar systems in monitoring other large structures are discussed.
Large Scale Turbulent Structures in Supersonic Jets
NASA Technical Reports Server (NTRS)
Rao, Ram Mohan; Lundgren, Thomas S.
1997-01-01
Jet noise is a major concern in the design of commercial aircraft. Studies by various researchers suggest that aerodynamic noise is a major contributor to jet noise. Some of these studies indicate that most of the aerodynamic jet noise due to turbulent mixing occurs when there is a rapid variation in turbulent structure, i.e. rapidly growing or decaying vortices. The objective of this research was to simulate a compressible round jet to study the non-linear evolution of vortices and the resulting acoustic radiations, and in particular to understand the effect of turbulence structure on the noise. An ideal technique to study this problem is Direct Numerical Simulation (DNS), because it provides precise control on the initial and boundary conditions that lead to the turbulent structures studied. It also provides complete 3-dimensional time dependent data. Since the dynamics of a temporally evolving jet are not greatly different from those of a spatially evolving jet, a temporal jet problem was solved, using periodicity in the direction of the jet axis. This enables the application of Fourier spectral methods in the streamwise direction. Physically this means that turbulent structures in the jet are repeated in successive downstream cells instead of being gradually modified downstream into a jet plume. The DNS jet simulation helps us understand the various turbulent scales and mechanisms of turbulence generation in the evolution of a compressible round jet. These accurate flow solutions will be used in future research to estimate near-field acoustic radiation by computing the total outward flux across a surface and determine how it is related to the evolution of the turbulent solutions. Furthermore, these simulations allow us to investigate the sensitivity of acoustic radiations to inlet/boundary conditions, with possible application to active noise suppression. In addition, the data generated can be used to compute various turbulence quantities such as mean velocities
Solving mixed integer nonlinear programming problems using spiral dynamics optimization algorithm
NASA Astrophysics Data System (ADS)
Kania, Adhe; Sidarto, Kuntjoro Adji
2016-02-01
Many engineering and practical problems can be modeled as mixed integer nonlinear programs. This paper proposes to solve such problems with a modified version of the spiral dynamics inspired optimization method of Tamura and Yasuda. Four test cases have been examined, including problems in engineering and sport. The method succeeds in obtaining the optimal result in all test cases.
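The core update of spiral dynamics optimization rotates each search point about the incumbent best while contracting toward it. A minimal continuous 2-D sketch (the contraction factor r, rotation angle θ, and test objective are illustrative, not taken from the paper):

```python
import math
import random

# Minimal 2-D sketch of the spiral dynamics idea: every point rotates about
# the current best point by angle theta while contracting by factor r, so
# the population spirals in on the incumbent solution.
def spiral_optimize(f, n_points=25, iters=300, r=0.95, theta=math.pi / 4):
    random.seed(0)
    c, s = math.cos(theta), math.sin(theta)
    pts = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(n_points)]
    best = list(min(pts, key=f))
    for _ in range(iters):
        for p in pts:
            dx, dy = p[0] - best[0], p[1] - best[1]
            p[0] = best[0] + r * (c * dx - s * dy)   # rotate by theta, shrink by r
            p[1] = best[1] + r * (s * dx + c * dy)
        cand = min(pts, key=f)
        if f(cand) < f(best):
            best = list(cand)
    return best

# Continuous test objective with minimum at (1, -2); a mixed-integer variant
# would round the integer-constrained coordinates inside f before evaluating.
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
best = spiral_optimize(f)
print(round(f(best), 4))
```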
Information Tailoring Enhancements for Large-Scale Social Data
2016-06-15
Intelligent Automation Incorporated, Progress Report No. 3: Information Tailoring Enhancements for Large-Scale Social Data. Submitted in accordance with... The system also gathers information about entities from all news articles and displays it on over one million entity pages [5][6], and the information is made
Optimal design of linear and non-linear dynamic vibration absorbers
NASA Astrophysics Data System (ADS)
Jordanov, I. N.; Cheshankov, B. I.
1988-05-01
An efficient numerical method is applied to obtain optimal parameters for both linear and non-linear damped dynamic vibration absorbers. The minimization of the vibration response has been carried out for damped as well as undamped force excited primary systems with linear and non-linear spring characteristics. Comparison is made with the optimum absorber parameters that are determined by using Den Hartog's classical results in the linear case. Six optimization criteria by which the response is minimized over narrow and broad frequency bands are examined. Pareto optimal solutions of the multi-objective decision making problem are obtained.
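For the linear baseline case, Den Hartog's classical equal-peak design gives the optimal absorber parameters in closed form; a small sketch (assuming a force-excited, undamped primary system, the setting in which these formulas hold):

```python
import math

# Den Hartog "equal-peak" optimum for a damped dynamic vibration absorber
# attached to an undamped, force-excited primary mass.
# mu is the absorber-to-primary mass ratio.
def den_hartog(mu):
    tuning = 1.0 / (1.0 + mu)                                # absorber/primary frequency ratio
    damping = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))  # optimal absorber damping ratio
    peak = math.sqrt(1.0 + 2.0 / mu)                         # resulting peak amplification
    return tuning, damping, peak

tuning, damping, peak = den_hartog(0.05)   # a 5% mass ratio, for illustration
print(round(tuning, 4), round(damping, 4), round(peak, 2))
```

Larger mass ratios lower the peak response but are rarely practical, which is one motivation for the non-linear and active extensions studied in the paper.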
Lossless Convexification of Control Constraints for a Class of Nonlinear Optimal Control Problems
NASA Technical Reports Server (NTRS)
Blackmore, Lars; Acikmese, Behcet; Carson, John M., III
2012-01-01
In this paper we consider a class of optimal control problems that have continuous-time nonlinear dynamics and nonconvex control constraints. We propose a convex relaxation of the nonconvex control constraints, and prove that the optimal solution to the relaxed problem is the globally optimal solution to the original problem with nonconvex control constraints. This lossless convexification enables a computationally simpler problem to be solved instead of the original problem. We demonstrate the approach in simulation with a planetary soft landing problem involving a nonlinear gravity field.
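For a typical nonconvex control bound, the relaxation has the following flavor (a schematic of the technique, not a restatement of the paper's exact problem class):

```latex
% Nonconvex annular control constraint:
0 < \rho_{1} \le \|u(t)\| \le \rho_{2}
% is replaced by a convex constraint set using a slack variable \Gamma(t):
\|u(t)\| \le \Gamma(t), \qquad \rho_{1} \le \Gamma(t) \le \rho_{2}.
```

Lossless convexification consists of proving that the relaxed optimum satisfies ‖u*(t)‖ = Γ*(t) almost everywhere, so the relaxed solution is feasible, and optimal, for the original nonconvex problem.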
Weak lensing of large scale structure in the presence of screening
Tessore, Nicolas; Metcalf, R. Benton; Giocoli, Carlo; E-mail: hans.winther@astro.ox.ac.uk; E-mail: pedro.ferreira@physics.ox.ac.uk
2015-10-01
A number of alternatives to general relativity exhibit gravitational screening in the non-linear regime of structure formation. We describe a set of algorithms that can produce weak lensing maps of large scale structure in such theories and can be used to generate mock surveys for cosmological analysis. By analysing a few basic statistics we indicate how these alternatives can be distinguished from general relativity with future weak lensing surveys.
An integrated optimal control algorithm for discrete-time nonlinear stochastic system
NASA Astrophysics Data System (ADS)
Kek, Sie Long; Lay Teo, Kok; Mohd Ismail, A. A.
2010-12-01
Consider a discrete-time nonlinear system with random disturbances appearing in the real plant and the output channel where the randomly perturbed output is measurable. An iterative procedure based on the linear quadratic Gaussian optimal control model is developed for solving the optimal control of this stochastic system. The optimal state estimate provided by Kalman filtering theory and the optimal control law obtained from the linear quadratic regulator problem are then integrated into the dynamic integrated system optimisation and parameter estimation algorithm. Upon convergence, the iterative solutions of the model-based optimal control problem match the solution of the original optimal control problem of the discrete-time nonlinear system, despite model-reality differences. An illustrative example is solved using the proposed method. The results obtained show the effectiveness of the proposed algorithm.
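A scalar sketch of the LQR building block such an algorithm iterates on (the plant numbers are illustrative; the full method also runs a Kalman filter and a parameter-estimation step, omitted here):

```python
# Solve the scalar discrete-time algebraic Riccati equation by fixed-point
# iteration, then form the optimal state-feedback gain u = -K x.
A, B, Q, R = 1.2, 1.0, 1.0, 1.0   # open-loop unstable scalar plant (|A| > 1)

P = Q
for _ in range(200):
    P = Q + A * A * P - (A * B * P) ** 2 / (R + B * B * P)

K = A * B * P / (R + B * B * P)   # optimal LQR gain
closed_loop = A - B * K           # stable closed-loop pole, |A - B K| < 1
print(round(K, 4), round(closed_loop, 4))
```

In the integrated scheme, this regulator acts on the Kalman state estimate rather than the true state, and both are re-solved as the model parameters are updated.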
Effects of Design Properties on Parameter Estimation in Large-Scale Assessments
ERIC Educational Resources Information Center
Hecht, Martin; Weirich, Sebastian; Siegle, Thilo; Frey, Andreas
2015-01-01
The selection of an appropriate booklet design is an important element of large-scale assessments of student achievement. Two design properties that are typically optimized are the "balance" with respect to the positions the items are presented and with respect to the mutual occurrence of pairs of items in the same booklet. The purpose…
Finite dimensional approximation of a class of constrained nonlinear optimal control problems
NASA Technical Reports Server (NTRS)
Gunzburger, Max D.; Hou, L. S.
1994-01-01
An abstract framework for the analysis and approximation of a class of nonlinear optimal control and optimization problems is constructed. Nonlinearities occur in both the objective functional and in the constraints. The framework includes an abstract nonlinear optimization problem posed on infinite dimensional spaces, and an approximate problem posed on finite dimensional spaces, together with a number of hypotheses concerning the two problems. The framework is used to show that optimal solutions exist, to show that Lagrange multipliers may be used to enforce the constraints, to derive an optimality system from which optimal states and controls may be deduced, and to derive existence results and error estimates for solutions of the approximate problem. The abstract framework and the results derived from that framework are then applied to three concrete control or optimization problems and their approximation by finite element methods. The first involves the von Karman plate equations of nonlinear elasticity, the second, the Ginzburg-Landau equations of superconductivity, and the third, the Navier-Stokes equations for incompressible, viscous flows.
Nonlinear optimization of buoyancy-driven ventilation flow
NASA Astrophysics Data System (ADS)
Nabi, Saleh; Grover, Piyush; Caulfield, C. P.
2016-11-01
We consider the optimization of buoyancy-driven flows governed by Boussinesq equations using the Direct-Adjoint-Looping method. We use incompressible Reynolds-averaged Navier-Stokes (RANS) equations, derive the corresponding adjoint equations and solve the resulting sensitivity equations with respect to inlet conditions. For validation, we solve a series of inverse-design problems, for which we recover known globally optimal solutions. For a displacement ventilation scenario with a line source, the numerical results are compared with analytically obtained optimal inlet conditions available from classical plume theory. Our results show that depending on Archimedes number, defined as the ratio of the inlet Reynolds number to the Rayleigh number associated with the plume, qualitatively different optimal solutions are obtained. For steady and transient plumes, and subject to an enthalpy constraint on the incoming flow, we identify boundary conditions leading to 'optimal' temperature distributions in the occupied zone.
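The direct-adjoint loop can be illustrated on a scalar toy model (the model, cost, and parameter names are illustrative, not the paper's RANS setup): integrate the state forward, sweep the adjoint backward, and the resulting gradient of the cost with respect to the "inlet" value u matches a finite-difference check:

```python
# Toy direct-adjoint loop for x_{k+1} = x_k + dt (u - x_k), with the scalar
# forcing u playing the role of the inlet condition being optimized.
N, dt, target = 100, 0.05, 1.5

def forward(u):
    xs = [0.0]
    for _ in range(N):
        xs.append((1.0 - dt) * xs[-1] + dt * u)
    return xs

def cost(xs):
    return sum(dt * (x - target) ** 2 for x in xs[1:])

def adjoint_gradient(u):
    xs = forward(u)
    lam, grad = 0.0, 0.0
    for k in range(N, 0, -1):          # backward (adjoint) sweep
        lam = 2.0 * dt * (xs[k] - target) + (1.0 - dt) * lam
        grad += dt * lam               # direct dependence of x_k on u
    return grad

u = 0.0
g_adj = adjoint_gradient(u)
eps = 1e-6
g_fd = (cost(forward(u + eps)) - cost(forward(u - eps))) / (2 * eps)
print(round(g_adj, 4), round(g_fd, 4))
```

The appeal of the adjoint approach, as in the abstract, is that one backward sweep yields the full gradient regardless of how many inlet parameters are optimized.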
Fracture-induced softening for large-scale ice dynamics
NASA Astrophysics Data System (ADS)
Albrecht, T.; Levermann, A.
2014-04-01
Floating ice shelves can exert a retentive and hence stabilizing force onto the inland ice sheet of Antarctica. However, this effect has been observed to be diminished by the dynamic effects of fracture processes within the protective ice shelves, leading to accelerated ice flow and hence to a sea-level contribution. In order to account for the macroscopic effect of fracture processes on large-scale viscous ice dynamics (i.e., ice-shelf scale) we incorporate a continuum representation of fractures and the related fracture growth into the prognostic Parallel Ice Sheet Model (PISM) and compare the results to observations. To this end we introduce a higher order accuracy advection scheme for the transport of the two-dimensional fracture density across the regular computational grid. Dynamic coupling of fractures and ice flow is attained by a reduction of effective ice viscosity proportional to the inferred fracture density. This formulation implies the possibility of non-linear threshold behavior due to self-amplified fracturing in shear regions triggered by small variations in the fracture-initiation threshold. As a result of prognostic flow simulations, sharp across-flow velocity gradients appear in fracture-weakened regions. These modeled gradients compare well in magnitude and location with those in observed flow patterns. This model framework is in principle expandable to grounded ice streams and provides simple means of investigating climate-induced effects on fracturing (e.g., hydrofracturing) and hence on the ice flow. It further constitutes a physically sound basis for an enhanced fracture-based calving parameterization.
Soft-Pion theorems for large scale structure
NASA Astrophysics Data System (ADS)
Horn, Bart; Hui, Lam; Xiao, Xiao
2014-09-01
Consistency relations — which relate an N-point function to a squeezed (N+1)-point function — are useful in large scale structure (LSS) because of their non-perturbative nature: they hold even if the N-point function is deep in the nonlinear regime, and even if they involve astrophysically messy galaxy observables. The non-perturbative nature of the consistency relations is guaranteed by the fact that they are symmetry statements, in which the velocity plays the role of the soft pion. In this paper, we address two issues: (1) how to derive the relations systematically using the residual coordinate freedom in the Newtonian gauge, and relate them to known results in ζ-gauge (often used in studies of inflation); (2) under what conditions the consistency relations are violated. In the non-relativistic limit, our derivation reproduces the Newtonian consistency relation discovered by Kehagias & Riotto and Peloso & Pietroni. More generally, there is an infinite set of consistency relations, as is known in ζ-gauge. There is a one-to-one correspondence between symmetries in the two gauges; in particular, the Newtonian consistency relation follows from the dilation and special conformal symmetries in ζ-gauge. We probe the robustness of the consistency relations by studying models of galaxy dynamics and biasing. We give a systematic list of conditions under which the consistency relations are violated; violations occur if the galaxy bias is non-local in an infrared divergent way. We emphasize the relevance of the adiabatic mode condition, as distinct from symmetry considerations. As a by-product of our investigation, we discuss a simple fluid Lagrangian for LSS.
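In the non-relativistic limit, the Newtonian consistency relation discussed above takes, schematically, the familiar form (sign and normalization conventions vary between references; primes denote correlators with the overall momentum-conserving delta function stripped off):

```latex
\lim_{q \to 0}\,
\langle \delta_{\mathbf q}(\tau)\, \delta_{\mathbf k_1}(\tau_1) \cdots \delta_{\mathbf k_N}(\tau_N) \rangle'
\;=\; - P_{\delta}(q,\tau) \sum_{a=1}^{N} \frac{D(\tau_a)}{D(\tau)}\,
\frac{\mathbf k_a \cdot \mathbf q}{q^{2}}\,
\langle \delta_{\mathbf k_1}(\tau_1) \cdots \delta_{\mathbf k_N}(\tau_N) \rangle' ,
```

where D is the linear growth factor and P_δ the linear power spectrum. The right-hand side encodes only the uniform displacement induced by the long mode, which is why the relation survives deep into the nonlinear regime unless the adiabatic-mode or locality conditions discussed in the paper are violated.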
Testing gravity using large-scale redshift-space distortions
NASA Astrophysics Data System (ADS)
Raccanelli, Alvise; Bertacca, Daniele; Pietrobon, Davide; Schmidt, Fabian; Samushia, Lado; Bartolo, Nicola; Doré, Olivier; Matarrese, Sabino; Percival, Will J.
2013-11-01
We use luminous red galaxies from the Sloan Digital Sky Survey (SDSS) II to test the cosmological structure growth in two alternatives to the standard Λ cold dark matter (ΛCDM)+general relativity (GR) cosmological model. We compare observed three-dimensional clustering in SDSS Data Release 7 (DR7) with theoretical predictions for the standard vanilla ΛCDM+GR model, unified dark matter (UDM) cosmologies and the normal branch Dvali-Gabadadze-Porrati (nDGP). In computing the expected correlations in UDM cosmologies, we derive a parametrized formula for the growth factor in these models. For our analysis we apply the methodology tested in Raccanelli et al. and use the measurements of Samushia et al. that account for survey geometry, non-linear and wide-angle effects and the distribution of pair orientation. We show that the estimate of the growth rate is potentially degenerate with wide-angle effects, meaning that extremely accurate measurements of the growth rate on large scales will need to take such effects into account. We use measurements of the zeroth and second-order moments of the correlation function from SDSS DR7 data and the Large Suite of Dark Matter Simulations (LasDamas), and perform a likelihood analysis to constrain the parameters of the models. Using information on the clustering up to rmax = 120 h-1 Mpc, and after marginalizing over the bias, we find, for UDM models, a speed of sound c∞ ≤ 6.1e-4, and, for the nDGP model, a cross-over scale rc ≥ 340 Mpc, at 95 per cent confidence level.
Analyzing large-scale proteomics projects with latent semantic indexing.
Klie, Sebastian; Martens, Lennart; Vizcaíno, Juan Antonio; Côté, Richard; Jones, Phil; Apweiler, Rolf; Hinneburg, Alexander; Hermjakob, Henning
2008-01-01
Since the advent of public data repositories for proteomics data, readily accessible results from high-throughput experiments have been accumulating steadily. Several large-scale projects in particular have contributed substantially to the amount of identifications available to the community. Despite the considerable body of information amassed, very few successful analyses have been performed and published on this data, leveling off the ultimate value of these projects far below their potential. A prominent reason published proteomics data is seldom reanalyzed lies in the heterogeneous nature of the original sample collection and the subsequent data recording and processing. To illustrate that at least part of this heterogeneity can be compensated for, we here apply a latent semantic analysis to the data contributed by the Human Proteome Organization's Plasma Proteome Project (HUPO PPP). Interestingly, despite the broad spectrum of instruments and methodologies applied in the HUPO PPP, our analysis reveals several obvious patterns that can be used to formulate concrete recommendations for optimizing proteomics project planning as well as the choice of technologies used in future experiments. It is clear from these results that the analysis of large bodies of publicly available proteomics data by noise-tolerant algorithms such as the latent semantic analysis holds great promise and is currently underexploited.
Scalable NIC-based reduction on large-scale clusters
Moody, A.; Fernández, J. C.; Petrini, F.; Panda, Dhabaleswar K.
2003-01-01
Many parallel algorithms require efficient support for reduction collectives. Over the years, researchers have developed optimal reduction algorithms by taking into account system size, data size, and the complexity of reduction operations. However, all of these algorithms have assumed that the reduction processing takes place on the host CPU. Modern Network Interface Cards (NICs) sport programmable processors with substantial memory and thus introduce a fresh variable into the equation. This raises the following interesting challenge: can we take advantage of modern NICs to implement fast reduction operations? In this paper, we take on this challenge in the context of large-scale clusters. Through experiments on the 960-node, 1920-processor ASCI Linux Cluster (ALC) located at the Lawrence Livermore National Laboratory, we show that NIC-based reductions indeed perform with reduced latency and improved consistency over host-based algorithms for the common case and that these benefits scale as the system grows. In the largest configuration tested (1812 processors) our NIC-based algorithm can sum a single-element vector in 73 μs with 32-bit integers and in 118 μs with 64-bit floating-point numbers. These results represent an improvement, respectively, of 121% and 39% with respect to the production-level MPI library.

Testing LSST Dither Strategies for Large-scale Structure Systematics
NASA Astrophysics Data System (ADS)
Awan, Humna; Gawiser, Eric J.; Kurczynski, Peter
2017-01-01
The Large Synoptic Survey Telescope (LSST) will start a ten-year survey of the southern sky in 2022. Since the telescope observing strategy can lead to artifacts in the observed data, we undertake an investigation of implementing large telescope-pointing offsets (called dithers) as a means to minimize the induced artifacts. We implement various types of dithers, varying in both implementation timescale and the dither geometry, and examine their effects on the r-band coadded depth after the 10-year survey. Then we propagate the depth fluctuations to galaxy counts fluctuations, which are a systematic for large-scale structure studies. We show that the observing strategies induce window function uncertainties which set a constraint on the level of information we can extract from an optimized survey to precisely measure Baryonic Acoustic Oscillations at high redshifts. We find that the best dither strategies lead to window function uncertainties well below the minimum statistical uncertainty after the 10 years of survey, hence not requiring any systematics correction methods. While the systematics level is considerably higher after the first year of the survey, dithering can play a critical role in reducing it. We also explore different cadences, and demonstrate that the best dither strategies minimize the window function uncertainties for various cadences.
Distribution probability of large-scale landslides in central Nepal
NASA Astrophysics Data System (ADS)
Timilsina, Manita; Bhandary, Netra P.; Dahal, Ranjan Kumar; Yatabe, Ryuichi
2014-12-01
Large-scale landslides in the Himalaya are defined as huge, deep-seated landslide masses that occurred in the geological past. They are widely distributed in the Nepal Himalaya. The steep topography and high local relief provide high potential for such failures, whereas the dynamic geology and adverse climatic conditions play a key role in the occurrence and reactivation of such landslides. The major geoscientific problems related to such large-scale landslides are 1) difficulties in their identification and delineation, 2) sources of small-scale failures, and 3) reactivation. Only a few scientific publications have been published concerning large-scale landslides in Nepal. In this context, the identification and quantification of large-scale landslides and their potential distribution are crucial. Therefore, this study explores the distribution of large-scale landslides in the Lesser Himalaya. It provides simple guidelines to identify large-scale landslides based on their typical characteristics and using a 3D schematic diagram. Based on the spatial distribution of landslides, geomorphological/geological parameters and logistic regression, an equation of large-scale landslide distribution is also derived. The equation is validated by applying it to another area. The area under the receiver operating characteristic (ROC) curve of the landslide distribution probability in the new area is 0.699, and the distribution probability values could explain > 65% of existing landslides. Therefore, the regression equation can be applied to areas of the Lesser Himalaya of central Nepal with similar geological and geomorphological conditions.
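The logistic-regression form of such a distribution-probability equation can be sketched as follows. The predictors and coefficients below are purely illustrative placeholders, not the values derived in the study:

```python
import numpy as np

def landslide_probability(X, beta):
    """Logistic-regression distribution probability:
    P = 1 / (1 + exp(-(b0 + b1*x1 + ... + bn*xn)))."""
    z = beta[0] + X @ beta[1:]
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical predictors per cell: [slope (deg), local relief (m), distance to fault (km)]
X = np.array([[35.0, 900.0, 2.0],
              [12.0, 250.0, 9.0]])
beta = np.array([-4.0, 0.08, 0.002, -0.15])  # illustrative coefficients only
p = landslide_probability(X, beta)
```

Validation on an independent area would then compare `p` against mapped landslides, e.g. via the area under the ROC curve.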
On optimal performance of nonlinear energy sinks in multiple-degree-of-freedom systems
NASA Astrophysics Data System (ADS)
Tripathi, Astitva; Grover, Piyush; Kalmár-Nagy, Tamás
2017-02-01
We study the problem of optimizing the performance of a nonlinear spring-mass-damper attached to a class of multiple-degree-of-freedom systems. We aim to maximize the rate of one-way energy transfer from the primary system to the attachment, and focus on impulsive excitation of a two-degree-of-freedom primary system with an essentially nonlinear attachment. The nonlinear attachment is shown to be able to perform as a 'nonlinear energy sink' (NES) by taking away energy from the primary system irreversibly for some types of impulsive excitations. Using perturbation analysis and exploiting separation of time scales, we perform dimensionality reduction of this strongly nonlinear system. Our analysis shows that efficient energy transfer to the nonlinear attachment in this system occurs for initial conditions close to a homoclinic orbit of the slow time-scale undamped system, a phenomenon that has been previously observed for the case of single-degree-of-freedom primary systems. Analytical formulae for optimal parameters for a given impulsive excitation input are derived. Generalization of this framework to systems with an arbitrary number of degrees of freedom of the primary system is also discussed. The performance of both linear and nonlinear optimally tuned attachments is compared. While NES performance is sensitive to the magnitude of the initial impulse, our results show that NES performance is more robust than that of a linear tuned mass damper to several parametric perturbations. Hence, our work provides evidence that homoclinic orbits of the underlying Hamiltonian system play a crucial role in efficient nonlinear energy transfers, even in high dimensional systems, and gives new insight into the robustness of systems with essential nonlinearity.
Algorithmic Approximation of Optimal Value Differential Stability Bounds in Nonlinear Programming,
1981-08-01
Sensitivity Analysis in Parametric Nonlinear Programming, Doctoral Dissertation, School of Engineering and Applied Science, The George Washington University... Differential Stability of the Optimal Value Function in Constrained Nonlinear Programming, Doctoral Dissertation, School of Engineering and Applied
Large scale stochastic spatio-temporal modelling with PCRaster
NASA Astrophysics Data System (ADS)
Karssenberg, Derek; Drost, Niels; Schmitz, Oliver; de Jong, Kor; Bierkens, Marc F. P.
2013-04-01
software from the eScience Technology Platform (eSTeP), developed at the Netherlands eScience Center. This will allow us to scale up to hundreds of machines, with thousands of compute cores. A key requirement is not to change the user experience of the software. PCRaster operations and the use of the Python framework classes should work in a similar manner on machines ranging from a laptop to a supercomputer. This enables a seamless transfer of models from small machines, where model development is done, to large machines used for large-scale model runs. Domain specialists from a large range of disciplines, including hydrology, ecology, sedimentology, and land use change studies, currently use the PCRaster Python software within research projects. Applications include global scale hydrological modelling and error propagation in large-scale land use change models. The software runs on MS Windows, Linux operating systems, and OS X.
Local and Regional Impacts of Large Scale Wind Energy Deployment
NASA Astrophysics Data System (ADS)
Michalakes, J.; Hammond, S.; Lundquist, J. K.; Moriarty, P.; Robinson, M.
2010-12-01
resources and upscaling large scale wind farm impact on local and regional climate. It will bridge localized and larger scale interactions of renewable energy generation with energy resource and grid management system control. By 2030, when 20 percent wind energy penetration is planned and exascale computing resources have become commonplace, we envision such a system spanning the entire mesoscale to sub-millimeter range of scales to provide a real-time computational and systems control capability to optimize renewable-based generation and grid distribution for efficiency while minimizing environmental impact.
State of the Art in Large-Scale Soil Moisture Monitoring
NASA Technical Reports Server (NTRS)
Ochsner, Tyson E.; Cosh, Michael Harold; Cuenca, Richard H.; Dorigo, Wouter; Draper, Clara S.; Hagimoto, Yutaka; Kerr, Yan H.; Larson, Kristine M.; Njoku, Eni Gerald; Small, Eric E.; Zreda, Marek G.
2013-01-01
Soil moisture is an essential climate variable influencing land atmosphere interactions, an essential hydrologic variable impacting rainfall runoff processes, an essential ecological variable regulating net ecosystem exchange, and an essential agricultural variable constraining food security. Large-scale soil moisture monitoring has advanced in recent years creating opportunities to transform scientific understanding of soil moisture and related processes. These advances are being driven by researchers from a broad range of disciplines, but this complicates collaboration and communication. For some applications, the science required to utilize large-scale soil moisture data is poorly developed. In this review, we describe the state of the art in large-scale soil moisture monitoring and identify some critical needs for research to optimize the use of increasingly available soil moisture data. We review representative examples of 1) emerging in situ and proximal sensing techniques, 2) dedicated soil moisture remote sensing missions, 3) soil moisture monitoring networks, and 4) applications of large-scale soil moisture measurements. Significant near-term progress seems possible in the use of large-scale soil moisture data for drought monitoring. Assimilation of soil moisture data for meteorological or hydrologic forecasting also shows promise, but significant challenges related to model structures and model errors remain. Little progress has been made yet in the use of large-scale soil moisture observations within the context of ecological or agricultural modeling. Opportunities abound to advance the science and practice of large-scale soil moisture monitoring for the sake of improved Earth system monitoring, modeling, and forecasting.
Optimal nonlinear estimation for aircraft flight control in wind shear
NASA Technical Reports Server (NTRS)
Mulgund, Sandeep S.
1994-01-01
The most recent results in an ongoing research effort at Princeton in the area of flight dynamics in wind shear are described. The first undertaking in this project was a trajectory optimization study. The flight path of a medium-haul twin-jet transport aircraft was optimized during microburst encounters on final approach. The assumed goal was to track a reference climb rate during an aborted landing, subject to a minimum airspeed constraint. The results demonstrated that the energy loss through the microburst significantly affected the qualitative nature of the optimal flight path. In microbursts of light to moderate strength, the aircraft was able to track the reference climb rate successfully. In severe microbursts, the minimum airspeed constraint in the optimization forced the aircraft to settle on a climb rate smaller than the target. A tradeoff was forced between the objectives of flight path tracking and stall prevention.
Optimization of Water Distribution and Water Quality by Genetic Algorithm and Nonlinear Programming
NASA Astrophysics Data System (ADS)
Tu, M.; Tsai, F. T.; Yeh, W. W.
2001-12-01
When managing a regional water distribution system, it is not only important to optimize water allocation but also to meet the desired water quality requirements. This paper develops a multicommodity flow model that can be used to optimize water distribution and water quality in a regional water supply system. Waters from different sources with different quality are considered as distinct commodities, which concurrently share a single water distribution system. Volumetric water blend is used to represent water quality in the proposed model. The multicommodity model is capable of handling two-way flow pipes, represented as undirected arcs, and the perfect mixing condition. Additionally, blending requirements are specified at certain control nodes within the water distribution system to ensure that downstream users receive the desired water quality. The developed multicommodity flow model is embedded in a nonlinear optimization model. To reduce nonlinearity and to improve convergence, a genetic algorithm (GA) is combined with a gradient-based algorithm to solve the nonlinearly constrained optimization model: the GA is used to search for the optimal direction for all undirected arcs in the system and is iteratively linked with a nonlinear programming solver. The proposed methodology was first tested and verified on a simplified hypothetical system and then applied to the regional water distribution system of the Metropolitan Water District of Southern California. The results obtained indicate that the optimization model can efficiently allocate waters from different sources with different quality to satisfy the blending requirements, the perfect mixing and two-way flow conditions.
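The outer-discrete/inner-continuous decomposition described above can be sketched on a toy problem. Everything here is hypothetical (a single two-way pipe and made-up convex costs), and with only one binary direction gene the GA degenerates to enumeration; real systems would evolve a population of direction vectors and call the NLP solver once per individual:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical two-source, one-demand toy system with a single two-way pipe.
DEMAND = 10.0

def inner_nlp(direction):
    """Solve the continuous flow allocation for a fixed pipe direction."""
    penalty = 0.5 if direction == 0 else 0.0   # e.g. extra pumping head one way
    def cost(q):
        q1, q2 = q                              # made-up convex supply costs
        return 0.5 * q1**2 + q1 + 0.3 * q2**2 + 2.0 * q2 + penalty
    cons = [{"type": "eq", "fun": lambda q: q[0] + q[1] - DEMAND}]
    res = minimize(cost, x0=[5.0, 5.0], bounds=[(0.0, None)] * 2, constraints=cons)
    return res.fun, res.x

# Outer "GA": enumerate the binary direction choices, keep the cheapest.
best_cost, best_flows = min((inner_nlp(d) for d in (0, 1)), key=lambda t: t[0])
```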
On stochastic optimal control of partially observable nonlinear quasi Hamiltonian systems.
Zhu, Wei-qiu; Ying, Zu-guang
2004-11-01
A stochastic optimal control strategy for partially observable nonlinear quasi Hamiltonian systems is proposed. The optimal control forces consist of two parts. The first part is determined by the conditions under which the stochastic optimal control problem of a partially observable nonlinear system is converted into that of a completely observable linear system. The second part is determined by solving the dynamical programming equation derived by applying the stochastic averaging method and stochastic dynamical programming principle to the completely observable linear control system. The response of the optimally controlled quasi Hamiltonian system is predicted by solving the averaged Fokker-Planck-Kolmogorov equation associated with the optimally controlled completely observable linear system and solving the Riccati equation for the estimated error of system states. An example is given to illustrate the procedure and effectiveness of the proposed control strategy.
Comfort improvement of a nonlinear suspension using global optimization and in situ measurements
NASA Astrophysics Data System (ADS)
Deprez, K.; Moshou, D.; Ramon, H.
2005-06-01
The health problems encountered by operators of off-road vehicles demonstrate that a lot of effort still has to be put into the design of effective seat and cabin suspensions. Owing to the nonlinear nature of the suspensions and the use of in situ measurements for the optimization, classical local optimization techniques are prone to getting stuck in local minima. Therefore this paper develops a method for optimizing nonlinear suspension systems based on in situ measurements, using the global optimization technique DIRECT to avoid local minima. Evaluation of the comfort improvement of the suspension was carried out using the objective comfort parameters used in standards. As a test case, the optimization of a hydropneumatic element that can serve as part of a cabin suspension for off-road machinery was performed.
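As an illustration of the approach, SciPy (1.8+) ships a DIRECT implementation that can be pointed at a measured comfort objective. The objective below is a made-up multimodal stand-in for an in-situ measured suspension response, not the paper's hydropneumatic model:

```python
import numpy as np
from scipy.optimize import direct

def comfort_objective(params):
    """Placeholder comfort metric (e.g., frequency-weighted RMS acceleration)
    over two suspension parameters; the sinusoidal term creates local minima
    of the kind that trap local optimizers."""
    k, c = params
    return (k - 2.0)**2 + (c - 0.5)**2 + 0.3 * np.sin(8.0 * k) * np.cos(6.0 * c)

# DIRECT needs only box bounds, no gradients, and samples globally.
res = direct(comfort_objective, [(0.1, 5.0), (0.01, 2.0)])
```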
NASA Technical Reports Server (NTRS)
Lan, C. Edward; Ge, Fuying
1989-01-01
Control system design for general nonlinear flight dynamic models is considered through numerical simulation. The design is accomplished through a numerical optimizer coupled with analysis of flight dynamic equations. The general flight dynamic equations are numerically integrated and dynamic characteristics are then identified from the dynamic response. The design variables are determined iteratively by the optimizer to optimize a prescribed objective function which is related to desired dynamic characteristics. Generality of the method allows nonlinear effects in aerodynamics and dynamic coupling to be considered in the design process. To demonstrate the method, nonlinear simulation models for F-5A and F-16 configurations are used to design dampers to satisfy specifications on flying qualities and control systems to prevent departure. The results indicate that the present method is simple in formulation and effective in satisfying the design objectives.
Issues in optimal parameter estimation for the nonlinear Muskingum flood routing model
NASA Astrophysics Data System (ADS)
Geem, Zong Woo
2014-03-01
This study answers two questions raised in parameter estimation optimization for the nonlinear Muskingum flood routing model. The first question is whether a new global optimum can still be found after the existing global optimum has already been found. In order to fairly verify this question, a standard routing procedure for the nonlinear Muskingum model, which has not been clearly described previously, is proposed. Because the routing procedure was coded in a spreadsheet, any researcher can easily test it after downloading it. The second question is why various approaches, such as the Lagrange multiplier method, Broyden-Fletcher-Goldfarb-Shanno (BFGS), genetic algorithms, harmony search and particle swarm optimization, have tackled only Wilson's data set in parameter estimation for the nonlinear Muskingum model; Wilson's data have a unique structure that differentiates them from other data sets. This study also provides various data sets to compare.
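The standard routing procedure for the nonlinear Muskingum model, with storage S = K(xI + (1 - x)O)^m, can be sketched as below. The explicit update scheme and the parameter values used in testing are illustrative, not the calibrated ones discussed in the paper:

```python
def muskingum_route(inflow, K, x, m, dt=1.0):
    """Route a flood hydrograph with the nonlinear Muskingum model,
    S = K * (x*I + (1 - x)*O)**m, via a simple explicit procedure."""
    outflow = [inflow[0]]                               # initial steady state: O1 = I1
    S = K * (x * inflow[0] + (1 - x) * outflow[0]) ** m
    for t in range(1, len(inflow)):
        S += dt * (inflow[t - 1] - outflow[t - 1])      # continuity over previous step
        # invert the storage relation for the current outflow
        O = ((S / K) ** (1.0 / m) - x * inflow[t]) / (1.0 - x)
        outflow.append(O)
    return outflow
```

Parameter estimation then searches over (K, x, m) to minimize the sum of squared deviations between routed and observed outflows.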
NASA Technical Reports Server (NTRS)
Zaychik, Kirill B.; Cardullo, Frank M.
2012-01-01
Telban and Cardullo have developed and successfully implemented the non-linear optimal motion cueing algorithm at the Visual Motion Simulator (VMS) at the NASA Langley Research Center in 2005. The latest version of the non-linear algorithm performed filtering of motion cues in all degrees-of-freedom except for pitch and roll. This manuscript describes the development and implementation of the non-linear optimal motion cueing algorithm for the pitch and roll degrees of freedom. Presented results indicate improved cues in the specified channels as compared to the original design. To further advance motion cueing in general, this manuscript describes modifications to the existing algorithm, which allow for filtering at the location of the pilot's head as opposed to the centroid of the motion platform. The rationale for such modifications to the cueing algorithms is that the location of the pilot's vestibular system must be taken into account, as opposed to the offset of the centroid of the cockpit relative to the center of rotation alone. Results provided in this report suggest improved performance of the motion cueing algorithm.
Optimization of the dynamic behavior of strongly nonlinear heterogeneous materials
NASA Astrophysics Data System (ADS)
Herbold, Eric B.
New aspects of strongly nonlinear wave and structural phenomena in granular media are developed numerically, theoretically and experimentally. One-dimensional chains of particles and compressed powder composites are the two main types of materials considered here. Typical granular assemblies consist of linearly elastic spheres or layers of masses and effective nonlinear springs in one-dimensional columns for dynamic testing. These materials are highly sensitive to initial and boundary conditions, making them useful for acoustic and shock-mitigating applications. One-dimensional assemblies of spherical particles are examples of strongly nonlinear systems with unique properties. For example, if initially uncompressed, these materials have a sound speed equal to zero (sonic vacuum), supporting strongly nonlinear compression solitary waves with a finite width. Different types of assembled metamaterials will be presented with a discussion of the material's response to static compression. The acoustic diode effect will be presented, which may be useful in shock mitigation applications. Systems with controlled dissipation will also be discussed from an experimental and theoretical standpoint emphasizing the critical viscosity that defines the transition from an oscillatory to monotonous shock profile. The dynamic compression of compressed powder composites may lead to self-organizing mesoscale structures in two and three dimensions. A reactive granular material composed of a compressed mixture of polytetrafluoroethylene (PTFE), tungsten (W) and aluminum (Al) fine-grain powders exhibit this behavior. Quasistatic, Hopkinson bar, and drop-weight experiments show that composite materials with a high porosity and fine metallic particles exhibit a higher strength than less porous mixtures with larger particles, given the same mass fraction of constituents. A two-dimensional Eulerian hydrocode is implemented to investigate the mechanical deformation and failure of the compressed
Optimal Fitting of Non-linear Detector Pulses with Nonstationary Noise
NASA Technical Reports Server (NTRS)
Fixsen, D. J.; Moseley, S. H.; Cabera, B.; Figueroa-Felicianco, E.; Oegerle, William (Technical Monitor)
2002-01-01
Optimal extraction of pulses of constant known shape from a time series with stationary noise is well understood and widely used in detection applications. Applications where high resolution is required over a wide range of input signal amplitudes use much of the dynamic range of the sensor. The noise will in general vary over this signal range, and the response may be a nonlinear function of the energy input. We present an optimal least squares procedure for inferring input energy in such a detector with nonstationary noise and nonlinear energy response.
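For the special case of independent (diagonal-covariance) nonstationary noise and a linear response, the optimal least-squares amplitude estimate reduces to a weighted projection onto the pulse template. This simplified sketch omits the nonlinear energy response treated in the paper:

```python
import numpy as np

def fit_pulse_amplitude(data, template, sigma):
    """Generalized least-squares amplitude estimate for a known pulse shape
    with independent, nonstationary noise of per-sample std `sigma`:
        a_hat = sum(w*t*d) / sum(w*t^2),  w = 1/sigma^2,
    with estimator variance 1 / sum(w*t^2)."""
    w = 1.0 / sigma**2
    a_hat = np.sum(w * template * data) / np.sum(w * template**2)
    var = 1.0 / np.sum(w * template**2)
    return a_hat, var
```

Samples where the noise is large are automatically down-weighted, which is the key difference from the stationary optimal filter.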
Learning networks for sustainable, large-scale improvement.
McCannon, C Joseph; Perla, Rocco J
2009-05-01
Large-scale improvement efforts known as improvement networks offer structured opportunities for exchange of information and insights into the adaptation of clinical protocols to a variety of settings.
Modified gravity and large scale flows, a review
NASA Astrophysics Data System (ADS)
Mould, Jeremy
2017-02-01
Large scale flows have been a challenging feature of cosmography ever since galaxy scaling relations came on the scene 40 years ago. The next generation of surveys will offer a serious test of the standard cosmology.
Needs, opportunities, and options for large scale systems research
Thompson, G.L.
1984-10-01
The Office of Energy Research was recently asked to perform a study of Large Scale Systems in order to facilitate the development of a true large systems theory. It was decided to ask experts in the fields of electrical engineering, chemical engineering and manufacturing/operations research for their ideas concerning large scale systems research. The author was asked to distribute a questionnaire among these experts to find out their opinions concerning recent accomplishments and future research directions in large scale systems research. He was also requested to convene a conference which included three experts in each area as panel members to discuss the general area of large scale systems research. The conference was held on March 26--27, 1984 in Pittsburgh with nine panel members, and 15 other attendees. The present report is a summary of the ideas presented and the recommendations proposed by the attendees.
Optimization of Nonlinear Transport-Production Task of Medical Waste
NASA Astrophysics Data System (ADS)
Michlowicz, Edward
2012-09-01
The paper addresses the optimization of transport-production tasks for the processing of medical waste. For an existing network of collection points and processing plants, the optimal allocation of tasks, accounting for the cost of transport to the respective plants, has to be determined. It was assumed that the functions determining the processing costs are polynomials of the second degree. To solve the problem, a program written in the MATLAB environment, implementing an equalization algorithm based on marginal cost (JCC), was used.
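The marginal-cost equalization step for quadratic (second-degree polynomial) processing costs can be sketched in closed form. This ignores transport costs and nonnegativity constraints, so it shows only the core allocation idea, not the paper's full algorithm:

```python
def equalize_marginal_costs(a, b, Q):
    """Allocate a total quantity Q across plants with quadratic costs
    c_i(q) = a_i*q**2 + b_i*q by equalizing marginal costs 2*a_i*q + b_i.
    Solving 2*a_i*q_i + b_i = lam with sum(q_i) = Q gives lam in closed form."""
    lam = (Q + sum(bi / (2.0 * ai) for ai, bi in zip(a, b))) \
          / sum(1.0 / (2.0 * ai) for ai in a)
    return [(lam - bi) / (2.0 * ai) for ai, bi in zip(a, b)]
```

At the optimum every plant operates at the same marginal cost, so no reallocation of waste between plants can lower the total cost.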
Bayesian hierarchical model for large-scale covariance matrix estimation.
Zhu, Dongxiao; Hero, Alfred O
2007-12-01
Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.
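A minimal sketch of the general idea: shrinking the sample covariance toward a diagonal target to reduce overfitting-driven variance. The paper's Bayesian hierarchical estimator is more elaborate than this illustration, which just shows the regularization effect:

```python
import numpy as np

def shrink_covariance(X, alpha):
    """Convex combination of the sample covariance and its diagonal target:
    off-diagonal entries are shrunk by (1 - alpha), trading a little bias
    for a large variance reduction when samples are scarce."""
    S = np.cov(X, rowvar=False)
    target = np.diag(np.diag(S))
    return (1.0 - alpha) * S + alpha * target
```

Hierarchical Bayesian estimators achieve a similar shrinkage automatically, with the amount of pooling inferred from the data rather than fixed by `alpha`.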
Disentangling the dynamic core: a research program for a neurodynamics at the large-scale.
Le Van Quyen, Michel
2003-01-01
My purpose in this paper is to sketch a research direction based on Francisco Varela's pioneering work in neurodynamics (see also Rudrauf et al. 2003, in this issue). Very early on he argued that the internal coherence of every mental-cognitive state lies in the global self-organization of the brain activities at the large-scale, constituting a fundamental pole of integration called here a "dynamic core". Recent neuroimaging evidence appears to broadly support this hypothesis and suggests that a global brain dynamics emerges at the large scale level from the cooperative interactions among widely distributed neuronal populations. Despite a growing body of evidence supporting this view, our understanding of these large-scale brain processes remains hampered by the lack of a theoretical language for expressing these complex behaviors in dynamical terms. In this paper, I propose a rough cartography of a comprehensive approach that offers a conceptual and mathematical framework to analyze spatio-temporal large-scale brain phenomena. I emphasize how these nonlinear methods can be applied, what property might be inferred from neuronal signals, and where one might productively proceed for the future. This paper is dedicated, with respect and affection, to the memory of Francisco Varela.
The three-point function as a probe of models for large-scale structure
NASA Technical Reports Server (NTRS)
Frieman, Joshua A.; Gaztanaga, Enrique
1994-01-01
We analyze the consequences of models of structure formation for higher order (n-point) galaxy correlation functions in the mildly nonlinear regime. Several variations of the standard Omega = 1 cold dark matter model with scale-invariant primordial perturbations have recently been introduced to obtain more power on large scales, R(sub p) is approximately 20/h Mpc, e.g., low matter-density (nonzero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. We show that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale dependence leads to a dramatic decrease of the hierarchical amplitudes Q(sub J) at large scales, r is greater than or approximately R(sub p). Current observational constraints on the three-point amplitudes Q(sub 3) and S(sub 3) can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.
Simplified radiation and convection treatments for large-scale tropical atmospheric modeling
NASA Astrophysics Data System (ADS)
Chou, Chia
1997-05-01
A physical parameterization package is developed for intermediate tropical atmospheric models, i.e., models slightly less complex than full general circulation models (GCMs). This package includes a linearized longwave radiation scheme, a simplified parameterization for surface solar radiation, and a cloudiness prediction scheme. A quantity that measures the net large-scale vertical stratification in deep convective regions, the gross moist stability, is estimated from observations. Using a Green's function method, the longwave radiation scheme is linearized from a fully nonlinear scheme used in GCMs. This includes the radiative flux dependence on large-scale variables, such as temperature, moisture, cloud fraction, and cloud top. A comparison with the fully nonlinear scheme in simulating tropical climatology, seasonal variations, and interannual variability is carried out using the observed large-scale variables as input. For these applications, the linearized scheme accurately reproduces the nonlinear results, and it can be easily applied in atmospheric models. The simplified solar radiation scheme is used to calculate surface solar irradiance as a function of cloud fraction and solar zenith angle. Cloud optical thickness is fixed for each cloud type, and cloud albedo is assumed to depend linearly on solar zenith angle. Comparison is made with two satellite-derived data sets. The cloudiness prediction scheme consists of empirical relations for cloudiness associated with deep convection, and is appropriate for long Reynolds-averaging intervals. Deep cloud can be estimated by large-scale precipitation in the tropics. Deep cloud and cirrostratus/cirrocumulus corresponding to tower and anvil clouds have a linear relation. Cirrus cloud fraction is calculated by a 2-D prognostic cloud ice budget equation. A deep-cloud-top-temperature postulate is used for parameterizing the cirrus source. The data analysis yields the physical hypothesis that deep cloud top temperature
A study of MLFMA for large-scale scattering problems
NASA Astrophysics Data System (ADS)
Hastriter, Michael Larkin
This research is centered in computational electromagnetics with a focus on solving large-scale problems accurately in a timely fashion using first-principles physics. Error control of the translation operator in 3-D is shown. A parallel implementation of the multilevel fast multipole algorithm (MLFMA) was studied in terms of parallel efficiency and scaling. The large-scale scattering program (LSSP), based on the ScaleME library, was used to solve ultra-large-scale problems including a 200λ sphere with 20 million unknowns. As these large-scale problems were solved, techniques were developed to accurately estimate the memory requirements. Careful memory management is needed in order to solve these massive problems. The study of MLFMA in large-scale problems revealed significant errors that stemmed from inconsistencies in constants used by different parts of the algorithm. These were fixed to produce the most accurate data possible for large-scale surface scattering problems. Data was calculated on a missile-like target using both high-frequency methods and MLFMA. This data was compared and analyzed to determine possible strategies to increase data acquisition speed and accuracy through hybridization of multiple computation methods.
NASA Astrophysics Data System (ADS)
Hocker, David; Yan, Julia; Rabitz, Herschel
2016-05-01
Bose-Einstein condensates (BECs) offer the potential to examine quantum behavior at large length and time scales, as well as forming promising candidates for quantum technology applications. Thus, the manipulation of BECs using control fields is a topic of prime interest. We consider BECs in the mean-field model of the Gross-Pitaevskii equation (GPE), which contains linear and nonlinear features, both of which are subject to control. In this work we report successful optimal control simulations of a one-dimensional GPE by modulation of the linear and nonlinear terms to stimulate transitions into excited coherent modes. The linear and nonlinear controls are allowed to freely vary over space and time to seek their optimal forms. The determination of the excited coherent modes targeted for optimization is numerically performed through an adaptive imaginary time propagation method. Numerical simulations are performed for optimal control of mode-to-mode transitions between the ground coherent mode and the excited modes of a BEC trapped in a harmonic well. The results show greater than 99 % success for nearly all trials utilizing reasonable initial guesses for the controls, and analysis of the optimal controls reveals primarily direct transitions between initial and target modes. The success of using solely the nonlinearity term as a control opens up further research toward exploring novel control mechanisms inaccessible to linear Schrödinger-type systems.
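The imaginary time propagation step mentioned above can be illustrated in a minimal form. The sketch below is not the paper's adaptive method: it is a plain split-step imaginary-time solver for the 1-D GPE with an assumed harmonic trap (units hbar = m = omega = 1); with the nonlinear coefficient g set to zero it relaxes to the harmonic-oscillator ground mode, whose energy is 1/2.

```python
import numpy as np

def ground_state_itp(n=256, box=16.0, dt=0.005, steps=4000, g=0.0):
    """Split-step imaginary-time propagation for the 1-D GPE
    i psi_t = -0.5 psi_xx + 0.5 x^2 psi + g |psi|^2 psi.
    With g = 0 this is the harmonic oscillator (ground energy 0.5)."""
    x = np.linspace(-box/2, box/2, n, endpoint=False)
    dx = x[1] - x[0]
    k = 2*np.pi*np.fft.fftfreq(n, d=dx)          # angular wavenumbers
    psi = np.exp(-x**2).astype(complex)          # arbitrary initial guess
    psi /= np.sqrt(np.sum(np.abs(psi)**2)*dx)
    for _ in range(steps):
        V = 0.5*x**2 + g*np.abs(psi)**2
        psi *= np.exp(-0.5*dt*V)                               # half potential step
        psi = np.fft.ifft(np.exp(-dt*0.5*k**2)*np.fft.fft(psi))  # full kinetic step
        psi *= np.exp(-0.5*dt*V)                               # half potential step
        psi /= np.sqrt(np.sum(np.abs(psi)**2)*dx)              # renormalize
    dpsi = np.fft.ifft(1j*k*np.fft.fft(psi))
    energy = (0.5*np.sum(np.abs(dpsi)**2)*dx
              + np.sum(0.5*x**2*np.abs(psi)**2)*dx
              + 0.5*g*np.sum(np.abs(psi)**4)*dx)
    return x, psi, energy.real
```

Transitions into excited modes, as in the paper, would additionally require time-dependent control terms; this sketch only shows how a target coherent mode can be obtained by relaxation.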
Optimal finite-thrust spacecraft trajectories using direct transcription and nonlinear programming
NASA Astrophysics Data System (ADS)
Enright, Paul James
1991-08-01
A class of methods for the numerical solution of optimal control problems is analyzed and applied to the optimization of finite-thrust spacecraft trajectories. These methods use discrete approximations to the state and control histories, together with a discretization of the equations of motion, to derive a mathematical programming problem that approximates the optimal control problem and is solved numerically. This conversion is referred to as transcription. Recent advances in nonlinear programming have made it feasible to solve the resulting heavily constrained nonlinear programming problem, referred to as the direct transcription of the optimal control problem; the solution method is accordingly called direct transcription and nonlinear programming. A recently developed method for solving optimal trajectory problems uses a piecewise-polynomial representation of the state and control variables and enforces the equations of motion via a collocation procedure, resulting in a nonlinear programming problem that is solved numerically. This method is identified as belonging to the general class of direct transcription methods described above. Also, a new direct transcription method, which discretizes the equations of motion using a parallel-shooting approach, is developed. Both methods are applied to thrust-limited spacecraft trajectory problems, including finite-thrust transfer, rendezvous, and orbit insertion, a low-thrust escape, and a low-thrust Earth-moon transfer.
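The transcription idea can be made concrete with a small, hedged example: trapezoidal collocation of a minimum-energy double-integrator transfer (a stand-in problem, not one of the paper's trajectory cases), solved here with SciPy's SLSQP rather than the solvers of the period. For this problem the analytic optimal control is u(t) = 6 - 12t with cost 12.

```python
import numpy as np
from scipy.optimize import minimize

def transcribe_double_integrator(N=30):
    """Direct transcription of  min \int u^2 dt  s.t.  x' = v, v' = u,
    x(0)=v(0)=0, x(1)=1, v(1)=0, via trapezoidal collocation on N nodes."""
    h = 1.0/(N - 1)
    def unpack(z):
        return z[:N], z[N:2*N], z[2*N:]          # states x, v and control u
    def cost(z):
        _, _, u = unpack(z)
        return h*np.sum((u[:-1]**2 + u[1:]**2)/2)  # trapezoid rule on u^2
    def defects(z):
        # collocation defects enforcing the discretized equations of motion
        x, v, u = unpack(z)
        dx = x[1:] - x[:-1] - h*(v[:-1] + v[1:])/2
        dv = v[1:] - v[:-1] - h*(u[:-1] + u[1:])/2
        bc = [x[0], v[0], x[-1] - 1.0, v[-1]]     # boundary conditions
        return np.concatenate([dx, dv, bc])
    sol = minimize(cost, np.zeros(3*N), method="SLSQP",
                   constraints={"type": "eq", "fun": defects},
                   options={"maxiter": 500})
    return sol, unpack(sol.x)
```

Everything after `defects` is a plain nonlinear program; any NLP solver able to handle equality constraints could replace SLSQP.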
NASA Astrophysics Data System (ADS)
Tian, Y. P.; Wang, Y.; Jin, X. L.; Huang, Z. L.
2014-09-01
A nonlinear electromagnetic energy harvester directly powering a load resistance is considered in this manuscript. The nonlinearity includes the cubic stiffness and the unavoidable Coulomb friction, and the base excitation is modeled as Gaussian white noise. Starting directly from the coupled equations, a novel procedure for evaluating the random responses and the mean output power is developed through the generalized harmonic transformation and the equivalent nonlinearization technique. The dependence of the optimal ratio of the load resistance to the internal resistance, and of the associated optimal mean output power, on the internal resistance of the coil is established. The principle of impedance matching is correct only when the internal resistance is infinite, and the optimal mean output power approaches an upper limit as the internal resistance tends to zero. The influence of the Coulomb friction on the optimal resistance ratio and the optimal mean output power is also investigated. It is shown that the Coulomb friction barely changes the optimal resistance ratio, although it markedly reduces the optimal mean output power.
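The classical impedance-matching baseline that the paper re-examines is easy to verify numerically. The sketch below covers only the linear, deterministic case (constant-EMF source), where the power delivered to the load, P = E^2 R_L / (R_i + R_L)^2, peaks exactly at R_L = R_i; the paper's result is that this textbook ratio fails for the nonlinear harvester except in the limit of infinite internal resistance.

```python
import numpy as np

def load_power(r_load, r_int, emf=1.0):
    """Power delivered to the load by a linear source with internal
    resistance r_int: P = emf^2 * R_L / (R_i + R_L)^2."""
    return emf**2 * r_load / (r_int + r_load)**2

def optimal_ratio(r_int, emf=1.0):
    """Numerically locate the load/internal resistance ratio that
    maximizes the delivered power (expected: 1.0 in the linear case)."""
    ratios = np.linspace(0.01, 5.0, 5000)
    p = load_power(ratios*r_int, r_int, emf)
    return ratios[np.argmax(p)]
```

At the matched point the delivered power is emf^2 / (4 R_i), the familiar maximum-power-transfer limit.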
Optimal and Suboptimal Estimation of Nonlinear Stochastic Systems.
1984-01-09
ITEM #19, ABSTRACT, CONTINUED: the optimal decentralized gain for... CONTROL, New Delhi, India, January 1982. 20. B. Hanzon and S.I. Marcus, "Riemannian Metrics on Spaces of Stable Linear Systems, With Applications to
NASA Technical Reports Server (NTRS)
Stahara, S. S.
1984-01-01
An investigation was carried out to complete the preliminary development of a combined perturbation/optimization procedure and associated computational code for designing optimized blade-to-blade profiles of turbomachinery blades. The overall purpose of the procedures developed is to provide demonstration of a rapid nonlinear perturbation method for minimizing the computational requirements associated with parametric design studies of turbomachinery flows. The method combines the multiple parameter nonlinear perturbation method, successfully developed in previous phases of this study, with the NASA TSONIC blade-to-blade turbomachinery flow solver, and the COPES-CONMIN optimization procedure into a user's code for designing optimized blade-to-blade surface profiles of turbomachinery blades. Results of several design applications and a documented version of the code together with a user's manual are provided.
NASA Technical Reports Server (NTRS)
Li, Xiaofan; Finkbeiner, Joshua; Raman, Ganesh; Daniels, Christopher; Steinetz, Bruce M.
2003-01-01
Optimizing resonator shapes for maximizing the ratio of maximum to minimum gas pressure at an end of the resonator is investigated numerically. It is well known that the resonant frequencies and the nonlinear standing waveform in an acoustical resonator strongly depend on the resonator geometry. A quasi-Newton type scheme was used to find optimized axisymmetric resonator shapes achieving the maximum pressure compression ratio with an acceleration of constant amplitude. The acoustical field was solved using a one-dimensional model, and the resonance frequency shift and hysteresis effects were obtained through an automation scheme based on a continuation method. Results are presented for optimizing three types of geometry: a cone, a horn-cone and a half-cosine shape. For each type, different optimized shapes were found when starting with different initial guesses. Further, the one-dimensional model was modified to study the effect of an axisymmetric central blockage on the nonlinear standing wave.
Using nonlinear optimization methods to reverse engineer liner material properties from EFP tests
Murphy, M.J.; Baker, E.L.
1995-02-27
The utility of variable metric nonlinear optimization methods for reverse engineering liner material constitutive modeling parameters is described. We use an effective new code created by coupling the nonlinear optimization code NLQPEB with the DYNA2D finite element hydrocode. The optimization code determines the "best" set of liner material properties by running DYNA2D in a loop, varying the liner model constitutive parameters, and minimizing the difference between the EFP profiles of the calculation and the experiment. The results of four different EFP warhead tests with the same copper liner material are used to determine material parameters for the Steinberg-Guinan, Johnson-Cook, and Armstrong-Zerilli models. In a companion paper we describe the successful application of this methodology to the forward engineering of liner contours to achieve desired EFP shapes. The methodology of utilizing a coupled optimization/finite element code provides a significant improvement in warhead designs and the warhead design process.
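The "simulator in an optimization loop" pattern described above can be sketched generically. DYNA2D and NLQPEB are not scriptable here, so the sketch substitutes a toy analytic profile model for the hydrocode and SciPy's `least_squares` for the variable-metric optimizer; the structure of the loop (vary parameters, rerun the simulation, minimize the misfit with experiment) is the same.

```python
import numpy as np
from scipy.optimize import least_squares

def simulate_profile(params, t):
    """Stand-in for the hydrocode run: a toy two-parameter 'profile'.
    In the paper's setup this call would launch DYNA2D with the trial
    constitutive parameters; here it is an analytic placeholder."""
    a, b = params
    return a*np.exp(-b*t)

def fit_material_parameters(t, measured, p0=(1.0, 1.0)):
    """Vary the model parameters to minimize the residual between the
    simulated and experimental profiles (the loop in the abstract)."""
    res = least_squares(lambda p: simulate_profile(p, t) - measured, p0)
    return res.x

# synthetic "experiment" generated from known parameters
true_params = (2.5, 0.7)
t = np.linspace(0.0, 5.0, 50)
measured = simulate_profile(true_params, t)
```

With noise-free synthetic data the fit recovers the generating parameters; with real experimental profiles one would instead expect a best-fit compromise across the four tests.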
Subdifferential of Optimal Value Functions in Nonlinear Infinite Programming
Huy, N. Q.; Giang, N. D.; Yao, J.-C.
2012-02-15
This paper presents an exact formula for computing the normal cones of the constraint set mapping including the Clarke normal cone and the Mordukhovich normal cone in infinite programming under the extended Mangasarian-Fromovitz constraint qualification condition. Then, we derive an upper estimate as well as an exact formula for the limiting subdifferential of the marginal/optimal value function in a general Banach space setting.
A composite Chebyshev finite difference method for nonlinear optimal control problems
NASA Astrophysics Data System (ADS)
Marzban, H. R.; Hoseini, S. M.
2013-06-01
In this paper, a composite Chebyshev finite difference method is introduced and is successfully employed for solving nonlinear optimal control problems. The proposed method is an extension of the Chebyshev finite difference scheme. This method can be regarded as a non-uniform finite difference scheme and is based on a hybrid of block-pulse functions and Chebyshev polynomials using the well-known Chebyshev-Gauss-Lobatto points. The convergence of the method is established. The nice properties of hybrid functions are then used to convert the nonlinear optimal control problem into a nonlinear mathematical programming one that can be solved efficiently by a globally convergent algorithm. The validity and applicability of the proposed method are demonstrated through some numerical examples. The method is simple, easy to implement and yields very accurate results.
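The Chebyshev-Gauss-Lobatto grid underlying the scheme is straightforward to construct. The sketch below builds the CGL points and the standard Chebyshev differentiation matrix (Trefethen's formulation); it illustrates only the spectral-differentiation ingredient, not the authors' composite block-pulse/Chebyshev hybrid.

```python
import numpy as np

def cgl_points(n):
    """Chebyshev-Gauss-Lobatto points x_j = cos(j*pi/n), j = 0..n, on [-1, 1]."""
    return np.cos(np.pi*np.arange(n + 1)/n)

def cheb_diff_matrix(n):
    """Spectral differentiation matrix D on the CGL grid, so that
    (D @ f(x)) approximates f'(x) with spectral accuracy."""
    x = cgl_points(n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0)**np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0/c) / (dX + np.eye(n + 1))  # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                    # diagonal via row sums
    return D, x
```

On this grid, smooth functions are differentiated to near machine precision with only a handful of points, which is what makes Chebyshev-based transcription of optimal control problems attractive.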
Non-linear modelling and optimal control of a hydraulically actuated seismic isolator test rig
NASA Astrophysics Data System (ADS)
Pagano, Stefano; Russo, Riccardo; Strano, Salvatore; Terzo, Mario
2013-02-01
This paper investigates the modelling, parameter identification and control of a unidirectional hydraulically actuated seismic isolator test rig. The plant is characterized by non-linearities such as the valve dead zone and friction. A non-linear model is derived and then employed for parameter identification. The results concerning the model validation are illustrated, and they fully confirm the effectiveness of the proposed model. The testing procedure of the isolation systems is based on the definition of a target displacement time history of the sliding table; consequently, the precision of the table positioning is of primary importance. In order to minimize the test rig tracking error, a suitable control system has to be adopted. The system non-linearities severely limit the performance of classical linear control, and a non-linear control is therefore adopted. The test rig mathematical model is employed for a non-linear control design that minimizes the error between the target table position and the current one. The controller synthesis is carried out without taking any specimen into account. The proposed approach consists of a non-linear optimal control based on the state-dependent Riccati equation (SDRE). Numerical simulations have been performed in order to evaluate the soundness of the designed control with and without the specimen under test. The results confirm that the performance of the proposed non-linear controller is not invalidated by the presence of the specimen.
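The SDRE idea can be sketched on a toy plant. Assuming a Duffing-type system x1' = x2, x2' = -x1^3 + u (not the hydraulic rig model), the state-dependent factorization A(x) = [[0, 1], [-x1^2, 0]] lets a standard continuous-time algebraic Riccati solver supply a gain that is refreshed at every state:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def sdre_gain(x, q=1.0, r=1.0):
    """One SDRE step: factor the dynamics as x' = A(x)x + Bu, solve the
    algebraic Riccati equation at the current state, return the gain K
    so that u = -K x."""
    A = np.array([[0.0, 1.0], [-x[0]**2, 0.0]])
    B = np.array([[0.0], [1.0]])
    Q = q*np.eye(2)
    R = np.array([[r]])
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

def sdre_simulate(x0, dt=0.01, steps=2000):
    """Closed-loop Euler simulation; the gain is recomputed every step."""
    x = np.array(x0, float)
    for _ in range(steps):
        u = float(-sdre_gain(x) @ x)
        x = x + dt*np.array([x[1], -x[0]**3 + u])
    return x
```

Near the origin this reduces to a plain LQR; away from it, the state-dependent factorization feeds the nonlinearity into the Riccati solve, which is the mechanism the paper exploits for the rig.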
Application of multi-objective nonlinear optimization technique for coordinated ramp-metering
Haj Salem, Habib; Farhi, Nadir; Lebacque, Jean Patrick
2015-03-10
This paper aims at developing a multi-objective nonlinear optimization algorithm applied to coordinated motorway ramp metering. The multi-objective function includes two components: traffic and safety. Off-line simulation studies were performed on the A4 motorway in France, including four on-ramps.
A hybrid symbolic/finite-element algorithm for solving nonlinear optimal control problems
NASA Technical Reports Server (NTRS)
Bless, Robert R.; Hodges, Dewey H.
1991-01-01
The general code described is capable of solving difficult nonlinear optimal control problems by using finite elements and a symbolic manipulator. Quick and accurate solutions are obtained with a minimum of user interaction. Since no user programming is required for most problems, there are tremendous savings to be gained in terms of time and money.
Nonlinear stability in reaction-diffusion systems via optimal Lyapunov functions
NASA Astrophysics Data System (ADS)
Lombardo, S.; Mulone, G.; Trovato, M.
2008-06-01
We define optimal Lyapunov functions to study nonlinear stability of constant solutions to reaction-diffusion systems. A computable and finite radius of attraction for the initial data is obtained. Applications are given to the well-known Brusselator model and a three-species model for the spatial spread of rabies among foxes.
Report of the Workshop on Petascale Systems Integration for LargeScale Facilities
Kramer, William T.C.; Walter, Howard; New, Gary; Engle, Tom; Pennington, Rob; Comes, Brad; Bland, Buddy; Tomlison, Bob; Kasdorf, Jim; Skinner, David; Regimbal, Kevin
2007-10-01
There are significant issues regarding large-scale system integration that are not being addressed in other forums such as current research portfolios or vendor user groups. Unfortunately, the issues in the area of large-scale system integration often fall into a netherworld: not research, not facilities, not procurement, not operations, not user services. Taken together, these issues, along with the impact of sub-optimal integration technology, mean that the time required to deploy, integrate and stabilize a large-scale system may consume up to 20 percent of the useful life of such systems. Improving the state of the art for large-scale systems integration has the potential to increase the scientific productivity of these systems. Sites have significant expertise, but there are no easy ways to leverage this expertise among them. Many issues inhibit the sharing of information, including available time and effort, as well as issues with sharing proprietary information. Vendors also benefit in the long run from the solutions to issues detected during site testing and integration. There is a great deal of enthusiasm for making large-scale system integration a full-fledged partner along with the other major thrusts supported by funding agencies in the definition, design, and use of petascale systems. Integration technology and issues should have a full 'seat at the table' as petascale and exascale initiatives and programs are planned. The workshop attendees identified a wide range of issues and suggested paths forward. Pursuing these with funding opportunities and innovation offers the opportunity to dramatically improve the state of large-scale system integration.
NASA Astrophysics Data System (ADS)
Chasalevris, Athanasios; Dohnal, Fadi
2014-03-01
In large-scale rotating machinery, the resonance amplitude during the passage through resonance is a matter of concern because of its influence on the surrounding environment of the rotational system and its foundation. In this paper, a variable geometry journal bearing (VGJB), recently patented, is applied to the mounting of a large-scale rotor-bearing system operating in the medium-speed range. The simulation of the rotor-bearing system incorporates a recent method for simulating a multi-segment continuous rotor in combination with nonlinear bearing forces. The use of the current bearing gives results that encourage the use of such a bearing in rotating machinery, since the vibration amplitude during the passage through the critical speed can be reduced by up to 60-70%. In the presented study, the developed amplitude and the rotor stresses are severely reduced compared to those of the system with conventional cylindrical journal bearings during a virtual start-up of the system.
Generation and saturation of large-scale flows in flute turbulence
Sandberg, I.; Isliker, H.; Pavlenko, V. P.; Hizanidis, K.; Vlahos, L.
2005-03-01
The excitation and suppression of large-scale anisotropic modes during the temporal evolution of a magnetic-curvature-driven electrostatic flute instability are numerically investigated. The formation of streamerlike structures is attributed to the linear development of the instability while the subsequent excitation of the zonal modes is the result of the nonlinear coupling between linearly grown flute modes. When the amplitudes of the zonal modes become of the same order as that of the streamer modes, the flute instabilities get suppressed and poloidal (zonal) flows dominate. In the saturated state that follows, the dominant large-scale modes of the potential and the density are self-organized in different ways, depending on the value of the ion temperature.
Robust Optimal Stopping-Time Control for Nonlinear Systems
Ball, J.A.; Chudoung, J.; Day, M.V.
2002-10-01
We formulate a robust optimal stopping-time problem for a state-space system and give the connection between various notions of lower value function for the associated games (and storage function for the associated dissipative system) with solutions of the appropriate variational inequality (VI) (the analogue of the Hamilton-Jacobi-Bellman-Isaacs equation for this setting). We show that the stopping-time rule can be obtained by solving the VI in the viscosity sense and a positive definite supersolution of the VI can be used for stability analysis.
Recursive architecture for large-scale adaptive system
NASA Astrophysics Data System (ADS)
Hanahara, Kazuyuki; Sugiyama, Yoshihiko
1994-09-01
'Large scale' is one of the major trends in recent engineering research and development, especially in the field of aerospace structural systems. The term usually refers to the physical size of an artifact, but it also implies the large number of components that make up the artifact. A large scale system intended for use in remote space or in the deep sea should be adaptive as well as robust, because control and maintenance by human operators are difficult at such a distance. One approach to realizing such a large scale, adaptive and robust system is to build it as an assemblage of components that are each adaptive by themselves. In this case, the robustness of the system can be achieved by using a large number of such components together with suitable adaptation and maintenance strategies. Such systems have attracted considerable research interest, and studies on topics such as decentralized motion control, configuration algorithms and the characteristics of structural elements have been reported. In this article, a recursive architecture concept is developed and discussed towards the realization of a large scale system consisting of a number of uniform adaptive components. We propose an adaptation strategy based on this architecture and its implementation by means of hierarchically connected processing units. The robustness of the system and its restoration from degeneration of a processing unit are also discussed. Two- and three-dimensional adaptive truss structures are conceptually designed based on the recursive architecture.
NASA Astrophysics Data System (ADS)
Siade, A. J.; Prommer, H.; Welter, D.
2014-12-01
Groundwater management and remediation require the implementation of numerical models in order to evaluate potential anthropogenic impacts on aquifer systems. In many situations, the numerical model must be able to simulate not only groundwater flow and transport but also geochemical and biological processes. Each process being simulated carries with it a set of parameters that must be identified, along with differing potential sources of model-structure error. Various data types are often collected in the field and then used to calibrate the numerical model; however, these data types can represent very different processes and can subsequently be sensitive to the model parameters in extremely complex ways. Therefore, developing an appropriate weighting strategy to address the contributions of each data type to the overall least-squares objective function is not straightforward. This is further compounded by the presence of potential sources of model-structure error that manifest themselves differently for each observation data type. Finally, reactive transport models are highly nonlinear, which can lead to convergence failure for algorithms operating on the assumption of local linearity. In this study, we propose a variation of the popular particle swarm optimization algorithm to address trade-offs associated with the calibration of one data type over another. This method removes the need to specify weights between observation groups and instead produces a multi-dimensional Pareto front that illustrates the trade-offs between data types. We use the PEST++ run manager, along with the standard PEST input/output structure, to implement parallel programming across multiple desktop computers using TCP/IP communications. This allows for very large swarms of particles without the need for a supercomputing facility. The method was applied to a case study in which modeling was used to gain insight into the mobilization of arsenic at a deep-well injection site.
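For orientation, the core of a standard global-best particle swarm optimizer is only a few lines. The sketch below is the vanilla single-objective algorithm; the paper's contribution is a multi-objective variation that builds a Pareto front across observation data types, which this baseline does not attempt.

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Vanilla global-best PSO minimizing f over a box given as
    bounds = (lower, upper). Returns the best point and its value."""
    rng = np.random.default_rng(seed)
    lo = np.array(bounds[0], float)
    hi = np.array(bounds[1], float)
    x = rng.uniform(lo, hi, (n_particles, lo.size))   # positions
    v = np.zeros_like(x)                              # velocities
    pbest = x.copy()                                  # personal bests
    pbest_f = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()              # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w*v + c1*r1*(pbest - x) + c2*r2*(g - x)   # velocity update
        x = np.clip(x + v, lo, hi)                    # keep inside the box
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()
```

Because each particle's objective evaluation is independent, the inner loop parallelizes naturally over workers, which is what the PEST++ run manager exploits in the study.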
Dondo, Rodolfo; Marqués, Dardo
2003-04-01
The computation of optimal control profiles for batch bioreactors is based on the use of simple and empirical dynamic models. Since these models present some level of uncertainty, the difference between the model dynamics and the reactor dynamics can have significant effects in the reliability of the calculated profile. To develop near optimal control trajectories considering this drawback, we propose to calculate successive control profiles on a moving time horizon using a mathematical model in which the kinetic parameters are estimated by an observer. The desired objective is to generate a near optimal control trajectory adapted to the "running" fermentation. This idea results in a nonlinear estimator plus an optimizer arrangement that so far has not been applied to batch fermentors. Numerical simulations are performed on xanthan-gum batch fermentations and reasonably good results are obtained.
Method for nonlinear optimization for gas tagging and other systems
Chen, T.; Gross, K.C.; Wegerich, S.
1998-01-06
A method and system are disclosed for providing nuclear fuel rods with a configuration of isotopic gas tags. The method includes selecting a true location of a first gas tag node, selecting initial locations for the remaining n-1 nodes using target gas tag compositions, generating a set of random gene pools with L nodes, and applying a Hopfield network to compute an energy, or cost, for each of the L gene pools. Selected constraints are used to establish minimum-energy states that identify optimal gas tag nodes, with each energy compared to a convergence threshold; once a gas tag node is identified, the procedure continues to the next gas tag node until all remaining n nodes have been established. 6 figs.
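The Hopfield-network energy minimization invoked by the patent can be illustrated in miniature. The sketch below is a generic binary Hopfield net with Hebbian weights, not the gas-tag formulation: asynchronous updates never increase the energy E(s) = -(1/2) s^T W s, so the state settles into a stored minimum-energy configuration.

```python
import numpy as np

def hopfield_train(patterns):
    """Hebbian weight matrix for binary (+1/-1) patterns, zero diagonal."""
    p = np.array(patterns, float)
    W = p.T @ p / p.shape[1]
    np.fill_diagonal(W, 0.0)
    return W

def hopfield_recall(W, s, sweeps=10):
    """Asynchronous sign updates; each flip is energy-non-increasing for
    E(s) = -0.5 * s @ W @ s, so the net descends to a local minimum."""
    s = np.array(s, float)
    for _ in range(sweeps):
        for i in range(s.size):
            s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    return s
```

In the patent's setting the "patterns" would encode candidate tag-node assignments and the constraints would be folded into the weights; here a corrupted pattern is simply restored to the stored one.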
Method for nonlinear optimization for gas tagging and other systems
Chen, Ting; Gross, Kenny C.; Wegerich, Stephan
1998-01-01
A method and system for providing nuclear fuel rods with a configuration of isotopic gas tags. The method includes selecting a true location of a first gas tag node, selecting initial locations for the remaining n-1 nodes using target gas tag compositions, generating a set of random gene pools with L nodes, and applying a Hopfield network to compute an energy, or cost, for each of the L gene pools. Selected constraints are used to establish minimum-energy states that identify optimal gas tag nodes, with each energy compared to a convergence threshold; once a gas tag node is identified, the procedure continues to the next gas tag node until all remaining n nodes have been established.
Large-Scale Coronal Heating from "Cool" Activity in the Solar Magnetic Network
NASA Technical Reports Server (NTRS)
Falconer, D. A.; Moore, R. L.; Porter, J. G.; Hathaway, D. H.
1999-01-01
In Fe XII images from SOHO/EIT, the quiet solar corona shows structure on scales ranging from sub-supergranular (i.e., bright points and coronal network) to multi-supergranular (large-scale corona). In Falconer et al 1998 (Ap.J., 501, 386) we suppressed the large-scale background and found that the network-scale features are predominantly rooted in the magnetic network lanes at the boundaries of the supergranules. Taken together, the coronal network emission and bright point emission are only about 5% of the entire quiet solar coronal Fe XII emission. Here we investigate the relationship between the large-scale corona and the network as seen in three different EIT filters (He II, Fe IX-X, and Fe XII). Using the median-brightness contour, we divide the large-scale Fe XII corona into dim and bright halves, and find that the bright-half/dim half brightness ratio is about 1.5. We also find that the bright half relative to the dim half has 10 times greater total bright point Fe XII emission, 3 times greater Fe XII network emission, 2 times greater Fe IX-X network emission, 1.3 times greater He II network emission, and has 1.5 times more magnetic flux. Also, the cooler network (He II) radiates an order of magnitude more energy than the hotter coronal network (Fe IX-X, and Fe XII). From these results we infer that: 1) The heating of the network and the heating of the large-scale corona each increase roughly linearly with the underlying magnetic flux. 2) The production of network coronal bright points and heating of the coronal network each increase nonlinearly with the magnetic flux. 3) The heating of the large-scale corona is driven by widespread cooler network activity rather than by the exceptional network activity that produces the network coronal bright points and the coronal network. 4) The large-scale corona is heated by a nonthermal process since the driver of its heating is cooler than it is. This work was funded by the Solar Physics Branch of NASA's office of
Large-scale simulations of complex physical systems
NASA Astrophysics Data System (ADS)
Belić, A.
2007-04-01
Scientific computing has become a tool as vital as experimentation and theory for dealing with scientific challenges of the twenty-first century. Large scale simulations and modelling serve as heuristic tools in a broad problem-solving process. High-performance computing facilities make possible the first step in this process - a view of new and previously inaccessible domains in science and the building up of intuition regarding the new phenomenology. The final goal of this process is to translate this newly found intuition into better algorithms and new analytical results. In this presentation we give an outline of the research themes pursued at the Scientific Computing Laboratory of the Institute of Physics in Belgrade regarding large-scale simulations of complex classical and quantum physical systems, and present recent results obtained in the large-scale simulations of granular materials and path integrals.
Large-scale velocity structures in turbulent thermal convection.
Qiu, X L; Tong, P
2001-09-01
A systematic study of large-scale velocity structures in turbulent thermal convection is carried out in three different aspect-ratio cells filled with water. Laser Doppler velocimetry is used to measure the velocity profiles and statistics over varying Rayleigh numbers Ra and at various spatial positions across the whole convection cell. Large velocity fluctuations are found both in the central region and near the cell boundary. Despite the large velocity fluctuations, the flow field still maintains a large-scale quasi-two-dimensional structure, which rotates in a coherent manner. This coherent single-roll structure scales with Ra and can be divided into three regions in the rotation plane: (1) a thin viscous boundary layer, (2) a fully mixed central core region with a constant mean velocity gradient, and (3) an intermediate plume-dominated buffer region. The experiment reveals a unique driving mechanism for the large-scale coherent rotation in turbulent convection.
Acoustic Studies of the Large Scale Ocean Circulation
NASA Technical Reports Server (NTRS)
Menemenlis, Dimitris
1999-01-01
Detailed knowledge of ocean circulation and its transport properties is prerequisite to an understanding of the earth's climate and of important biological and chemical cycles. Results from two recent experiments, THETIS-2 in the Western Mediterranean and ATOC in the North Pacific, illustrate the use of ocean acoustic tomography for studies of the large scale circulation. The attraction of acoustic tomography is its ability to sample and average the large-scale oceanic thermal structure, synoptically, along several sections, and at regular intervals. In both studies, the acoustic data are compared to, and then combined with, general circulation models, meteorological analyses, satellite altimetry, and direct measurements from ships. Both studies provide complete regional descriptions of the time-evolving, three-dimensional, large scale circulation, albeit with large uncertainties. The studies raise serious issues about existing ocean observing capability and provide guidelines for future efforts.
Toward Improved Support for Loosely Coupled Large Scale Simulation Workflows
Boehm, Swen; Elwasif, Wael R; Naughton, III, Thomas J; Vallee, Geoffroy R
2014-01-01
High-performance computing (HPC) workloads are increasingly leveraging loosely coupled large scale simulations. Unfortunately, most large-scale HPC platforms, including Cray/ALPS environments, are designed for the execution of long-running jobs based on coarse-grained launch capabilities (e.g., one MPI rank per core on all allocated compute nodes). This assumption limits capability-class workload campaigns that require large numbers of discrete or loosely coupled simulations, and where time-to-solution is an untenable pacing issue. This paper describes the challenges related to the support of fine-grained launch capabilities that are necessary for the execution of loosely coupled large scale simulations on Cray/ALPS platforms. More precisely, we present the details of an enhanced runtime system to support this use case, and report on initial results from early testing on systems at Oak Ridge National Laboratory.
PKI security in large-scale healthcare networks.
Mantas, Georgios; Lymberopoulos, Dimitrios; Komninos, Nikos
2012-06-01
During the past few years, many PKIs (Public Key Infrastructures) have been proposed for healthcare networks in order to ensure secure communication services and exchange of data among healthcare professionals. However, these healthcare PKIs face a plethora of challenges, especially when deployed over large-scale healthcare networks. In this paper, we propose a PKI to ensure security in a large-scale Internet-based healthcare network connecting a wide spectrum of healthcare units geographically distributed within a wide region. Furthermore, the proposed PKI addresses the trust issues that arise in a large-scale healthcare network comprising multiple PKI domains.
Mayorga, René V; Arriaga, Mariano
2007-10-01
In this article, a novel technique for non-linear global optimization is presented. The main goal is to find the optimal global solution of non-linear problems while avoiding sub-optimal local solutions and inflection points. The proposed technique is based on a two-step concept: steadily decrease the value of the objective function, and compute the corresponding independent variables by approximating its inverse function. The decreasing process can continue even after reaching local minima and, in general, the algorithm stops when it converges to solutions near the global minimum. Implementing the proposed technique with conventional numerical methods may require considerable computational effort to approximate the inverse function. Thus, a novel Artificial Neural Network (ANN) approach is implemented here to reduce the computational requirements of the proposed optimization technique. This approach is successfully tested on several highly non-linear functions possessing multiple local minima. The results demonstrate that the proposed approach compares favorably with some current conventional numerical methods (Matlab functions) and with non-conventional optimization methods (Evolutionary Algorithms, Simulated Annealing).
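The two-step idea described in this abstract can be illustrated with a toy sketch. Here plain random sampling stands in for the ANN inverse-function approximation, and the test function, bounds, and step size are illustrative assumptions, not the authors' implementation:

```python
import random

def two_step_minimize(f, x0, lo, hi, delta=0.05, n_samples=400,
                      max_iter=200, seed=0):
    """Toy sketch of the target-decreasing concept: step 1 demands a lower
    objective value; step 2 searches for an x that attains it (here by
    plain random sampling, standing in for an inverse-function model)."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    for _ in range(max_iter):
        target = fx - delta                      # step 1: lower the target value
        candidates = [lo + (hi - lo) * rng.random() for _ in range(n_samples)]
        best = min(candidates, key=f)            # step 2: "invert" by search
        if f(best) <= target:
            x, fx = best, f(best)                # keep descending past local minima
        else:
            break                                # no point below target: near the minimum
    return x, fx

# 1-D test function with two local minima; the global minimum is near x = -1.30.
f = lambda x: x**4 - 3 * x**2 + x
x_star, f_star = two_step_minimize(f, x0=1.0, lo=-2.5, hi=2.5)
```

Starting from the shallow basin near x = 1, the target-decreasing loop escapes the local minimum and settles near the global one, which is the behavior the abstract claims for the full method.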
Luo, Biao; Wu, Huai-Ning; Li, Han-Xiong
2015-04-01
Highly dissipative nonlinear partial differential equations (PDEs) are widely employed to describe the system dynamics of industrial spatially distributed processes (SDPs). In this paper, we consider the optimal control problem of general highly dissipative SDPs and propose an adaptive optimal control approach based on neuro-dynamic programming (NDP). Initially, Karhunen-Loève decomposition is employed to compute empirical eigenfunctions (EEFs) of the SDP based on the method of snapshots. These EEFs, together with a singular perturbation technique, are then used to obtain a finite-dimensional slow subsystem of ordinary differential equations that accurately describes the dominant dynamics of the PDE system. Subsequently, the optimal control problem is reformulated on the basis of the slow subsystem, which is further converted to solving a Hamilton-Jacobi-Bellman (HJB) equation. The HJB equation is a nonlinear PDE that is generally impossible to solve analytically. Thus, an adaptive optimal control method is developed via NDP that solves the HJB equation online, using a neural network (NN) to approximate the value function; an online NN weight tuning law is proposed that does not require an initial stabilizing control policy. Moreover, by accounting for the NN estimation error, we prove that the original closed-loop PDE system with the adaptive optimal control policy is semiglobally uniformly ultimately bounded. Finally, the developed method is tested on a nonlinear diffusion-convection-reaction process and applied to a temperature cooling fin of a high-speed aerospace vehicle, and the achieved results show its effectiveness.
NASA Technical Reports Server (NTRS)
Carter, Richard G.
1989-01-01
For optimization problems associated with engineering design, parameter estimation, image reconstruction, and other optimization/simulation applications, low accuracy function and gradient values are frequently much less expensive to obtain than high accuracy values. Here, researchers investigate the computational performance of trust region methods for nonlinear optimization when high accuracy evaluations are unavailable or prohibitively expensive, and confirm earlier theoretical predictions that the algorithm converges even with relative gradient errors of 0.5 or more. The proper choice of the amount of accuracy to use in function and gradient evaluations can result in orders-of-magnitude savings in computational cost.
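The tolerance of trust region methods to inexact gradients can be sketched minimally. This is a first-order (Cauchy-step) trust-region loop on a synthetic quadratic with roughly 30% relative gradient error; the problem, parameters, and update thresholds are illustrative assumptions, not the paper's algorithm:

```python
import math, random

def trust_region_minimize(f, grad, x0, radius=1.0, max_iter=200, tol=1e-6):
    """Minimal trust-region sketch using a first-order (Cauchy-step) model.
    Acceptance is based on rho, the ratio of actual to model-predicted
    reduction; this ratio test is what lets the method tolerate gradient error."""
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        gnorm = math.sqrt(sum(gi * gi for gi in g))
        if gnorm < tol:
            break
        step = [-radius * gi / gnorm for gi in g]    # move to the trust boundary
        predicted = radius * gnorm                   # linear-model reduction
        x_trial = [xi + si for xi, si in zip(x, step)]
        rho = (f(x) - f(x_trial)) / predicted
        if rho > 0.75:
            radius = min(2.0 * radius, 10.0)         # model good: expand region
        elif rho < 0.25:
            radius *= 0.25                           # model poor: shrink region
        if rho > 0.1:
            x = x_trial                              # accept the trial step
    return x

# Quadratic test problem; each gradient component carries ~30% relative error.
f = lambda x: (x[0] - 3.0) ** 2 + 2.0 * (x[1] + 1.0) ** 2
rng = random.Random(1)

def noisy_grad(x):
    exact = [2.0 * (x[0] - 3.0), 4.0 * (x[1] + 1.0)]
    return [gi * (1.0 + 0.3 * (2.0 * rng.random() - 1.0)) for gi in exact]

x_opt = trust_region_minimize(f, noisy_grad, [0.0, 0.0])
```

Because the multiplicative noise keeps each gradient component's sign, the perturbed direction remains a descent direction, and the ratio test shrinks the radius whenever the noisy model misleads; the iterates still converge to the minimizer at (3, -1).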
Maeda, Jin; Suzuki, Tatsuya; Takayama, Kozo
2012-12-01
A large-scale design space was constructed using a Bayesian estimation method with a small-scale design of experiments (DoE) and small sets of large-scale manufacturing data without enforcing a large-scale DoE. The small-scale DoE was conducted using various Froude numbers (X(1)) and blending times (X(2)) in the lubricant blending process for theophylline tablets. The response surfaces, design space, and their reliability of the compression rate of the powder mixture (Y(1)), tablet hardness (Y(2)), and dissolution rate (Y(3)) on a small scale were calculated using multivariate spline interpolation, a bootstrap resampling technique, and self-organizing map clustering. The constant Froude number was applied as a scale-up rule. Three experiments under an optimal condition and two experiments under other conditions were performed on a large scale. The response surfaces on the small scale were corrected to those on a large scale by Bayesian estimation using the large-scale results. Large-scale experiments under three additional sets of conditions showed that the corrected design space was more reliable than that on the small scale, even if there was some discrepancy in the pharmaceutical quality between the manufacturing scales. This approach is useful for setting up a design space in pharmaceutical development when a DoE cannot be performed at a commercial large manufacturing scale.
Nonlinear dynamical systems of fed-batch fermentation and their optimal control
NASA Astrophysics Data System (ADS)
Liu, Chongyang; Gong, Zhaohua; Feng, Enmin; Yin, Hongchao
2012-05-01
In this article, we propose a controlled nonlinear dynamical system with variable switching instants, in which the feeding rate of glycerol is regarded as the control function and the moments between the batch and feeding processes as switching instants, to formulate the fed-batch fermentation of glycerol bioconversion to 1,3-propanediol (1,3-PD). Some important properties of the proposed system and its solution are then discussed. Taking the concentration of 1,3-PD at the terminal time as the cost functional, we establish an optimal control model involving the controlled nonlinear dynamical system and subject to continuous state inequality constraints. The existence of the optimal control is also proved. A computational approach is constructed on the basis of constraint transcription and smoothing approximation techniques. Numerical results show that, by employing the optimal control strategy, the concentration of 1,3-PD at the terminal time can be increased considerably.
A nonlinear optimization approach for UPFC power flow control and voltage security
NASA Astrophysics Data System (ADS)
Kalyani, Radha Padma
This dissertation provides a nonlinear optimization algorithm for the long-term control of the Unified Power Flow Controller (UPFC) to remove overloads and voltage violations through optimized control of power flows and voltages in the power network. It provides a control strategy for finding the long-term control settings of one or more UPFCs by considering all the possible settings and all the (N-1) topologies of a power network. Also, a simple evolutionary algorithm (EA) is proposed for the placement of more than one UPFC in large power systems. In this publication-style dissertation, Paper 1 proposes the algorithm and provides the mathematical and empirical evidence. Paper 2 focuses on comparing the proposed algorithm with a Linear Programming (LP)-based corrective method recently proposed in the literature, and on mitigating cascading failures in larger power systems. The EA for placement, along with preliminary results of the nonlinear optimization, is given in Paper 3.
NASA Astrophysics Data System (ADS)
Yang, Xiong; Liu, Derong; Wang, Ding
2014-03-01
In this paper, an adaptive reinforcement learning-based solution is developed for the infinite-horizon optimal control problem of constrained-input continuous-time nonlinear systems in the presence of nonlinearities with unknown structures. Two different types of neural networks (NNs) are employed to approximate the Hamilton-Jacobi-Bellman equation. That is, a recurrent NN is constructed to identify the unknown dynamical system, and two feedforward NNs are used as the actor and the critic to approximate the optimal control and the optimal cost, respectively. Based on this framework, the action NN and the critic NN are tuned simultaneously, without requiring knowledge of the system drift dynamics. Moreover, by using Lyapunov's direct method, the weights of the action NN and the critic NN are guaranteed to be uniformly ultimately bounded, while keeping the closed-loop system stable. To demonstrate the effectiveness of the present approach, simulation results are presented.
Magnetic Helicity and Large Scale Magnetic Fields: A Primer
NASA Astrophysics Data System (ADS)
Blackman, Eric G.
2015-05-01
Magnetic fields of laboratory, planetary, stellar, and galactic plasmas commonly exhibit significant order on large temporal or spatial scales compared to the otherwise random motions within the hosting system. Such ordered fields can be measured in the case of planets, stars, and galaxies, or inferred indirectly by the action of their dynamical influence, such as jets. Whether large scale fields are amplified in situ or a remnant from previous stages of an object's history is often debated for objects without a definitive magnetic activity cycle. Magnetic helicity, a measure of twist and linkage of magnetic field lines, is a unifying tool for understanding large scale field evolution for both mechanisms of origin. Its importance stems from its two basic properties: (1) magnetic helicity is typically better conserved than magnetic energy; and (2) the magnetic energy associated with a fixed amount of magnetic helicity is minimized when the system relaxes this helical structure to the largest scale available. Here I discuss how magnetic helicity has come to help us understand the saturation of and sustenance of large scale dynamos, the need for either local or global helicity fluxes to avoid dynamo quenching, and the associated observational consequences. I also discuss how magnetic helicity acts as a hindrance to turbulent diffusion of large scale fields, and thus a helper for fossil remnant large scale field origin models in some contexts. I briefly discuss the connection between large scale fields and accretion disk theory as well. The goal here is to provide a conceptual primer to help the reader efficiently penetrate the literature.
Large scale purification of RNA nanoparticles by preparative ultracentrifugation.
Jasinski, Daniel L; Schwartz, Chad T; Haque, Farzin; Guo, Peixuan
2015-01-01
Purification of large quantities of supramolecular RNA complexes is of paramount importance due to the large quantities of RNA needed and the purity requirements for in vitro and in vivo assays. Purification is generally carried out by liquid chromatography (HPLC), polyacrylamide gel electrophoresis (PAGE), or agarose gel electrophoresis (AGE). Here, we describe an efficient method for the large-scale purification of RNA prepared by in vitro transcription using T7 RNA polymerase by cesium chloride (CsCl) equilibrium density gradient ultracentrifugation and the large-scale purification of RNA nanoparticles by sucrose gradient rate-zonal ultracentrifugation or cushioned sucrose gradient rate-zonal ultracentrifugation.
The Evolution of Baryons in Cosmic Large Scale Structure
NASA Astrophysics Data System (ADS)
Snedden, Ali; Arielle Phillips, Lara; Mathews, Grant James; Coughlin, Jared; Suh, In-Saeng; Bhattacharya, Aparna
2015-01-01
The environments of galaxies play a critical role in their formation and evolution. We study these environments using cosmological simulations with star formation and supernova feedback included. From these simulations, we parse the large scale structure into clusters, filaments and voids using a segmentation algorithm adapted from medical imaging. We trace the star formation history, gas phase, and metal evolution of the baryons in the intergalactic medium as a function of structure. We find that our algorithm reproduces the baryon fraction in the intracluster medium and that the majority of star formation occurs in cold, dense filaments. We present the consequences this large scale environment has for galactic halos and galaxy evolution.
Large-Scale Graph Processing Analysis using Supercomputer Cluster
NASA Astrophysics Data System (ADS)
Vildario, Alfrido; Fitriyani; Nugraha Nurkahfi, Galih
2017-01-01
Graph processing is widely used in various sectors such as automotive, traffic, image processing, and many more. These applications produce graphs of large-scale dimension, so processing them requires long computation times and high-specification resources. This research addresses the analysis of large-scale graph processing on a supercomputer cluster. We implemented graph processing using the Breadth-First Search (BFS) algorithm for the single-destination shortest-path problem. The parallel BFS implementation with the Message Passing Interface (MPI) was run on the supercomputer cluster at the High Performance Computing Laboratory, Computational Science, Telkom University, using graphs from the Stanford Large Network Dataset Collection. The results showed that the implementation achieves an average speedup of more than 30 times and an efficiency of almost 90%.
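The single-destination shortest-path BFS the study parallelizes can be sketched serially as follows; the toy graph is hypothetical, and an MPI version would partition the vertex set across ranks and exchange each level's frontier with collective communication:

```python
from collections import deque

def bfs_shortest_path(adj, source, dest):
    """Level-order BFS on an unweighted directed graph: returns the number
    of edges on a shortest source-to-dest path, or -1 if dest is unreachable."""
    dist = {source: 0}
    frontier = deque([source])
    while frontier:
        u = frontier.popleft()
        if u == dest:
            return dist[u]
        for v in adj.get(u, ()):
            if v not in dist:          # first visit is via a shortest path
                dist[v] = dist[u] + 1
                frontier.append(v)
    return -1

# Hypothetical toy graph; node 4 is three hops from node 0.
graph = {0: [1, 2], 1: [3], 2: [3], 3: [4]}
hops = bfs_shortest_path(graph, 0, 4)
```

Each while-loop pass expands one frontier vertex; in the level-synchronous parallel variant, all vertices of the current level are expanded at once, which is what makes the algorithm amenable to the MPI speedups the abstract reports.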
Corridors Increase Plant Species Richness at Large Scales
Damschen, Ellen I.; Haddad, Nick M.; Orrock, John L.; Tewksbury, Joshua J.; Levey, Douglas J.
2006-09-01
Habitat fragmentation is one of the largest threats to biodiversity. Landscape corridors, which are hypothesized to reduce the negative consequences of fragmentation, have become common features of ecological management plans worldwide. Despite their popularity, there is little evidence documenting the effectiveness of corridors in preserving biodiversity at large scales. Using a large-scale replicated experiment, we showed that habitat patches connected by corridors retain more native plant species than do isolated patches, that this difference increases over time, and that corridors do not promote invasion by exotic species. Our results support the use of corridors in biodiversity conservation.
Clearing and Labeling Techniques for Large-Scale Biological Tissues
Seo, Jinyoung; Choe, Minjin; Kim, Sung-Yon
2016-01-01
Clearing and labeling techniques for large-scale biological tissues enable simultaneous extraction of molecular and structural information with minimal disassembly of the sample, facilitating the integration of molecular, cellular and systems biology across different scales. Recent years have witnessed an explosive increase in the number of such methods and their applications, reflecting heightened interest in organ-wide clearing and labeling across many fields of biology and medicine. In this review, we provide an overview and comparison of existing clearing and labeling techniques and discuss challenges and opportunities in the investigations of large-scale biological systems. PMID:27239813
NASA Astrophysics Data System (ADS)
Wöhling, T.; Geiges, A.; Gosses, M.; Nowak, W.
2014-12-01
Data acquisition in complex environmental systems is typically expensive. Therefore, experimental designs should be optimized such that the most can be learned about the system at the least cost. In the past, optimal design (OD) analyses were mainly restricted to linear or linearized problems and methods. Nonlinear OD methods offer more efficient data collection strategies because they can better handle the non-linearity exhibited by most coupled environmental systems. However, their much higher computational demand restricts their applicability to models with comparatively low run-times. Our goal is to compare the trade-off between computational efficiency and obtainable design quality for linear and nonlinear OD methods. In our study, a steady-state model for a section of the river Steinlach (southern Germany) was set up and calibrated to measured groundwater head data and to estimated groundwater exchange fluxes. The model involves a pilot-point parameterization scheme for hydraulic conductivity and six zones with uncertain riverbed conductivities. In the linear OD approach, the initial predictive uncertainty of groundwater exchange fluxes and mean travel times is estimated using the PREDUNC utility (Moore and Doherty 2005) of PEST. The parameter calibration was performed with a non-linear global search. A discrete global search method and PREDUNC were then utilized to identify augmented monitoring strategies (n additional measurement locations and data types) that reduce the predictive uncertainty the most. For the nonlinear assessment, a conditional ensemble obtained with Markov-chain Monte Carlo represents the initial state of uncertainty and is used as input to a nonlinear OD framework called PreDIA (Leube et al. 2012). PreDIA can consider any kind of uncertainty and non-linear (statistical) dependencies in data, models, parameters and system drivers during the OD process. The linear and non-linear approaches are compared thoroughly during each step of the
Nonlinear Analysis and Optimal Design of Dynamic Mechanical Systems for Spacecraft Application.
1987-09-01
The links were considered flexible, and thus a quasi-static (linear) finite element analysis was used to obtain deformations and stresses. The GRG... (Clarkson University, Potsdam, NY; Final Report, AFOSR 84-0076.)
Solution algorithms for non-linear singularly perturbed optimal control problems
NASA Technical Reports Server (NTRS)
Ardema, M. D.
1983-01-01
The applicability and usefulness of several classical and other methods for solving the two-point boundary-value problem which arises in non-linear singularly perturbed optimal control are assessed. Specific algorithms of the Picard, Newton and averaging types are formally developed for this class of problem. The computational requirements associated with each algorithm are analysed and compared with the computational requirement of the method of matched asymptotic expansions. Approximate solutions to a linear and a non-linear problem are obtained by each method and compared.
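As a concrete illustration of a Newton-type algorithm for a two-point boundary-value problem (a simple non-stiff example, not the singularly perturbed problems the paper treats), here is a shooting method with a secant correction on the classic nonlinear BVP y'' = 1.5 y^2, y(0) = 4, y(1) = 1, whose exact solution is y = 4/(1+x)^2 with y'(0) = -8:

```python
def rk4(f, y, h, steps):
    # Classic fourth-order Runge-Kutta integration of the system y' = f(y).
    for _ in range(steps):
        k1 = f(y)
        k2 = f([yi + 0.5 * h * ki for yi, ki in zip(y, k1)])
        k3 = f([yi + 0.5 * h * ki for yi, ki in zip(y, k2)])
        k4 = f([yi + h * ki for yi, ki in zip(y, k3)])
        y = [yi + h * (a + 2 * b + 2 * c + d) / 6.0
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
    return y

# State is (y, y'); the second-order ODE becomes a first-order system.
rhs = lambda y: [y[1], 1.5 * y[0] ** 2]

def residual(slope, steps=200):
    """Shoot from x = 0 with y(0) = 4, y'(0) = slope; return y(1) - 1."""
    yT = rk4(rhs, [4.0, slope], 1.0 / steps, steps)
    return yT[0] - 1.0

# Secant iteration (a derivative-free Newton correction) on the slope,
# starting from guesses that bracket the root.
s0, s1 = -7.0, -9.0
for _ in range(20):
    r0, r1 = residual(s0), residual(s1)
    if abs(r1) < 1e-12 or r1 == r0:
        break
    s0, s1 = s1, s1 - r1 * (s1 - s0) / (r1 - r0)
```

The two-point boundary-value problem is thus reduced to a root-finding problem in the unknown initial slope, the structure exploited by the Newton-type algorithms the abstract compares.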
The Effective Field Theory of Large Scale Structures at two loops
Carrasco, John Joseph M.; Foreman, Simon; Green, Daniel; Senatore, Leonardo
2014-07-01
Large scale structure surveys promise to be the next leading probe of cosmological information. It is therefore crucial to reliably predict their observables. The Effective Field Theory of Large Scale Structures (EFTofLSS) provides a manifestly convergent perturbation theory for the weakly non-linear regime of dark matter, where correlation functions are computed in an expansion of the wavenumber k of a mode over the wavenumber associated with the non-linear scale, k_NL. Since most of the information is contained at high wavenumbers, it is necessary to compute higher order corrections to correlation functions. After the one-loop correction to the matter power spectrum, we estimate that the next leading one is the two-loop contribution, which we compute here. At this order in k/k_NL, there is only one counterterm in the EFTofLSS that must be included, though this term contributes both at tree level and in several one-loop diagrams. We also discuss correlation functions involving the velocity and momentum fields. We find that the EFTofLSS prediction at two loops matches to percent accuracy the non-linear matter power spectrum at redshift zero up to k ∼ 0.6 h Mpc⁻¹, requiring just one unknown coefficient that needs to be fit to observations. Given that Standard Perturbation Theory stops converging at redshift zero at k ∼ 0.1 h Mpc⁻¹, our results demonstrate the possibility of accessing a factor of order 200 more dark matter quasi-linear modes than naively expected. If the remaining observational challenges to accessing these modes can be addressed with similar success, our results show that there is tremendous potential for large scale structure surveys to explore the primordial universe.
Large-scale search for dark-matter axions
Kinion, D; van Bibber, K
2000-08-30
We review the status of two ongoing large-scale searches for axions which may constitute the dark matter of our Milky Way halo. The experiments are based on the microwave cavity technique proposed by Sikivie, and mark a "second generation" relative to the original experiments performed by the Rochester-Brookhaven-Fermilab collaboration and the University of Florida group.
Measurement, Sampling, and Equating Errors in Large-Scale Assessments
ERIC Educational Resources Information Center
Wu, Margaret
2010-01-01
In large-scale assessments, such as state-wide testing programs, national sample-based assessments, and international comparative studies, there are many steps involved in the measurement and reporting of student achievement. There are always sources of inaccuracies in each of the steps. It is of interest to identify the source and magnitude of…
Resilience of Florida Keys coral communities following large scale disturbances
The decline of coral reefs in the Caribbean over the last 40 years has been attributed to multiple chronic stressors and episodic large-scale disturbances. This study assessed the resilience of coral communities in two different regions of the Florida Keys reef system between 199...
Large-Scale Machine Learning for Classification and Search
ERIC Educational Resources Information Center
Liu, Wei
2012-01-01
With the rapid development of the Internet, nowadays tremendous amounts of data including images and videos, up to millions or billions, can be collected for training machine learning models. Inspired by this trend, this thesis is dedicated to developing large-scale machine learning techniques for the purpose of making classification and nearest…
Efficient On-Demand Operations in Large-Scale Infrastructures
ERIC Educational Resources Information Center
Ko, Steven Y.
2009-01-01
In large-scale distributed infrastructures such as clouds, Grids, peer-to-peer systems, and wide-area testbeds, users and administrators typically desire to perform "on-demand operations" that deal with the most up-to-date state of the infrastructure. However, the scale and dynamism present in the operating environment make it challenging to…
Assuring Quality in Large-Scale Online Course Development
ERIC Educational Resources Information Center
Parscal, Tina; Riemer, Deborah
2010-01-01
Student demand for online education requires colleges and universities to rapidly expand the number of courses and programs offered online while maintaining high quality. This paper outlines two universities respective processes to assure quality in large-scale online programs that integrate instructional design, eBook custom publishing, Quality…
Large-Scale Assessments and Educational Policies in Italy
ERIC Educational Resources Information Center
Damiani, Valeria
2016-01-01
Despite Italy's extensive participation in most large-scale assessments, their actual influence on Italian educational policies is less easy to identify. The present contribution aims at highlighting and explaining reasons for the weak and often inconsistent relationship between international surveys and policy-making processes in Italy.…
Improving the Utility of Large-Scale Assessments in Canada
ERIC Educational Resources Information Center
Rogers, W. Todd
2014-01-01
Principals and teachers do not use large-scale assessment results because the lack of distinct and reliable subtests prevents identifying strengths and weaknesses of students and instruction, the results arrive too late to be used, and principals and teachers need assistance to use the results to improve instruction so as to improve student…
Current Scientific Issues in Large Scale Atmospheric Dynamics
NASA Technical Reports Server (NTRS)
Miller, T. L. (Compiler)
1986-01-01
Topics in large scale atmospheric dynamics are discussed. Aspects of atmospheric blocking, the influence of transient baroclinic eddies on planetary-scale waves, cyclogenesis, the effects of orography on planetary scale flow, small scale frontal structure, and simulations of gravity waves in frontal zones are discussed.
Large-Scale Innovation and Change in UK Higher Education
ERIC Educational Resources Information Center
Brown, Stephen
2013-01-01
This paper reflects on challenges universities face as they respond to change. It reviews current theories and models of change management, discusses why universities are particularly difficult environments in which to achieve large scale, lasting change and reports on a recent attempt by the UK JISC to enable a range of UK universities to employ…
Mixing Metaphors: Building Infrastructure for Large Scale School Turnaround
ERIC Educational Resources Information Center
Peurach, Donald J.; Neumerski, Christine M.
2015-01-01
The purpose of this analysis is to increase understanding of the possibilities and challenges of building educational infrastructure--the basic, foundational structures, systems, and resources--to support large-scale school turnaround. Building educational infrastructure often exceeds the capacity of schools, districts, and state education…
Large-Scale Environmental Influences on Aquatic Animal Health
In the latter portion of the 20th century, North America experienced numerous large-scale mortality events affecting a broad diversity of aquatic animals. Short-term forensic investigations of these events have sometimes characterized a causative agent or condition, but have rare...
A bibliographical survey of large-scale systems
NASA Technical Reports Server (NTRS)
Corliss, W. R.
1970-01-01
A limited, partly annotated bibliography was prepared on the subject of large-scale system control. Approximately 400 references are divided into thirteen application areas, such as large societal systems and large communication systems. A first-author index is provided.
Probabilistic Cuing in Large-Scale Environmental Search
ERIC Educational Resources Information Center
Smith, Alastair D.; Hood, Bruce M.; Gilchrist, Iain D.
2010-01-01
Finding an object in our environment is an important human ability that also represents a critical component of human foraging behavior. One type of information that aids efficient large-scale search is the likelihood of the object being in one location over another. In this study we investigated the conditions under which individuals respond to…
Large-Scale Physical Separation of Depleted Uranium from Soil
2012-09-01
ERDC/EL TR-12-25. Army Range Technology Program: Large-Scale Physical Separation of Depleted Uranium from Soil. Environmental Laboratory. (Contents include Project Background and Materials and Methods.)
Lessons from Large-Scale Renewable Energy Integration Studies: Preprint
Bird, L.; Milligan, M.
2012-06-01
In general, large-scale integration studies in Europe and the United States find that high penetrations of renewable generation are technically feasible with operational changes and increased access to transmission. This paper describes other key findings such as the need for fast markets, large balancing areas, system flexibility, and the use of advanced forecasting.
Computational Complexity, Efficiency and Accountability in Large Scale Teleprocessing Systems.
1980-12-01
DAAG29-78-C-0036, Stanford University; John T. Gill, Martin E. Hellman. ...solve but easy to check. We have also suggested how such random tapes can be simulated by deterministically generating "pseudorandom" numbers by a
Large-scale silicon optical switches for optical interconnection
NASA Astrophysics Data System (ADS)
Qiao, Lei; Tang, Weijie; Chu, Tao
2016-11-01
Large-scale optical switches are in great demand for building optical interconnections in data centers and high performance computers (HPCs). Silicon optical switches have the advantages of being compact and CMOS-process compatible, so they can easily be monolithically integrated. However, constructing silicon optical switches with large port counts is difficult. One difficulty is the non-uniformity of the switch units in large-scale silicon optical switches, which arises from fabrication error and causes confusion in finding each unit's optimum operating point. In this paper, we propose a method to detect the optimum operating point in a large-scale switch with a limited number of built-in power monitors. We also propose methods for improving the unbalanced crosstalk of the cross/bar states in silicon electro-optical MZI switches and for reducing insertion losses. Our recent progress in large-scale silicon optical switches, including 64 × 64 thermal-optical and 32 × 32 electro-optical switches, will be introduced. To the best of our knowledge, both are the largest-scale silicon optical switches of their respective kinds. The switches were fabricated on 340-nm SOI substrates with CMOS 180-nm processes. The crosstalk of the 32 × 32 electro-optic switch was -19.2 dB to -25.1 dB, while that of the 64 × 64 thermal-optic switch was -30 dB to -48.3 dB.
The large scale microwave background anisotropy in decaying particle cosmology
Panek, M.
1987-06-01
We investigate the large-scale anisotropy of the microwave background radiation in cosmological models with decaying particles. The observed value of the quadrupole moment combined with other constraints gives an upper limit on the redshift of the decay, z_d < 3-5. 12 refs., 2 figs.
DESIGN OF LARGE-SCALE AIR MONITORING NETWORKS
The potential effects of air pollution on human health have received much attention in recent years. In the U.S. and other countries, there are extensive large-scale monitoring networks designed to collect data to inform the public of exposure risks to air pollution. A major crit...
Large Scale Survey Data in Career Development Research
ERIC Educational Resources Information Center
Diemer, Matthew A.
2008-01-01
Large scale survey datasets have been underutilized but offer numerous advantages for career development scholars, as they contain numerous career development constructs with large and diverse samples that are followed longitudinally. Constructs such as work salience, vocational expectations, educational expectations, work satisfaction, and…
The Large-Scale Structure of Scientific Method
ERIC Educational Resources Information Center
Kosso, Peter
2009-01-01
The standard textbook description of the nature of science describes the proposal, testing, and acceptance of a theoretical idea almost entirely in isolation from other theories. The resulting model of science is a kind of piecemeal empiricism that misses the important network structure of scientific knowledge. Only the large-scale description of…
Ecosystem resilience despite large-scale altered hydroclimatic conditions
Technology Transfer Automated Retrieval System (TEKTRAN)
Climate change is predicted to increase both drought frequency and duration, and when coupled with substantial warming, will establish a new hydroclimatological paradigm for many regions. Large-scale, warm droughts have recently impacted North America, Africa, Europe, Amazonia, and Australia result...
International Large-Scale Assessments: What Uses, What Consequences?
ERIC Educational Resources Information Center
Johansson, Stefan
2016-01-01
Background: International large-scale assessments (ILSAs) are a much-debated phenomenon in education. Increasingly, their outcomes attract considerable media attention and influence educational policies in many jurisdictions worldwide. The relevance, uses and consequences of these assessments are often the focus of research scrutiny. Whilst some…
Extracting Useful Semantic Information from Large Scale Corpora of Text
ERIC Educational Resources Information Center
Mendoza, Ray Padilla, Jr.
2012-01-01
Extracting and representing semantic information from large scale corpora is at the crux of computer-assisted knowledge generation. Semantic information depends on collocation extraction methods, mathematical models used to represent distributional information, and weighting functions which transform the space. This dissertation provides a…
Large scale structure of the sun's radio corona
NASA Technical Reports Server (NTRS)
Kundu, M. R.
1986-01-01
Results of studies of large scale structures of the corona at long radio wavelengths are presented, using data obtained with the multifrequency radioheliograph of the Clark Lake Radio Observatory. It is shown that features corresponding to coronal streamers and coronal holes are readily apparent in the Clark Lake maps.
Large-scale screening by the automated Wassermann reaction
Wagstaff, W.; Firth, R.; Booth, J. R.; Bowley, C. C.
1969-01-01
In view of the drawbacks in the use of the Kahn test for large-scale screening of blood donors, mainly those of human error through work overload and fatiguability, an attempt was made to adapt an existing automated complement-fixation technique for this purpose. This paper reports the successful results of that adaptation. PMID:5776559
Large-scale societal changes and intentionality - an uneasy marriage.
Bodor, Péter; Fokas, Nikos
2014-08-01
Our commentary focuses on juxtaposing the proposed science of intentional change with facts and concepts pertaining to the level of large populations or changes on a worldwide scale. Although we find a unified evolutionary theory promising, we think that long-term and large-scale, scientifically guided - that is, intentional - social change is not only impossible, but also undesirable.
NASA Astrophysics Data System (ADS)
Onishchenko, O. G.; Pokhotelov, O. A.; Astafieva, N. M.
2008-06-01
The review deals with a theoretical description of the generation of zonal winds and vortices in a turbulent barotropic atmosphere. These large-scale structures largely determine the dynamics and transport processes in planetary atmospheres. The role of nonlinear effects on the formation of mesoscale vortical structures (cyclones and anticyclones) is examined. A new mechanism for zonal wind generation in planetary atmospheres is discussed. It is based on the parametric generation of convective cells by finite-amplitude Rossby waves. Weakly turbulent spectra of Rossby waves are considered. The theoretical results are compared to the results of satellite microwave monitoring of the Earth's atmosphere.
The effect of background turbulence on the propagation of large-scale flames
NASA Astrophysics Data System (ADS)
Matalon, Moshe
2008-12-01
This paper is based on an invited presentation at the Conference on Turbulent Mixing and Beyond held in the Abdus Salam International Center for Theoretical Physics, Trieste, Italy (August 2007). It consists of a summary of recent investigations aimed at understanding the nature and consequences of the Darrieus-Landau instability that is prominent in premixed combustion. It describes rigorous asymptotic methodologies used to simplify the propagation problem of multi-dimensional and time-dependent premixed flames in order to understand the nonlinear evolution of hydrodynamically unstable flames. In particular, it addresses the effect of background turbulent noise on the structure and propagation of large-scale flames.
NASA Technical Reports Server (NTRS)
Liu, J. T. C.
1986-01-01
Advances in the mechanics of boundary layer flow are reported. The physical problem of large-scale coherent structures in real, developing free turbulent shear flows is addressed from the nonlinear aspects of hydrodynamic stability. Whether fine-grained turbulence is present in the problem or absent, a small parameter is lacking. The problem is formulated on the basis of conservation principles, the dynamics being directed toward extracting the most physical information; it is emphasized, however, that approximations must also be involved.
Remarks on discrete and continuous large-scale models of DNA dynamics.
Klapper, I; Qian, H
1998-01-01
We present a comparison of the continuous versus discrete models of large-scale DNA conformation, focusing on issues of relevance to molecular dynamics. Starting from conventional expressions for elastic potential energy, we derive elastic dynamic equations in terms of Cartesian coordinates of the helical axis curve, together with a twist function representing the helical or excess twist. It is noted that the conventional potential energies for the two models are not consistent. In addition, we derive expressions for random Brownian forcing for the nonlinear elastic dynamics and discuss the nature of such forces in a continuous system. PMID:9591677
Analysis and design of robust decentralized controllers for nonlinear systems
Schoenwald, D.A.
1993-07-01
Decentralized control strategies for nonlinear systems are achieved via feedback linearization techniques. New results on optimization and parameter robustness of nonlinear systems are also developed. In addition, parametric uncertainty in large-scale systems is handled by sensitivity analysis and optimal control methods in a completely decentralized framework. This idea is applied to alleviate uncertainty in friction parameters for the gimbal joints on Space Station Freedom. As an example of decentralized nonlinear control, singular perturbation methods and distributed vibration damping are merged into a control strategy for a two-link flexible manipulator.
NASA Astrophysics Data System (ADS)
Shoemaker, Christine; Wan, Ying
2016-04-01
Optimization of nonlinear water resources management problems that involve a mixture of fixed costs (e.g. construction cost for a well) and variable costs (e.g. cost per gallon of water pumped) has not been well addressed, because prior algorithms for the resulting nonlinear mixed-integer problems have required many groundwater simulations (with different configurations of the decision variables), especially when the solution space is multimodal. In particular, heuristic methods like genetic algorithms have often been used in the water resources area, but they require so many groundwater simulations that only small systems have been solved. Hence there is a need for a method that reduces the number of expensive groundwater simulations. A recently published algorithm for nonlinear mixed-integer programming using surrogates was shown in this study to greatly reduce the computational effort for obtaining accurate answers to problems involving fixed costs for well construction as well as variable costs for pumping, because of a substantial reduction in the number of groundwater simulations required. Results are presented for a US EPA hazardous waste site. The nonlinear mixed-integer surrogate algorithm is general and can be used on other problems arising in hydrology, with open source codes in Matlab and Python ("pySOT" in Bitbucket).
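The core idea of the abstract above, fitting a cheap surrogate to a handful of expensive simulation results and searching the surrogate instead of the simulator, can be sketched in a few lines. This is a minimal illustration with a hypothetical one-dimensional cost curve standing in for the groundwater simulator; it is not the published pySOT algorithm, which handles integer variables and uses far more careful sampling strategies.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_sim(x):
    """Stand-in for a costly groundwater simulation (hypothetical
    1-D cost curve; the real problem would call a simulator)."""
    return (x - 0.3) ** 2 + 0.1 * np.sin(20 * x)

def fit_rbf(X, y, eps=3.0):
    """Fit a Gaussian RBF interpolant through the sampled points."""
    K = np.exp(-eps * (X[:, None] - X[None, :]) ** 2)
    w = np.linalg.solve(K + 1e-8 * np.eye(len(X)), y)  # small ridge for stability
    return lambda q: np.exp(-eps * (q[:, None] - X[None, :]) ** 2) @ w

# initial design: a handful of expensive evaluations
X = np.linspace(0.0, 1.0, 5)
y = expensive_sim(X)

for _ in range(15):
    surrogate = fit_rbf(X, y)
    cand = rng.uniform(0.0, 1.0, 500)         # cheap search on the surrogate
    x_new = cand[np.argmin(surrogate(cand))]
    X = np.append(X, x_new)                    # only one new expensive call
    y = np.append(y, expensive_sim(x_new))

best = X[np.argmin(y)]
```

Each loop iteration spends one simulator call where the surrogate currently predicts the lowest cost; practical surrogate optimizers additionally balance this exploitation against exploration of unsampled regions.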
Model-based optimal design of experiments - semidefinite and nonlinear programming formulations.
Duarte, Belmiro P M; Wong, Weng Kee; Oliveira, Nuno M C
2016-02-15
We use mathematical programming tools, such as Semidefinite Programming (SDP) and Nonlinear Programming (NLP)-based formulations to find optimal designs for models used in chemistry and chemical engineering. In particular, we employ local design-based setups in linear models and a Bayesian setup in nonlinear models to find optimal designs. In the latter case, Gaussian Quadrature Formulas (GQFs) are used to evaluate the optimality criterion averaged over the prior distribution for the model parameters. Mathematical programming techniques are then applied to solve the optimization problems. Because such methods require the design space be discretized, we also evaluate the impact of the discretization scheme on the generated design. We demonstrate the techniques for finding D-, A- and E-optimal designs using design problems in biochemical engineering and show the method can also be directly applied to tackle additional issues, such as heteroscedasticity in the model. Our results show that the NLP formulation produces highly efficient D-optimal designs but is computationally less efficient than that required for the SDP formulation. The efficiencies of the generated designs from the two methods are generally very close and so we recommend the SDP formulation in practice.
Model-based optimal design of experiments - semidefinite and nonlinear programming formulations
Duarte, Belmiro P.M.; Wong, Weng Kee; Oliveira, Nuno M.C.
2015-01-01
We use mathematical programming tools, such as Semidefinite Programming (SDP) and Nonlinear Programming (NLP)-based formulations to find optimal designs for models used in chemistry and chemical engineering. In particular, we employ local design-based setups in linear models and a Bayesian setup in nonlinear models to find optimal designs. In the latter case, Gaussian Quadrature Formulas (GQFs) are used to evaluate the optimality criterion averaged over the prior distribution for the model parameters. Mathematical programming techniques are then applied to solve the optimization problems. Because such methods require the design space be discretized, we also evaluate the impact of the discretization scheme on the generated design. We demonstrate the techniques for finding D-, A- and E-optimal designs using design problems in biochemical engineering and show the method can also be directly applied to tackle additional issues, such as heteroscedasticity in the model. Our results show that the NLP formulation produces highly efficient D-optimal designs but is computationally less efficient than that required for the SDP formulation. The efficiencies of the generated designs from the two methods are generally very close and so we recommend the SDP formulation in practice. PMID:26949279
TOPOLOGY OF A LARGE-SCALE STRUCTURE AS A TEST OF MODIFIED GRAVITY
Wang Xin; Chen Xuelei; Park, Changbom
2012-03-01
The genus of the isodensity contours is a robust measure of the topology of a large-scale structure, and it is relatively insensitive to nonlinear gravitational evolution, galaxy bias, and redshift-space distortion. We show that the growth of density fluctuations is scale dependent even in the linear regime in some modified gravity theories, which opens a new possibility of testing the theories observationally. We propose to use the genus of the isodensity contours, an intrinsic measure of the topology of the large-scale structure, as a statistic to be used in such tests. In Einstein's general theory of relativity, density fluctuations grow at the same rate on all scales in the linear regime, and the genus per comoving volume is almost conserved as structures grow homologously, so we expect that the genus-smoothing-scale relation is basically time independent. However, in some modified gravity models where structures grow with different rates on different scales, the genus-smoothing-scale relation should change over time. This can be used to test the gravity models with large-scale structure observations. We study the cases of the f(R) theory, DGP braneworld theory as well as the parameterized post-Friedmann models. We also forecast how the modified gravity models can be constrained with optical/IR or redshifted 21 cm radio surveys in the near future.
On the renormalization of the effective field theory of large scale structures
Pajer, Enrico; Zaldarriaga, Matias
2013-08-01
Standard perturbation theory (SPT) for large-scale matter inhomogeneities is unsatisfactory for at least three reasons: there is no clear expansion parameter since the density contrast is not small on all scales; it does not fully account for deviations at large scales from a perfect pressureless fluid induced by short-scale non-linearities; for generic initial conditions, loop corrections are UV-divergent, making predictions cutoff dependent and hence unphysical. The Effective Field Theory of Large Scale Structures successfully addresses all three issues. Here we focus on the third one and show explicitly that the terms induced by integrating out short scales, neglected in SPT, have exactly the right scale dependence to cancel all UV-divergences at one loop, and this should hold at all loops. A particularly clear example is an Einstein-de Sitter universe with no-scale initial conditions P_in ∼ k^n. After renormalizing the theory, we use self-similarity to derive a very simple result for the final power spectrum for any n, excluding two-loop corrections and higher. We show how the relative importance of different corrections depends on n. For n ∼ -1.5, relevant for our universe, pressure and dissipative corrections are more important than the two-loop corrections.
Optimal nonlinear excitation of decadal variability of the North Atlantic thermohaline circulation
NASA Astrophysics Data System (ADS)
Zu, Ziqing; Mu, Mu; Dijkstra, Henk A.
2013-11-01
Nonlinear development of salinity perturbations in the Atlantic thermohaline circulation (THC) is investigated with a three-dimensional ocean circulation model, using the conditional nonlinear optimal perturbation method. The results show two types of optimal initial perturbations of sea surface salinity, one associated with freshwater and the other with salinity. Both types of perturbations excite decadal variability of the THC. Under the same amplitude of initial perturbation, the decadal variation induced by the freshwater perturbation is much stronger than that by the salinity perturbation, suggesting that the THC is more sensitive to freshwater than salinity perturbation. As the amplitude of initial perturbation increases, the decadal variations become stronger for both perturbations. For salinity perturbations, recovery time of the THC to return to steady state gradually saturates with increasing amplitude, whereas this recovery time increases remarkably for freshwater perturbations. A nonlinear (advective) feedback between density and velocity anomalies is proposed to explain these characteristics of decadal variability excitation. The results are consistent with previous ones from simple box models, and highlight the importance of nonlinear feedback in decadal THC variability.
Discrete approximations to optimal trajectories using direct transcription and nonlinear programming
NASA Technical Reports Server (NTRS)
Enright, Paul J.; Conway, Bruce A.
1990-01-01
A recently developed method for solving optimal trajectory problems uses a piecewise-polynomial representation of the state and control variables, enforces the equations of motion via a collocation procedure, and thus approximates the original calculus-of-variations problem with a nonlinear-programming problem, which is solved numerically. This paper identifies this method as a direct transcription method and proceeds to investigate the relationship between the original optimal-control problem and the nonlinear-programming problem. The discretized adjoint equation of the collocation method is found to have deficient accuracy, and an alternate scheme which discretizes the equations of motion using an explicit Runge-Kutta parallel-shooting approach is developed. Both methods are applied to finite-thrust spacecraft trajectory problems, including a low-thrust escape spiral, a three-burn rendezvous, and a low-thrust transfer to the moon.
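The transcription step described above, replacing the continuous optimal-control problem by a finite-dimensional NLP whose equality constraints are the collocation defects, can be sketched on a toy problem. The sketch below uses trapezoidal collocation on a minimum-effort double-integrator transfer and scipy's SLSQP solver; it is an illustration of the transcription idea only, not the paper's spacecraft dynamics or its Runge-Kutta parallel-shooting variant.

```python
import numpy as np
from scipy.optimize import minimize

N = 20                      # grid intervals on t in [0, 1]
h = 1.0 / N
n = N + 1                   # nodes

def unpack(z):
    return z[:n], z[n:2*n], z[2*n:]     # position, velocity, control

def objective(z):
    _, _, u = unpack(z)
    return h * np.sum((u[:-1]**2 + u[1:]**2) / 2)   # trapezoidal integral of u^2

def defects(z):
    """Collocation defects: trapezoidal rule applied to x' = v, v' = u,
    plus boundary conditions for a rest-to-rest transfer x: 0 -> 1."""
    x, v, u = unpack(z)
    dx = x[1:] - x[:-1] - h/2 * (v[1:] + v[:-1])
    dv = v[1:] - v[:-1] - h/2 * (u[1:] + u[:-1])
    bc = [x[0], v[0], x[-1] - 1.0, v[-1]]
    return np.concatenate([dx, dv, bc])

z0 = np.zeros(3 * n)        # crude initial guess
sol = minimize(objective, z0, constraints={'type': 'eq', 'fun': defects},
               method='SLSQP', options={'maxiter': 200})
x, v, u = unpack(sol.x)
```

The NLP solver sees only algebraic variables and constraints; the dynamics survive solely as the defect equations, which is exactly the direct-transcription structure the abstract analyzes. For this problem the continuous optimum is u(t) = 6 - 12t with cost 12, and the discrete solution reproduces it to within the O(h^2) discretization error.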
A non-linear camera calibration with modified teaching-learning-based optimization algorithm
NASA Astrophysics Data System (ADS)
Zhang, Buyang; Yang, Hua; Yang, Shuo
2015-12-01
In this paper, we put forward a novel approach based on a hierarchical teaching-learning-based optimization (HTLBO) algorithm for nonlinear camera calibration. This algorithm simulates the teaching-learning interaction between the teacher and the learners of a classroom. Unlike traditional calibration approaches, the proposed technique can find the near-optimal solution without accurate initial parameter estimates (only very loose parameter bounds are needed). With the introduction of a cascade of teaching, the convergence speed is rapid and the global search ability is improved. Results from our study demonstrate the excellent performance of the proposed technique in terms of convergence, accuracy, and robustness. The HTLBO can also be used to solve many other complex nonlinear calibration optimization problems owing to its good portability.
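For readers unfamiliar with TLBO, the basic (non-hierarchical) algorithm is easy to state: a teacher phase moves the population toward its best member, and a learner phase lets random pairs of learners pull toward whichever of the two is better. The sketch below is plain TLBO applied to a stand-in sphere objective; it is not the paper's hierarchical variant, and the objective is a placeholder for an actual reprojection-error cost.

```python
import numpy as np

rng = np.random.default_rng(1)

def tlbo(f, bounds, pop=20, iters=50):
    """Basic teaching-learning-based optimization (minimization)."""
    lo, hi = bounds
    dim = len(lo)
    X = rng.uniform(lo, hi, (pop, dim))
    y = np.apply_along_axis(f, 1, X)
    for _ in range(iters):
        # teacher phase: shift the class toward the best learner
        teacher = X[np.argmin(y)]
        TF = rng.integers(1, 3)                        # teaching factor in {1, 2}
        Xn = np.clip(X + rng.random((pop, dim)) * (teacher - TF * X.mean(axis=0)),
                     lo, hi)
        yn = np.apply_along_axis(f, 1, Xn)
        better = yn < y
        X[better], y[better] = Xn[better], yn[better]  # greedy acceptance
        # learner phase: each learner interacts with a random partner
        j = rng.permutation(pop)
        step = np.where((y < y[j])[:, None], X - X[j], X[j] - X)
        Xn = np.clip(X + rng.random((pop, dim)) * step, lo, hi)
        yn = np.apply_along_axis(f, 1, Xn)
        better = yn < y
        X[better], y[better] = Xn[better], yn[better]
    return X[np.argmin(y)], y.min()

# hypothetical 4-parameter calibration cost, modeled by the sphere function
best_x, best_y = tlbo(lambda x: np.sum(x**2),
                      (np.full(4, -5.0), np.full(4, 5.0)))
```

A notable design feature, and the reason the abstract stresses loose parameter bounds, is that TLBO has no algorithm-specific tuning constants beyond population size and iteration count.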
Discrete approximations to optimal trajectories using direct transcription and nonlinear programming
NASA Astrophysics Data System (ADS)
Enright, Paul J.; Conway, Bruce A.
A recently developed method for solving optimal trajectory problems uses a piecewise-polynomial representation of the state and control variables, enforces the equations of motion via a collocation procedure, and thus approximates the original calculus-of-variations problem with a nonlinear-programming problem, which is solved numerically. This paper identifies this method as a direct transcription method and proceeds to investigate the relationship between the original optimal-control problem and the nonlinear-programming problem. The discretized adjoint equation of the collocation method is found to have deficient accuracy, and an alternate scheme which discretizes the equations of motion using an explicit Runge-Kutta parallel-shooting approach is developed. Both methods are applied to finite-thrust spacecraft trajectory problems, including a low-thrust escape spiral, a three-burn rendezvous, and a low-thrust transfer to the moon.
An Adaptive Multiscale Finite Element Method for Large Scale Simulations
2015-09-28
Non-linear fine-scale solutions are embedded in the global scale using the partition of unity framework of the Generalized FEM, within a non-linear iterative solution process. Damage information...
The solution of singular optimal control problems using direct collocation and nonlinear programming
NASA Astrophysics Data System (ADS)
Downey, James R.; Conway, Bruce A.
1992-08-01
This paper describes work on the determination of optimal rocket trajectories which may include singular arcs. In recent years direct collocation and nonlinear programming has proven to be a powerful method for solving optimal control problems. Difficulties in the application of this method can occur if the problem is singular. Techniques exist for solving singular problems indirectly using the associated adjoint formulation. Unfortunately, the adjoints are not a part of the direct formulation. It is shown how adjoint information can be obtained from the direct method to allow the solution of singular problems.
On Managing the Use of Surrogates in General Nonlinear Optimization and MDO
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia M.
1998-01-01
This paper is concerned with a trust region approximation management framework (AMF) for solving the nonlinear programming problem in general and multidisciplinary optimization problems in particular. The intent of the AMF methodology is to facilitate the solution of optimization problems with high-fidelity models. While such models are designed to approximate the physical phenomena they describe to a high degree of accuracy, their use in a repetitive procedure, for example in the iterations of an optimization or search algorithm, is prohibitively expensive. An improvement in design with lower-fidelity, cheaper models, however, does not guarantee a corresponding improvement for the higher-fidelity problem. The AMF methodology proposed here is based on a class of multilevel methods for constrained optimization and is designed to manage the use of variable-fidelity approximations or models in a systematic way that assures convergence to critical points of the original high-fidelity problem.
A differentiable reformulation for E-optimal design of experiments in nonlinear dynamic biosystems.
Telen, Dries; Van Riet, Nick; Logist, Flip; Van Impe, Jan
2015-06-01
Informative experiments are highly valuable for estimating parameters in nonlinear dynamic bioprocesses. Techniques for optimal experiment design ensure the systematic design of such informative experiments. The E-criterion, which can be used as the objective function in optimal experiment design, requires the maximization of the smallest eigenvalue of the Fisher information matrix. However, one problem with the minimal-eigenvalue function is that it can be nondifferentiable. In addition, no closed-form expression exists for the computation of the eigenvalues of a matrix larger than 4 by 4. As eigenvalues are normally computed with iterative methods, state-of-the-art optimal control solvers are not able to exploit automatic differentiation to compute the derivatives with respect to the decision variables. In the current paper a reformulation strategy from the field of convex optimization is suggested to circumvent these difficulties. This reformulation requires the inclusion of a matrix inequality constraint involving positive semidefiniteness. In this paper, this positive semidefiniteness constraint is imposed via Sylvester's criterion. As a result the maximization of the minimum-eigenvalue function can be formulated in standard optimal control solvers through the addition of nonlinear constraints. The presented methodology is successfully illustrated with a case study from the field of predictive microbiology.
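For a small model the E-criterion is cheap to evaluate directly, which makes a brute-force sketch possible. The example below assumes a two-parameter exponential model y = theta1 * exp(-theta2 * t) with unit-variance observations and hypothetical nominal parameter values; it grid-searches two sampling times rather than using the reformulation described in the abstract.

```python
import numpy as np

theta = np.array([1.0, 0.5])           # nominal parameter values (assumed)

def sensitivities(t, th=theta):
    """dy/dtheta for the model y = th[0] * exp(-th[1] * t)."""
    e = np.exp(-th[1] * t)
    return np.array([e, -th[0] * t * e])

def fim(times):
    """Fisher information matrix for unit-variance observations."""
    M = np.zeros((2, 2))
    for t in times:
        s = sensitivities(t)
        M += np.outer(s, s)
    return M

def e_criterion(times):
    return np.linalg.eigvalsh(fim(times))[0]   # smallest eigenvalue

# crude grid search over two sampling times in (0, 10]
grid = np.linspace(0.1, 10.0, 60)
best = max((e_criterion([a, b]), a, b) for a in grid for b in grid)
```

Note that placing both samples at the same time makes the information matrix rank deficient, so the smallest eigenvalue drops to zero; the E-criterion automatically pushes the design toward well-separated, informative sampling times.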
Finding Bayesian Optimal Designs for Nonlinear Models: A Semidefinite Programming-Based Approach.
Duarte, Belmiro P M; Wong, Weng Kee
2015-08-01
This paper uses semidefinite programming (SDP) to construct Bayesian optimal designs for nonlinear regression models. The setup here extends the formulation of the optimal design problem as an SDP problem from linear to nonlinear models. Gaussian quadrature formulas (GQF) are used to compute the expectation in the Bayesian design criterion, such as D-, A- or E-optimality. As an illustrative example, we demonstrate the approach using the power-logistic model and compare with results in the literature. Additionally, we investigate how the optimal design is impacted by different discretising schemes for the design space, different amounts of uncertainty in the parameter values, different choices of GQF and different prior distributions for the vector of model parameters, including normal priors with and without correlated components. Further applications to find Bayesian D-optimal designs with two regressors for a logistic model and a two-variable generalised linear model with a gamma distributed response are discussed, and some limitations of our approach are noted.
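The Gaussian-quadrature step, replacing the prior expectation in the Bayesian criterion by a weighted sum over Hermite nodes, can be sketched independently of the SDP machinery. The sketch assumes a univariate normal prior; the model and nominal values are illustrative, not taken from the paper.

```python
import numpy as np

# Gauss-Hermite nodes and weights for integrals against exp(-x^2)
nodes, weights = np.polynomial.hermite.hermgauss(7)

def prior_expectation(g, mu, sigma):
    """E[g(theta)] for theta ~ N(mu, sigma^2), via the change of
    variables theta = mu + sqrt(2) * sigma * x."""
    th = mu + np.sqrt(2.0) * sigma * nodes
    return np.sum(weights * g(th)) / np.sqrt(np.pi)

# sanity check against the closed form E[theta^2] = mu^2 + sigma^2
val = prior_expectation(lambda th: th**2, mu=1.0, sigma=0.5)

# expected log-information for a one-parameter model y = exp(-theta * t)
# sampled once at time t (Fisher information t^2 * exp(-2*theta*t))
def bayes_crit(t, mu=1.0, sigma=0.2):
    return prior_expectation(lambda th: np.log(t**2 * np.exp(-2 * th * t)),
                             mu, sigma)
```

With 7 nodes the rule integrates polynomials up to degree 13 exactly against the normal prior, so low-order criteria are computed to machine precision at negligible cost, which is what makes the GQF approach attractive inside an optimization loop.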
Optimal control for nonlinear dynamical system of microbial fed-batch culture
NASA Astrophysics Data System (ADS)
Liu, Chongyang
2009-10-01
In fed-batch culture of glycerol bio-dissimilation to 1,3-propanediol (1,3-PD), the aim of adding glycerol is to obtain as much 1,3-PD as possible. So a proper feeding rate is required during the process. Taking the concentration of 1,3-PD at the terminal time as the performance index and the feeding rate of glycerol as the control function, we propose an optimal control model subject to a nonlinear dynamical system and constraints of continuous state and non-stationary control. A computational approach is constructed to seek the solution of the above model in two aspects. On the one hand we transcribe the optimal control model into an unconstrained one based on the penalty functions and an extension of the state space; on the other hand, by approximating the control function with simple functions, we transform the unconstrained optimal control problem into a sequence of nonlinear programming problems, which can be solved using gradient-based optimization techniques. The convergence analysis of this approximation is also investigated. Numerical results show that, by employing the optimal control policy, the concentration of 1,3-PD at the terminal time can be increased considerably.
Nonlinear stabilization for a class of time delay systems via inverse optimality approach.
Ordaz, Patricio; Santos-Sánchez, Omar-Jacobo; Rodríguez-Guerrero, Liliam; González-Facundo, Alberto
2017-03-01
This paper is devoted to obtaining a stabilizing optimal nonlinear controller based on the well-known Control Lyapunov-Krasovskii Functional (CLKF) approach, aimed at solving the inverse optimality problem for a class of nonlinear time delay systems. To determine sufficient conditions for the solution of Bellman's equation for the system under consideration, the CLKF and the inverse optimality approach are considered in this paper. In comparison with previous results, this scheme allows us to obtain less conservative controllers, implying energy saving (in terms of average power consumption for a specific thermo-electrical process). Sufficient delay-independent criteria in terms of the CLKF are obtained such that the closed-loop nonlinear time-delay system is guaranteed to be locally asymptotically stable. To illustrate the effectiveness of the theoretical results, a comparative study with an industrial PID controller tuned by the Ziegler-Nichols methodology (Z-N) and a robust PID tuned using the D-partition method is presented through online experimental tests on an atmospheric drying process with time delay in its dynamics.
NASA Astrophysics Data System (ADS)
Hocker, David Lance
The control of quantum systems occurs across a broad range of length and energy scales in modern science, and efforts have demonstrated that locating suitable controls to perform a range of objectives has been widely successful. The justification for this success arises from a favorable topology of a quantum control landscape, defined as a mapping of the controls to a cost function measuring the success of the operation. This is summarized in the landscape principle that no suboptimal extrema exist on the landscape for well-suited control problems, explaining a trend of successful optimizations in both theory and experiment. This dissertation explores what additional lessons may be gleaned from the quantum control landscape through numerical and theoretical studies. The first topic examines the experimentally relevant problem of assessing and reducing disturbances due to noise. The local curvature of the landscape is found to play an important role on noise effects in the control of targeted quantum unitary operations, and provides a conceptual framework for assessing robustness to noise. Software for assessing noise effects in quantum computing architectures was also developed and applied to survey the performance of current quantum control techniques for quantum computing. A lack of competition between robustness and perfect unitary control operation was discovered to fundamentally limit noise effects, and highlights a renewed focus upon system engineering for reducing noise. This convergent behavior generally arises for any secondary objective in the situation of high primary objective fidelity. The other dissertation topic examines the utility of quantum control for a class of nonlinear Hamiltonians not previously considered under the landscape principle. Nonlinear Schrodinger equations are commonly used to model the dynamics of Bose-Einstein condensates (BECs), one of the largest known quantum objects. Optimizations of BEC dynamics were performed in which the
Cost Distribution of Environmental Flow Demands in a Large Scale Multi-Reservoir System
NASA Astrophysics Data System (ADS)
Marques, G.; Tilmant, A.
2014-12-01
This paper investigates the recovery of a prescribed flow regime through reservoir system reoperation, focusing on the associated costs and losses imposed on different power plants depending on flows, power plant and reservoir characteristics, and system topology. In large-scale reservoir systems such cost distribution is not trivial, and it should be properly evaluated to identify coordinated operating solutions that avoid penalizing a single reservoir. The methods combine an efficient stochastic dual dynamic programming algorithm for reservoir optimization subject to environmental flow targets with specific magnitude, duration and return period, whose effects on fish recruitment are already known. Results indicate that the distribution of the effect of meeting the environmental flow demands throughout the reservoir cascade differs widely; in some reservoirs power production and revenue increase, while in others they are reduced. Most importantly, for the example system modeled here (10 reservoirs in the Parana River basin, Brazil), meeting the target environmental flows was possible without reducing the total energy produced in the year, at a cost of $25 million/year in foregone hydropower revenues (a 3% reduction). Finally, the results and methods are useful in (a) quantifying the foregone hydropower and revenues resulting from meeting a specific environmental flow demand, (b) identifying the distribution and reallocation of the foregone hydropower and revenue across a large-scale system, and (c) identifying optimal reservoir operating strategies to meet environmental flow demands in a large-scale multi-reservoir system.
Consistency relations for large-scale structures with primordial non-Gaussianities
NASA Astrophysics Data System (ADS)
Valageas, Patrick; Taruya, Atsushi; Nishimichi, Takahiro
2017-01-01
We investigate how the consistency relations of large-scale structures are modified when the initial density field is not Gaussian. We consider both scenarios where the primordial density field can be written as a nonlinear functional of a Gaussian field and more general scenarios where the probability distribution of the primordial density field can be expanded around the Gaussian distribution, up to all orders over δ_{L0}. Working at linear order over the non-Gaussianity parameters f_NL^{(n)} or S_n, we find that the consistency relations for the matter density fields are modified as they include additional contributions that involve all-order mixed linear-nonlinear correlations ⟨∏δ_L∏δ⟩. We derive the conditions needed to recover the simple Gaussian form of the consistency relations. This corresponds to scenarios that become Gaussian in the squeezed limit. Our results also apply to biased tracers and velocity or momentum cross-correlations.
Automating large-scale power plant systems: a perspective and philosophy
Kisner, R A; Raju, G V.S.
1984-12-01
This report is intended to convey a philosophy for the design of large-scale control systems that will guide control engineers and managers in the development of integrated, intelligent, flexible control systems. A liquid-metal reactor, the large-scale prototype breeder, is the focus of the examples and analyses in the report. A structure for the discontinuous and continuous control aspects is presented in sufficient detail to form the foundation for future expanded development. The system diagramming techniques used are especially useful because they are both an aid to control design and a specification for software design. This report develops a continuous-system supervisory controller that adds the capability for optimal coordination and control to existing supervisory control designs. This development makes possible the global minimization of variations in key system parameters during transients.
NASA Astrophysics Data System (ADS)
Xu, Yanling; Lin, Qiuhong; Wang, Xingze; Li, Lin; Cong, Qiang; Pan, Bo
2017-01-01
The deployable structure is critical to the overall success of a space mission. This paper introduces a large-scale spatial deployable structure (SDS), developed to deploy and support payload panels in a precise configuration once on orbit. Research on the design, kinematics, and dynamics analysis of the SDS prototype system is presented. A geometric construction method and a bar-group method are adopted to analyze the dimensions and coordinates of the SDS, yielding a well-determined mathematical model that raises productivity and efficiency during optimization and analysis work. Because of the large scale of the truss structures, flexible multibody dynamic simulations are developed, which capture stress transfer and kinematic behavior much more faithfully. Deployment experiments on the SDS prototype system confirm the correctness and validity of the flexible multibody simulations.
Practical large scale synthesis of half-esters of malonic acid.
Niwayama, Satomi; Cho, Hanjoung
2009-05-01
A practical large-scale synthesis of monomethyl malonate and monoethyl malonate, which are among the most commonly used half-esters in organic synthesis, is described, applying the highly efficient selective monohydrolysis of symmetric diesters that we reported previously. The optimal conditions with regard to the type of base, number of equivalents, co-solvents, and reaction time have been examined for large-scale reactions. Monomethyl malonate and monoethyl malonate were obtained in high yields with near 100% purity within only half a day. The conditions of this selective monohydrolysis are environmentally benign and straightforward, requiring only water, a small proportion of a volatile co-solvent, and inexpensive reagents, and producing no hazardous by-products; the reaction is therefore expected to have broad synthetic utility in process chemistry.
Combining Nearest Neighbor Search with Tabu Search for Large-Scale Vehicle Routing Problem
NASA Astrophysics Data System (ADS)
Du, Lingling; He, Ruhan
The vehicle routing problem is a classical problem in operations research, where the objective is to design least-cost routes for a fleet of identical capacitated vehicles to service geographically scattered customers. In this paper, we present a new and effective hybrid metaheuristic algorithm for the large-scale vehicle routing problem. The algorithm combines the strengths of the well-known Nearest Neighbor Search and Tabu Search in a two-stage procedure. More precisely, Nearest Neighbor Search is used to construct initial routes in the first stage, and Tabu Search is then used to perform intra-route and inter-route optimization in the second stage. The presented algorithm is specifically designed for large-scale problems. The computational experiments were carried out on a standard benchmark and on a real dataset with 6772 tobacco customers. The results demonstrate that the suggested method is highly competitive.
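The first of the two stages described above, greedy route construction under a vehicle capacity limit, can be sketched as follows. This is an illustrative minimal version, not the paper's implementation; the function name and toy instance are assumptions.

```python
import math

def nearest_neighbor_routes(depot, customers, demands, capacity):
    """Stage one: repeatedly visit the nearest unserved customer that
    still fits within the vehicle's remaining capacity; open a new
    route when no feasible customer remains.
    `customers` maps id -> (x, y); `demands` maps id -> demand."""
    unserved = set(customers)
    routes = []
    while unserved:
        route, load, pos = [], 0, depot
        while True:
            feasible = [c for c in unserved if load + demands[c] <= capacity]
            if not feasible:
                break
            nxt = min(feasible, key=lambda c: math.dist(pos, customers[c]))
            route.append(nxt)
            load += demands[nxt]
            pos = customers[nxt]
            unserved.remove(nxt)
        routes.append(route)
    return routes

routes = nearest_neighbor_routes((0, 0), {1: (1, 0), 2: (2, 0), 3: (0, 5)},
                                 {1: 1, 2: 1, 3: 1}, capacity=2)
print(routes)  # [[1, 2], [3]]
```

In the second stage, a Tabu Search would then perturb these routes (e.g. by moving or swapping customers within and between routes) while forbidding recently reversed moves.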
The Large Scale Synthesis of Aligned Plate Nanostructures
NASA Astrophysics Data System (ADS)
Zhou, Yang; Nash, Philip; Liu, Tian; Zhao, Naiqin; Zhu, Shengli
2016-07-01
We propose a novel technique for the large-scale synthesis of aligned-plate nanostructures that are self-assembled and self-supporting. The synthesis technique involves developing nanoscale two-phase microstructures through discontinuous precipitation followed by selective etching to remove one of the phases. The method may be applied to any alloy system in which the discontinuous precipitation transformation goes to completion. The resulting structure may have many applications in catalysis, filtering and thermal management depending on the phase selection and added functionality through chemical reaction with the retained phase. The synthesis technique is demonstrated using the discontinuous precipitation of a γ‧ phase, (Ni, Co)3Al, followed by selective dissolution of the γ matrix phase. The production of the nanostructure requires heat treatments on the order of minutes and can be performed on a large scale making this synthesis technique of great economic potential.
Large-scale linear nonparallel support vector machine solver.
Tian, Yingjie; Ping, Yuan
2014-02-01
Twin support vector machines (TWSVMs), as representative nonparallel hyperplane classifiers, have shown effectiveness over standard SVMs in some respects. However, they still have some serious defects restricting their further study and real applications: (1) they have to compute and store inverse matrices before training, which is intractable for many applications where the data have a huge number of instances as well as features; (2) TWSVMs lose sparseness by using a quadratic loss function that makes the proximal hyperplane close enough to the class itself. This paper proposes a sparse linear nonparallel support vector machine, termed L1-NPSVM, to deal with large-scale data based on an efficient solver, the dual coordinate descent (DCD) method. Both theoretical analysis and experiments indicate that our method is not only suitable for large-scale problems, but also performs as well as TWSVMs and SVMs.
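To make the DCD idea concrete, here is a minimal sketch applied to a standard linear L1-loss SVM rather than the paper's L1-NPSVM formulation; the single-coordinate update and box bounds below are the textbook DCD ones, stated as an assumption about the general method, not as the paper's solver.

```python
import numpy as np

def dcd_linear_svm(X, y, C=1.0, epochs=20, seed=0):
    """Dual coordinate descent for a linear L1-loss SVM,
    min_w 0.5*||w||^2 + C * sum_i max(0, 1 - y_i w.x_i).
    Each dual variable alpha_i gets a closed-form clipped update."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)
    w = np.zeros(d)
    Qii = np.einsum('ij,ij->i', X, X)       # diagonal of the dual Hessian
    for _ in range(epochs):
        for i in rng.permutation(n):
            if Qii[i] == 0:
                continue
            g = y[i] * (X[i] @ w) - 1        # partial gradient at alpha_i
            new = min(max(alpha[i] - g / Qii[i], 0.0), C)
            w += (new - alpha[i]) * y[i] * X[i]
            alpha[i] = new
    return w

# separable toy data: label follows the sign of the first feature
X = np.array([[2.0, 1.0], [1.5, -0.5], [-2.0, 0.3], [-1.0, -1.2]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w = dcd_linear_svm(X, y)
print((np.sign(X @ w) == y).all())  # expect: True
```

The appeal for large-scale data is that each update touches only one training example and the weight vector, so neither a kernel matrix nor any matrix inverse is ever formed.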
Prototype Vector Machine for Large Scale Semi-Supervised Learning
Zhang, Kai; Kwok, James T.; Parvin, Bahram
2009-04-29
Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we propose the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
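The low-rank role played by the prototypes can be illustrated with a generic Nyström-style sketch: m prototype points stand in for the full n x n kernel matrix. This is a standard construction given for illustration, not the PVM's exact prototype-selection scheme.

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    """Gaussian (RBF) kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom(X, prototypes, gamma=0.5, jitter=1e-10):
    """Nystrom approximation K ~= C W^+ C^T built from m prototypes:
    only an n x m and an m x m block are ever formed."""
    C = rbf(X, prototypes, gamma)             # n x m cross-kernel
    W = rbf(prototypes, prototypes, gamma)    # m x m prototype kernel
    W_pinv = np.linalg.pinv(W + jitter * np.eye(len(prototypes)))
    return C @ W_pinv @ C.T
```

When the prototypes are the training points themselves, the approximation is exact (K K⁺ K = K), a handy sanity check; with m << n prototypes the memory cost drops from O(n²) to O(nm).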
Electron drift in a large scale solid xenon
Yoo, J.; Jaskierny, W. F.
2015-08-21
A study of charge drift in large-scale optically transparent solid xenon is reported. A pulsed high-power xenon light source is used to liberate electrons from a photocathode. The drift speeds of the electrons are measured using an 8.7 cm long electrode in both the liquid and solid phases of xenon. In the liquid phase (163 K), the drift speed is 0.193 ± 0.003 cm/μs, while the drift speed in the solid phase (157 K) is 0.397 ± 0.006 cm/μs, at 900 V/cm over 8.0 cm of uniform electric field. Furthermore, it is demonstrated that the electron drift speed in large-scale solid xenon is a factor of two faster than that in the liquid.
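The factor-of-two statement follows directly from the two quoted speeds; a quick check using standard first-order error propagation for a ratio (the formula is a textbook assumption, not taken from the paper):

```python
def ratio_with_uncertainty(v1, dv1, v2, dv2):
    """Ratio v1/v2 of two measured values with first-order
    (quadrature) propagation of their uncertainties."""
    r = v1 / v2
    dr = r * ((dv1 / v1) ** 2 + (dv2 / v2) ** 2) ** 0.5
    return r, dr

# solid-phase vs liquid-phase drift speeds quoted in the abstract
r, dr = ratio_with_uncertainty(0.397, 0.006, 0.193, 0.003)
print(f"solid/liquid drift-speed ratio = {r:.2f} +/- {dr:.2f}")
# solid/liquid drift-speed ratio = 2.06 +/- 0.04
```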
NASA Technical Reports Server (NTRS)
Swanson, Gregory T.; Cassell, Alan M.
2011-01-01
Hypersonic Inflatable Aerodynamic Decelerator (HIAD) technology is currently being considered for multiple atmospheric entry applications as the limitations of traditional entry vehicles have been reached. The Inflatable Re-entry Vehicle Experiment (IRVE) has successfully demonstrated this technology as a viable candidate with a sub-orbital flight of a 3.0 m diameter vehicle. To further this technology, large-scale HIADs (6.0-8.5 m) must be developed and tested. To characterize the performance of large-scale HIAD technology, new instrumentation concepts must be developed to accommodate the flexible nature of the inflatable aeroshell. Many of the concepts under consideration for the HIAD FY12 subsonic wind tunnel test series are discussed below.
GAIA: A WINDOW TO LARGE-SCALE MOTIONS
Nusser, Adi; Branchini, Enzo; Davis, Marc
2012-08-10
Using redshifts as a proxy for galaxy distances, estimates of the two-dimensional (2D) transverse peculiar velocities of distant galaxies could be obtained from future measurements of proper motions. We provide the mathematical framework for analyzing 2D transverse motions and show that they offer several advantages over traditional probes of large-scale motions. They are completely independent of any intrinsic relations between galaxy properties; hence, they are essentially free of selection biases. They are free from homogeneous and inhomogeneous Malmquist biases that typically plague distance indicator catalogs. They provide additional information to traditional probes that yield line-of-sight peculiar velocities only. Further, because of their 2D nature, fundamental questions regarding vorticity of large-scale flows can be addressed. Gaia, for example, is expected to provide proper motions of at least bright galaxies with high central surface brightness, making proper motions a likely contender for traditional probes based on current and future distance indicator measurements.
Long gradient mode and large-scale structure observables
NASA Astrophysics Data System (ADS)
Allahyari, Alireza; Firouzjaee, Javad T.
2017-03-01
We extend the study of long-mode perturbations to other large-scale observables such as cosmic rulers, galaxy-number counts, and halo bias. The long mode is a pure gradient mode that is still outside an observer's horizon. We insist that gradient-mode effects on observables vanish. It is also crucial that the expressions for observables are relativistic. This allows us to show that the effects of a gradient mode on the large-scale observables vanish identically in a relativistic framework. To study the potential modulation effect of the gradient mode on halo bias, we derive a consistency condition to the first order in gradient expansion. We find that the matter variance at a fixed physical scale is not modulated by the long gradient mode perturbations when the consistency condition holds. This shows that the contribution of long gradient modes to bias vanishes in this framework.
Large Scale Deformation of the Western US Cordillera
NASA Technical Reports Server (NTRS)
Bennett, Richard A.
2001-01-01
Destructive earthquakes occur throughout the western US Cordillera (WUSC), not just within the San Andreas fault zone. But because we do not understand the present-day large-scale deformations of the crust throughout the WUSC, our ability to assess the potential for seismic hazards in this region remains severely limited. To address this problem, we are using a large collection of Global Positioning System (GPS) networks which spans the WUSC to precisely quantify present-day large-scale crustal deformations in a single uniform reference frame. Our work can roughly be divided into an analysis of the GPS observations to infer the deformation field across and within the entire plate boundary zone and an investigation of the implications of this deformation field regarding plate boundary dynamics.
LARGE-SCALE MOTIONS IN THE PERSEUS GALAXY CLUSTER
Simionescu, A.; Werner, N.; Urban, O.; Allen, S. W.; Fabian, A. C.; Sanders, J. S.; Mantz, A.; Nulsen, P. E. J.; Takei, Y.
2012-10-01
By combining large-scale mosaics of ROSAT PSPC, XMM-Newton, and Suzaku X-ray observations, we present evidence for large-scale motions in the intracluster medium of the nearby, X-ray bright Perseus Cluster. These motions are suggested by several alternating and interleaved X-ray bright, low-temperature, low-entropy arcs located along the east-west axis, at radii ranging from ~10 kpc to over a Mpc. Thermodynamic features qualitatively similar to these have previously been observed in the centers of cool-core clusters, and were successfully modeled as a consequence of the gas sloshing/swirling motions induced by minor mergers. Our observations indicate that such sloshing/swirling can extend out to larger radii than previously thought, on scales approaching the virial radius.
The CLASSgal code for relativistic cosmological large scale structure
Dio, Enea Di; Montanari, Francesco; Durrer, Ruth; Lesgourgues, Julien
2013-11-01
We present accurate and efficient computations of large-scale structure observables, obtained with a modified version of the CLASS code which is made publicly available. This code includes all relativistic corrections and computes both the power spectrum C_ℓ(z_1, z_2) and the corresponding correlation function ξ(θ, z_1, z_2) of the matter density and the galaxy number fluctuations in linear perturbation theory. For Gaussian initial perturbations, these quantities contain the full information encoded in the large-scale matter distribution at the level of linear perturbation theory. We illustrate the usefulness of our code for cosmological parameter estimation through a few simple examples.
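In linear theory the correlation function follows from the multipoles through the standard Legendre sum ξ(θ) = Σ_ℓ (2ℓ+1)/(4π) C_ℓ P_ℓ(cos θ); a minimal numerical sketch of that relation (a generic formula, not the CLASSgal implementation):

```python
import numpy as np
from numpy.polynomial import legendre

def xi_from_cl(cl, theta):
    """Angular correlation function from a multipole spectrum:
    xi(theta) = sum_l (2l+1)/(4*pi) * C_l * P_l(cos(theta)).
    `cl` is a sequence [C_0, C_1, ...]; `theta` is in radians."""
    ells = np.arange(len(cl))
    coeffs = (2 * ells + 1) / (4 * np.pi) * np.asarray(cl)
    return legendre.legval(np.cos(theta), coeffs)

# a pure monopole gives a theta-independent correlation of 1/(4*pi)
print(xi_from_cl([1.0], 0.3))  # ~0.0796
```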
A Cloud Computing Platform for Large-Scale Forensic Computing
NASA Astrophysics Data System (ADS)
Roussev, Vassil; Wang, Liqiang; Richard, Golden; Marziale, Lodovico
The timely processing of massive digital forensic collections demands the use of large-scale distributed computing resources and the flexibility to customize the processing performed on the collections. This paper describes MPI MapReduce (MMR), an open implementation of the MapReduce processing model that outperforms traditional forensic computing techniques. MMR provides linear scaling for CPU-intensive processing and super-linear scaling for indexing-related workloads.
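The MapReduce processing model that MMR implements can be illustrated with a toy word-count sketch; MMR itself is an MPI-based implementation, so the pure-Python map/shuffle/reduce phases and sample data below are only illustrative of the model.

```python
from collections import defaultdict
from itertools import chain

def map_phase(chunk):
    """Map: emit (key, 1) for every word in one input chunk."""
    return [(word, 1) for word in chunk.split()]

def shuffle(pairs):
    """Shuffle: group all intermediate values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: combine each key's values into a final count."""
    return {key: sum(values) for key, values in groups.items()}

chunks = ["forensic image index", "index forensic data"]
pairs = chain.from_iterable(map_phase(c) for c in chunks)
counts = reduce_phase(shuffle(pairs))
print(counts["forensic"], counts["index"])  # 2 2
```

In a distributed setting the map calls run in parallel across workers and the shuffle moves data between them, which is where the scaling behavior reported in the abstract comes from.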
Large-Scale Weather Disturbances in Mars’ Southern Extratropics
NASA Astrophysics Data System (ADS)
Hollingsworth, Jeffery L.; Kahre, Melinda A.
2015-11-01
Between late autumn and early spring, Mars’ middle and high latitudes support strong mean thermal gradients between the tropics and poles. Observations from both the Mars Global Surveyor (MGS) and Mars Reconnaissance Orbiter (MRO) indicate that this strong baroclinicity supports intense, large-scale eastward-traveling weather systems (i.e., transient synoptic-period waves). These extratropical weather disturbances are key components of the global circulation, acting as agents in the transport of heat, momentum, and generalized scalar/tracer quantities (e.g., atmospheric dust, water vapor, and ice clouds). The character of large-scale, traveling extratropical synoptic-period disturbances in Mars' southern hemisphere during late winter through early spring is investigated using a moderately high-resolution Mars global climate model (Mars GCM). This Mars GCM imposes interactively lifted and radiatively active dust based on a threshold value of the surface stress. The model exhibits a reasonable "dust cycle" (i.e., globally averaged, the atmosphere is dustier during southern spring and summer). Compared to their northern-hemisphere counterparts, southern synoptic-period weather disturbances and accompanying frontal waves have smaller meridional and zonal scales, and are far less intense. Influences of the zonally asymmetric (i.e., east-west varying) topography on southern large-scale weather are examined. Simulations that adopt Mars’ full topography, compared to simulations that utilize synthetic topographies emulating key large-scale features of the southern middle latitudes, indicate that Mars’ transient barotropic/baroclinic eddies are highly influenced by the great impact basins of this hemisphere (e.g., Argyre and Hellas). The occurrence of a southern storm zone in late winter and early spring appears to be anchored to the western hemisphere via orographic influences from the Tharsis highlands, and the Argyre