Polynomial interior-point algorithms for horizontal linear complementarity problem
NASA Astrophysics Data System (ADS)
Wang, G. Q.; Bai, Y. Q.
2009-11-01
In this paper a class of polynomial interior-point algorithms for the horizontal linear complementarity problem, based on a new parametric kernel function with parameters p ∈ [0,1] and σ ≥ 1, is presented. The proposed parametric kernel function is neither exponentially convex nor strongly convex like the usual kernel functions, and has a finite value at the boundary of the feasible region. It is used both for determining the search directions and for measuring the distance between the given iterate and the μ-center of the algorithm. The currently best known iteration bounds for the algorithm with large- and small-update methods are derived, which reduce the gap between the practical behavior of the algorithms and their theoretical performance results. Numerical tests demonstrate the behavior of the algorithms for different values of the parameters p, σ and θ.
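To make the kernel-function machinery concrete, the following illustrative Python sketch shows how a kernel function induces a proximity measure to the μ-center. It uses the classical logarithmic-barrier kernel, not the paper's parametric kernel (whose exact form is not reproduced in this record); everything here is an assumption of the sketch, not the authors' code.

```python
import math

def classical_kernel(t):
    """Classical logarithmic-barrier kernel psi(t) = (t^2 - 1)/2 - ln t.
    The paper's parametric kernel (parameters p and sigma) differs; this
    one is used only to illustrate the mechanics."""
    return 0.5 * (t * t - 1.0) - math.log(t)

def proximity(v):
    """Proximity measure Psi(v) = sum_i psi(v_i): zero exactly at the
    mu-center, where every scaled variable v_i equals 1, and positive
    elsewhere."""
    return sum(classical_kernel(t) for t in v)

print(proximity([1.0, 1.0, 1.0]))  # 0.0
print(proximity([2.0, 0.5]) > 0)   # True
```

In kernel-function interior-point methods, this proximity both drives the search direction (through its gradient) and decides when the current iterate is close enough to the central path.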
The treatment of contact problems as a non-linear complementarity problem
Bjorkman, G.
1994-12-31
Contact and friction problems are of great importance in many engineering applications, for example in ball bearings, bolted joints, metal forming and also car crashes. In these problems the behavior on the contact surface has a great influence on the overall behavior of the structure. Often problems such as wear and initiation of cracks occur on the contact surface. Contact problems are often described using complementarity conditions, w ≥ 0, p ≥ 0, w^T p = 0, which for example represent the following behavior: (i) two bodies cannot penetrate each other, i.e. the gap must be greater than or equal to zero; (ii) the contact pressure is positive and different from zero only if the two bodies are in contact with each other. Here it is shown that by using the theory of non-linear complementarity problems the unilateral behavior of the problem can be treated in a straightforward way. It is shown how solution methods for the discretized frictionless contact problem can be formulated. By formulating the problem either as a generalized equation or as a B-differentiable function, it is pointed out how Newton's method may be extended to contact problems. Also an algorithm for tracing the equilibrium path of frictionless contact problems is described. It is shown that, in addition to the "classical" bifurcation and limit points, there can be points where the equilibrium path has reached an end point or points where bifurcation is possible even if the stiffness matrix is non-singular.
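The complementarity conditions w ≥ 0, p ≥ 0, w^T p = 0 quoted above are straightforward to verify numerically; a small illustrative Python sketch (not from the paper) checks them componentwise for a gap vector w and pressure vector p:

```python
def is_complementary(w, p, tol=1e-9):
    """Check the contact complementarity conditions componentwise:
    gap w >= 0, pressure p >= 0, and w_i * p_i = 0 for every i
    (a body is either separated with zero pressure, or in contact)."""
    return all(wi >= -tol and pi >= -tol and abs(wi * pi) <= tol
               for wi, pi in zip(w, p))

# An open gap with zero pressure and a closed gap with positive pressure
# both satisfy the conditions; a positive gap with positive pressure does not.
print(is_complementary([0.5, 0.0], [0.0, 2.0]))  # True
print(is_complementary([0.5], [1.0]))            # False
```

The tolerance reflects that in a discretized contact solver these conditions hold only up to the accuracy of the iteration.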
New Existence Conditions for Order Complementarity Problems
NASA Astrophysics Data System (ADS)
Németh, S. Z.
2009-09-01
Complementarity problems are mathematical models of problems in economics, engineering and physics. A special class of complementarity problems are the order complementarity problems [2]. Order complementarity problems can be applied in lubrication theory [6] and economics [1]. The notion of exceptional family of elements for general order complementarity problems in Banach spaces will be introduced. It will be shown that for general order complementarity problems defined by completely continuous fields the problem has either a solution or an exceptional family of elements (for other notions of exceptional family of elements see [1, 2, 3, 4] and the related references therein). This solves a conjecture of [2] about the existence of exceptional family of elements for order complementarity problems. The proof can be done by using the Leray-Schauder alternative [5]. An application to integral operators will be given.
A path-following interior-point algorithm for linear and quadratic problems
Wright, S.J.
1993-12-01
We describe an algorithm for the monotone linear complementarity problem that converges from many positive, not necessarily feasible, starting points and exhibits polynomial complexity if some additional assumptions are made on the starting point. If the problem has a strictly complementary solution, the method converges subquadratically. We show that the algorithm and its convergence analysis extend readily to the mixed monotone linear complementarity problem and, hence, to all the usual formulations of the linear programming and convex quadratic programming problems.
The fully actuated traffic control problem solved by global optimization and complementarity
NASA Astrophysics Data System (ADS)
Ribeiro, Isabel M.; de Lurdes de Oliveira Simões, Maria
2016-02-01
Global optimization and complementarity are used to determine the signal timing for fully actuated traffic control, regarding effective green and red times on each cycle. The average values of these parameters can be used to estimate the control delay of vehicles. In this article, a two-phase queuing system for a signalized intersection is outlined, based on the principle of minimization of the total waiting time for the vehicles. The underlying model results in a linear program with linear complementarity constraints, solved by a sequential complementarity algorithm. Departure rates of vehicles during green and yellow periods were treated as deterministic, while arrival rates of vehicles were assumed to follow a Poisson distribution. Several traffic scenarios were created and solved. The numerical results reveal that it is possible to use global optimization and complementarity over a reasonable number of cycles and determine with efficiency effective green and red times for a signalized intersection.
NASA Astrophysics Data System (ADS)
Skerget, P.; Brebbia, C. A.
In many practical applications of boundary elements, the potential problems may be nonlinear. The use of Kirchhoff's transform provides an approach to convert a nonlinear material problem into a linear one. A description of several different shape functions to define the conductivity is presented. Attention is given to the type of integral equations which are obtained if Kirchhoff's transform is applied for nonlinear material in the presence of mixed boundary conditions. The integral formulation for nonlinear radiation boundary conditions with and without potential-dependent conductivity is also considered. For steady heat conduction problems with constant conductivity a boundary integral equation relating boundary values for temperatures (or potentials) and their normal derivatives over the boundary can be obtained. Applications which concern the solution of steady state conduction problems are investigated. The problems are related to a hollow cylinder, a nuclear reactor pressure vessel, and an industrial furnace.
Linearization problem in pseudolite surveys
NASA Astrophysics Data System (ADS)
Cellmer, Slawomir; Rapinski, Jacek
2010-06-01
GPS augmented with pseudolites (PL) can be used in various engineering surveys. A pseudolite-only navigation system can also be designed and used in any place, even if the GPS signal is not available (Kee et al. Development of indoor navigation system using asynchronous pseudolites, 1038-1045, 2000). Especially in engineering surveys, where a harsh survey environment is common, pseudolites have many applications; they may be used on construction sites, in open pit mines and in city canyons. GPS and PL baseline processing is similar, although there are a few differences that must be taken into account. One of the major issues is the linearization problem. The source of the problem is the neglect of the second-order terms of the Taylor series expansion in GPS baseline processing software. This problem occurs when the pseudolite is relatively close to the receiver, which is the case in PL surveys. In this paper the authors present an algorithm for GPS + PL data processing that includes the second-order terms of the Taylor series expansion, which are neglected in the classical GPS-only approach. The mathematical model of the adjustment problem, a detailed proposal for its application in baseline processing algorithms, and numerical tests are presented.
The linear separability problem: some testing methods.
Elizondo, D
2006-03-01
The notion of linear separability is used widely in machine learning research. Learning algorithms that use this concept include neural networks (the single-layer perceptron and the recursive deterministic perceptron) and kernel machines (support vector machines). This paper presents an overview of several of the methods for testing linear separability between two classes. The methods are divided into four groups: those based on linear programming, those based on computational geometry, one based on neural networks, and one based on quadratic programming. The Fisher linear discriminant method is also presented. A section on the quantification of the complexity of classification problems is included. PMID:16566462
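The neural-network group of tests mentioned above can be sketched with the single-layer perceptron: for linearly separable classes the perceptron update rule provably terminates with a separating hyperplane, while on non-separable data it never reaches an error-free epoch. A minimal pure-Python illustration (an assumption of this sketch, not the paper's code):

```python
def perceptron_separable(X, y, max_epochs=100):
    """Try to find a separating hyperplane w.x + b with the perceptron rule.
    Returns (w, b) on success; returns None if no error-free epoch occurs
    within max_epochs (the data may not be linearly separable)."""
    n = len(X[0])
    w, b = [0.0] * n, 0.0
    for _ in range(max_epochs):
        errors = 0
        for x, t in zip(X, y):          # labels t are -1 or +1
            if t * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + t * xi for wi, xi in zip(w, x)]  # perceptron update
                b += t
                errors += 1
        if errors == 0:
            return w, b
    return None

pts = [(0, 0), (0, 1), (1, 0), (1, 1)]
# AND-like labels are separable; XOR labels, famously, are not.
print(perceptron_separable(pts, [-1, -1, -1, 1]) is not None)  # True
print(perceptron_separable(pts, [-1, 1, 1, -1]))               # None
```

Note that the perceptron test is only semi-decisive in this form: termination proves separability, but hitting the epoch limit does not prove non-separability, which is why the paper also surveys exact linear- and quadratic-programming tests.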
The Vertical Linear Fractional Initialization Problem
NASA Technical Reports Server (NTRS)
Lorenzo, Carl F.; Hartley, Tom T.
1999-01-01
This paper presents a solution to the initialization problem for a system of linear fractional-order differential equations. The scalar problem is considered first, and solutions are obtained both generally and for a specific initialization. Next the vector fractional order differential equation is considered. In this case, the solution is obtained in the form of matrix F-functions. Some control implications of the vector case are discussed. The suggested method of problem solution is shown via an example.
Random generation of structured linear optimization problems
Arthur, J.; Frendewey, J. Jr.
1994-12-31
We describe the on-going development of a random generator for linear optimization problems (LPs) founded on the concept of block structure. The general LP: minimize z = cx subject to Ax = b, x ≥ 0, can take a variety of special forms determined (primarily) by predefined structures on the matrix A of constraint coefficients. The authors have developed several random problem generators which provide instances of LPs having such structure; in particular (i) general (non-structured) problems, (ii) generalized upper bound (GUB) constraints, (iii) minimum cost network flow problems, (iv) transportation and assignment problems, (v) shortest path problems, (vi) generalized network flow problems, and (vii) multicommodity network flow problems. This paper discusses the general philosophy behind the construction of these generators. In addition, the task of combining the generators into a single generator -- in which the matrix A can contain various blocks, each of a prescribed structure from those mentioned above -- is described.
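As an illustration of generating one of the structured classes listed above, the following Python sketch builds a balanced transportation instance. The construction here (random unit costs, a random partition of a common total into supplies and demands) is an assumption of this sketch, not the authors' generator:

```python
import random

def random_transportation_problem(m, n, total=100, seed=0):
    """Generate a balanced m x n transportation instance: random unit
    costs, plus supplies and demands that share the same grand total,
    so the instance is balanced by construction."""
    rng = random.Random(seed)
    cost = [[rng.randint(1, 9) for _ in range(n)] for _ in range(m)]

    def random_partition(total, parts):
        # split `total` at parts-1 random cut points (parts may be zero)
        cuts = sorted(rng.randint(0, total) for _ in range(parts - 1))
        bounds = [0] + cuts + [total]
        return [bounds[i + 1] - bounds[i] for i in range(parts)]

    supply = random_partition(total, m)
    demand = random_partition(total, n)
    return cost, supply, demand

cost, supply, demand = random_transportation_problem(3, 4)
print(sum(supply) == sum(demand))  # True
```

Because supply and demand are partitions of the same total, every generated instance is feasible, which is the essential property a structured-LP generator must guarantee.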
Piecewise linear approximation for hereditary control problems
NASA Technical Reports Server (NTRS)
Propst, Georg
1990-01-01
This paper presents finite-dimensional approximations for linear retarded functional differential equations by use of discontinuous piecewise linear functions. The approximation scheme is applied to optimal control problems, when a quadratic cost integral must be minimized subject to the controlled retarded system. It is shown that the approximate optimal feedback operators converge to the true ones both in the case where the cost integral ranges over a finite time interval, as well as in the case where it ranges over an infinite time interval. The arguments in the last case rely on the fact that the piecewise linear approximations to stable systems are stable in a uniform sense.
Linear stochastic optimal control and estimation problem
NASA Technical Reports Server (NTRS)
Geyser, L. C.; Lehtinen, F. K. B.
1980-01-01
Problem involves design of controls for linear time-invariant system disturbed by white noise. Solution is Kalman filter coupled through set of optimal regulator gains to produce desired control signal. Key to solution is solving matrix Riccati differential equation. LSOCE effectively solves problem for wide range of practical applications. Program is written in FORTRAN IV for batch execution and has been implemented on IBM 360.
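The key step named above, solving the matrix Riccati equation, reduces in the scalar case to a quadratic that can be solved in closed form. A hedged Python illustration of the steady-state case (this is not the LSOCE program, which is FORTRAN IV, and it treats only a scalar plant):

```python
import math

def scalar_lqr_gain(a, b, q, r):
    """Steady-state LQR gain for the scalar plant x' = a*x + b*u with
    cost integral of q*x^2 + r*u^2: solve the algebraic Riccati equation
    2*a*p - (b*p)**2 / r + q = 0 for its positive root, then K = b*p/r."""
    A, B, C = b * b / r, -2.0 * a, -q          # quadratic A*p^2 + B*p + C = 0
    p = (-B + math.sqrt(B * B - 4.0 * A * C)) / (2.0 * A)
    return b * p / r                            # optimal feedback u = -K*x

K = scalar_lqr_gain(a=0.0, b=1.0, q=1.0, r=1.0)
print(K)  # 1.0
```

For the full matrix problem described in the abstract, the same positive-definite root is found by integrating the matrix Riccati differential equation or by an algebraic Riccati solver rather than the quadratic formula.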
Numerical linear algebra for reconstruction inverse problems
NASA Astrophysics Data System (ADS)
Nachaoui, Abdeljalil
2004-01-01
Our goal in this paper is to discuss various issues we have encountered in trying to find and implement efficient solvers for a boundary integral equation (BIE) formulation of an iterative method for solving a reconstruction problem. We survey some methods from numerical linear algebra, which are relevant for the solution of this class of inverse problems. We motivate the use of our constructing algorithm, discuss its implementation and mention the use of preconditioned Krylov methods.
The linear regulator problem for parabolic systems
NASA Technical Reports Server (NTRS)
Banks, H. T.; Kunisch, K.
1983-01-01
An approximation framework is presented for computation (in finite-dimensional spaces) of Riccati operators that can be guaranteed to converge to the Riccati operator in feedback controls for abstract evolution systems in a Hilbert space. It is shown how these results may be used in the linear optimal regulator problem for a large class of parabolic systems.
Higher order sensitivity of solutions to convex programming problems without strict complementarity
NASA Technical Reports Server (NTRS)
Malanowski, Kazimierz
1988-01-01
Consideration is given to a family of convex programming problems which depend on a vector parameter. It is shown that the solutions of the problems and the associated Lagrange multipliers are arbitrarily many times directionally differentiable functions of the parameter, provided that the data of the problems are sufficiently regular. The characterizations of the respective derivatives are given.
Numerical stability in problems of linear algebra.
NASA Technical Reports Server (NTRS)
Babuska, I.
1972-01-01
Mathematical problems are introduced as mappings from the space of input data to that of the desired output information. Then a numerical process is defined as a prescribed recurrence of elementary operations creating the mapping of the underlying mathematical problem. The ratio of the error committed by executing the operations of the numerical process (the roundoff errors) to the error introduced by perturbations of the input data (initial error) gives rise to the concept of lambda-stability. As examples, several processes are analyzed from this point of view, including, especially, old and new processes for solving systems of linear algebraic equations with tridiagonal matrices. In particular, it is shown how such a priori information can be utilized as, for instance, a knowledge of the row sums of the matrix. Information of this type is frequently available where the system arises in connection with the numerical solution of differential equations.
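The tridiagonal linear systems analyzed above are classically solved by the Thomas algorithm, a specialization of Gaussian elimination whose roundoff behavior is exactly the kind of question lambda-stability addresses. A sketch (a standard algorithm, not one of the paper's specific processes):

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system by the Thomas algorithm: a is the
    sub-diagonal (a[0] unused), b the diagonal, c the super-diagonal
    (c[-1] unused), d the right-hand side."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Discretized -u'' on three interior points: exact solution is [1, 1, 1]
print(thomas_solve([0, -1, -1], [2, 2, 2], [-1, -1, 0], [1, 0, 1]))
```

Systems like this arise from the finite-difference discretization of differential equations mentioned in the abstract; a priori information such as row sums can then be used to bound the roundoff amplification of the elimination.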
NASA Astrophysics Data System (ADS)
Finkelstein, David; Finkelstein, Shlomit Ritz
1983-08-01
Interactivity generates paradox in that the interactive control by one system C of predicates about another system-under-study S may falsify these predicates. We formulate an “interactive logic” to resolve this paradox of interactivity. Our construction generalizes one, the Galois connection, used by Von Neumann for the similar quantum paradox. We apply the construction to a transition system, a concept that includes general systems, automata, and quantum systems. In some (classical) automata S, the interactive predicates about S show quantumlike complementarity arising from interactivity: The interactive paradox generates the quantum paradox. Some classical S's have noncommutative algebras of interactively observable coordinates similar to the Heisenberg algebra of a quantum system. Such S's are “hidden variable” models of quantum theory not covered by the hidden variable studies of Von Neumann, Bohm, Bell, or Kochen and Specker. It is conceivable that some quantum effects in Nature arise from interactivity.
Menu-Driven Solver Of Linear-Programming Problems
NASA Technical Reports Server (NTRS)
Viterna, L. A.; Ferencz, D.
1992-01-01
Program assists inexperienced user in formulating linear-programming problems. A Linear Program Solver (ALPS) computer program is full-featured LP analysis program. Solves plain linear-programming problems as well as more-complicated mixed-integer and pure-integer programs. Also contains efficient technique for solution of purely binary linear-programming problems. Written entirely in IBM's APL2/PC software, Version 1.01. Packed program contains licensed material, property of IBM (copyright 1988, all rights reserved).
A multistage linear array assignment problem
NASA Technical Reports Server (NTRS)
Nicol, David M.; Shier, D. R.; Kincaid, R. K.; Richards, D. S.
1988-01-01
The implementation of certain algorithms on parallel processing computing architectures can involve partitioning contiguous elements into a fixed number of groups, each of which is to be handled by a single processor. It is desired to find an assignment of elements to processors that minimizes the sum of the maximum workloads experienced at each stage. This problem can be viewed as a multi-objective network optimization problem. Polynomially-bounded algorithms are developed for the case of two stages, whereas the associated decision problem (for an arbitrary number of stages) is shown to be NP-complete. Heuristic procedures are therefore proposed and analyzed for the general problem. Computational experience with one of the exact algorithms, incorporating certain pruning rules, is presented. Empirical results also demonstrate that one of the heuristic procedures is especially effective in practice.
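The single-stage special case of this partitioning problem, splitting contiguous elements into k groups so that the maximum group workload is as small as possible, has a well-known polynomial solution by binary search over the answer. A Python sketch of that special case (an illustration, not the paper's multistage algorithms):

```python
def min_max_partition(loads, k):
    """Smallest achievable maximum group sum when splitting `loads` into
    at most k contiguous groups: binary search on the candidate maximum,
    with a greedy feasibility check (integer loads assumed)."""
    def groups_needed(cap):
        count, current = 1, 0
        for w in loads:
            if current + w > cap:      # start a new group (processor)
                count, current = count + 1, 0
            current += w
        return count

    lo, hi = max(loads), sum(loads)
    while lo < hi:
        mid = (lo + hi) // 2
        if groups_needed(mid) <= k:
            hi = mid
        else:
            lo = mid + 1
    return lo

print(min_max_partition([7, 2, 5, 10, 8], 2))  # 18  ([7,2,5] | [10,8])
```

In the multistage problem of the abstract, one such max-workload term appears per stage and the objective sums them, which is what makes the general decision problem NP-complete.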
An amoeboid algorithm for solving linear transportation problem
NASA Astrophysics Data System (ADS)
Gao, Cai; Yan, Chao; Zhang, Zili; Hu, Yong; Mahadevan, Sankaran; Deng, Yong
2014-03-01
Transportation Problem (TP) is one of the basic operational research problems, which plays an important role in many practical applications. In this paper, a bio-inspired mathematical model is proposed to handle the Linear Transportation Problem (LTP) in directed networks by modifying the original amoeba model Physarum Solver. Several examples are used to prove that the provided model can effectively solve Balanced Transportation Problem (BTP), Unbalanced Transportation Problem (UTP), especially the Generalized Transportation Problem (GTP), in a nondiscrete way.
Singular linear-quadratic control problem for systems with linear delay
Sesekin, A. N.
2013-12-18
A singular linear-quadratic optimization problem on the trajectories of non-autonomous linear differential equations with linear delay is considered. The peculiarity of this problem is that it has no solution in the class of integrable controls; to ensure the existence of solutions, the class of controls must be extended to include controls with impulse components. Dynamical systems with linear delay are used to describe, for example, the motion of a pantograph relative to the current collector in electric traction, as well as processes in biology. It should be noted that singular quality criteria occur quite commonly in practical problems, so the study of such problems is surely important. For the problem under discussion, an optimal programmed control containing impulse components at the initial and final moments of time is constructed under certain assumptions on the functional and the right-hand side of the control system.
Multisplitting for linear, least squares and nonlinear problems
Renaut, R.
1996-12-31
In earlier work, presented at the 1994 Iterative Methods meeting, a multisplitting (MS) method of block relaxation type was utilized for the solution of the least squares problem, and nonlinear unconstrained problems. This talk will focus on recent developments of the general approach and represents joint work both with Andreas Frommer, University of Wuppertal, for the linear problems and with Hans Mittelmann, Arizona State University, for the nonlinear problems.
Experiences with linear solvers for oil reservoir simulation problems
Joubert, W.; Janardhan, R.; Biswas, D.; Carey, G.
1996-12-31
This talk will focus on practical experiences with iterative linear solver algorithms used in conjunction with Amoco Production Company's Falcon oil reservoir simulation code. The goal of this study is to determine the best linear solver algorithms for these types of problems. The results of numerical experiments will be presented.
Inverse Modelling Problems in Linear Algebra Undergraduate Courses
ERIC Educational Resources Information Center
Martinez-Luaces, Victor E.
2013-01-01
This paper will offer an analysis from a theoretical point of view of mathematical modelling, applications and inverse problems of both causation and specification types. Inverse modelling problems give the opportunity to establish connections between theory and practice and to show this fact, a simple linear algebra example in two different…
Complementarity, Sets and Numbers
ERIC Educational Resources Information Center
Otte, M.
2003-01-01
Niels Bohr's term "complementarity" has been used by several authors to capture the essential aspects of the cognitive and epistemological development of scientific and mathematical concepts. In this paper we will conceive of complementarity in terms of the dual notions of extension and intension of mathematical terms. A complementarist approach…
Multiobjective fuzzy stochastic linear programming problems with inexact probability distribution
Hamadameen, Abdulqader Othman; Zainuddin, Zaitul Marlizawati
2014-06-19
This study deals with multiobjective fuzzy stochastic linear programming problems with an uncertain probability distribution, defined as fuzzy assertions by ambiguous experts. The problem formulation is presented, together with two solution strategies: a fuzzy transformation via a ranking function, and a stochastic transformation in which the α-cut technique and linguistic hedges are used on the uncertain probability distribution. A development of Sen's method is employed to find a compromise solution, supported by an illustrative numerical example.
Finding Optimal Gains In Linear-Quadratic Control Problems
NASA Technical Reports Server (NTRS)
Milman, Mark H.; Scheid, Robert E., Jr.
1990-01-01
Analytical method based on Volterra factorization leads to new approximations for optimal control gains in finite-time linear-quadratic control problem of system having infinite number of dimensions. Circumvents need to analyze and solve Riccati equations and provides more transparent connection between dynamics of system and optimal gain.
NASA Astrophysics Data System (ADS)
Howard, Don
2013-04-01
Complementarity is Niels Bohr's most original contribution to the interpretation of quantum mechanics, but there is widespread confusion about complementarity in the popular literature and even in some of the serious scholarly literature on Bohr. This talk provides a historically grounded guide to Bohr's own understanding of the doctrine, emphasizing the manner in which complementarity is deeply rooted in the physics of the quantum world, in particular the physics of entanglement, and is, therefore, not just an idiosyncratic philosophical addition. Among the more specific points to be made are that complementarity is not to be confused with wave-particle duality, that it is importantly different from Heisenberg's idea of observer-induced limitations on measurability, and that it is in no way an expression of a positivist philosophical project.
Towards an ideal preconditioner for linearized Navier-Stokes problems
Murphy, M.F.
1996-12-31
Discretizing certain linearizations of the steady-state Navier-Stokes equations gives rise to nonsymmetric linear systems with indefinite symmetric part. We show that for such systems there exists a block diagonal preconditioner which gives convergence in three GMRES steps, independent of the mesh size and viscosity parameter (Reynolds number). While this "ideal" preconditioner is too expensive to be used in practice, it provides a useful insight into the problem. We then consider various approximations to the ideal preconditioner, and describe the eigenvalues of the preconditioned systems. Finally, we compare these preconditioners numerically, and present our conclusions.
Successive linear optimization approach to the dynamic traffic assignment problem
Ho, J.K.
1980-11-01
A dynamic model for the optimal control of traffic flow over a network is considered. The model, which treats congestion explicitly in the flow equations, gives rise to nonlinear, nonconvex mathematical programming problems. It has been shown for a piecewise linear version of this model that a global optimum is contained in the set of optimal solutions of a certain linear program. A sufficient condition for optimality is presented which implies that a global optimum can be obtained by successively optimizing at most N + 1 objective functions for the linear program, where N is the number of time periods in the planning horizon. Computational results are reported to indicate the efficiency of this approach.
Regularized total least squares approach for nonconvolutional linear inverse problems.
Zhu, W; Wang, Y; Galatsanos, N P; Zhang, J
1999-01-01
In this correspondence, a solution is developed for the regularized total least squares (RTLS) estimate in linear inverse problems where the linear operator is nonconvolutional. Our approach is based on a Rayleigh quotient (RQ) formulation of the TLS problem, and we accomplish regularization by modifying the RQ function to enforce a smooth solution. A conjugate gradient algorithm is used to minimize the modified RQ function. As an example, the proposed approach has been applied to the perturbation equation encountered in optical tomography. Simulation results show that this method provides more stable and accurate solutions than the regularized least squares and a previously reported total least squares approach, also based on the RQ formulation. PMID:18267442
An analytically solvable eigenvalue problem for the linear elasticity equations.
Day, David Minot; Romero, Louis Anthony
2004-07-01
Analytic solutions are useful for code verification. Structural vibration codes approximate solutions to the eigenvalue problem for the linear elasticity equations (Navier's equations). Unfortunately the verification method of 'manufactured solutions' does not apply to vibration problems. Verification books (for example [2]) tabulate a few of the lowest modes, but are not useful for computations of large numbers of modes. A closed form solution is presented here for all the eigenvalues and eigenfunctions for a cuboid solid with isotropic material properties. The boundary conditions correspond physically to a greased wall.
A nonlinear complementarity approach for the national energy modeling system
Gabriel, S.A.; Kydes, A.S.
1995-03-08
The National Energy Modeling System (NEMS) is a large-scale mathematical model that computes equilibrium fuel prices and quantities in the U.S. energy sector. At present, to generate these equilibrium values, NEMS sequentially solves a collection of linear programs and nonlinear equations. The NEMS solution procedure then incorporates the solutions of these linear programs and nonlinear equations in a nonlinear Gauss-Seidel approach. The authors describe how the current version of NEMS can be formulated as a particular nonlinear complementarity problem (NCP), thereby possibly avoiding current convergence problems. In addition, they show that the NCP format is equally valid for a more general form of NEMS. They also describe several promising approaches for solving the NCP form of NEMS based on recent Newton type methods for general NCPs. These approaches share the feature of needing to solve their direction-finding subproblems only approximately. Hence, they can effectively exploit the sparsity inherent in the NEMS NCP.
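The Newton-type NCP methods mentioned above typically work on an equation reformulation of the complementarity conditions; a standard choice is the Fischer-Burmeister function, which vanishes exactly where the NCP holds. A one-dimensional Python sketch of that idea (a generic NCP technique used for illustration, not the NEMS solver):

```python
import math

def fischer_burmeister(a, b):
    """phi(a, b) = 0  iff  a >= 0, b >= 0 and a*b = 0."""
    return math.hypot(a, b) - a - b

def solve_ncp_1d(F, x0=1.0, tol=1e-10, max_iter=100):
    """Solve the one-dimensional NCP  x >= 0, F(x) >= 0, x*F(x) = 0
    by Newton's method applied to the Fischer-Burmeister reformulation,
    using a finite-difference slope."""
    x = x0
    for _ in range(max_iter):
        g = fischer_burmeister(x, F(x))
        if abs(g) < tol:
            return x
        h = 1e-7
        slope = (fischer_burmeister(x + h, F(x + h)) - g) / h
        x -= g / slope                 # Newton step on g(x) = 0
    return x

# F(x) = x - 2: the complementary solution is x = 2 (there F(x) = 0)
print(round(solve_ncp_1d(lambda x: x - 2.0), 6))  # 2.0
```

In the large-scale NEMS setting the same idea is applied to a vector of such equations, and, as the abstract notes, the Newton direction-finding subproblems need only be solved approximately while exploiting sparsity.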
Rees algebras, Monomial Subrings and Linear Optimization Problems
NASA Astrophysics Data System (ADS)
Dupont, Luis A.
2010-06-01
In this thesis we are interested in studying algebraic properties of monomial algebras, that can be linked to combinatorial structures, such as graphs and clutters, and to optimization problems. A goal here is to establish bridges between commutative algebra, combinatorics and optimization. We study the normality and the Gorenstein property-as well as the canonical module and the a-invariant-of Rees algebras and subrings arising from linear optimization problems. In particular, we study algebraic properties of edge ideals and algebras associated to uniform clutters with the max-flow min-cut property or the packing property. We also study algebraic properties of symbolic Rees algebras of edge ideals of graphs, edge ideals of clique clutters of comparability graphs, and Stanley-Reisner rings.
Efficient algorithms for linear dynamic inverse problems with known motion
NASA Astrophysics Data System (ADS)
Hahn, B. N.
2014-03-01
An inverse problem is called dynamic if the object changes during the data acquisition process. This occurs e.g. in medical applications when fast moving organs like the lungs or the heart are imaged. Most regularization methods are based on the assumption that the object is static during the measuring procedure. Hence, their application in the dynamic case often leads to serious motion artefacts in the reconstruction. Therefore, an algorithm has to take into account the temporal changes of the investigated object. In this paper, a reconstruction method that compensates for the motion of the object is derived for dynamic linear inverse problems. The algorithm is validated at numerical examples from computerized tomography.
Some problems in applications of the linear variational method
NASA Astrophysics Data System (ADS)
Pupyshev, Vladimir I.; Montgomery, H. E.
2015-09-01
The linear variational method is a standard computational method in quantum mechanics and quantum chemistry. As taught in most classes, the general guidance is to include as many basis functions as practical in the variational wave function. However, if it is desired to study the patterns of energy change accompanying the change of system parameters such as the shape and strength of the potential energy, the problem becomes more complicated. We use one-dimensional systems with a particle in a rectangular or in a harmonic potential confined in an infinite rectangular box to illustrate situations where a variational calculation can give incorrect results. These situations result when the energy of the lowest eigenvalue is strongly dependent on the parameters that describe the shape and strength of the potential. The numerical examples described in this work are provided as cautionary notes for practitioners of numerical variational calculations.
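The variational principle behind these calculations is easy to demonstrate in one dimension: the Rayleigh quotient of any trial function bounds the ground-state energy from above, and the quality of the bound depends entirely on the trial basis. A hedged Python sketch (a textbook example with one trial function, not the authors' computations):

```python
import math

def rayleigh_quotient(psi, d2psi, a=0.0, b=1.0, n=2000):
    """Variational energy <psi|H|psi>/<psi|psi> for H = -(1/2) d^2/dx^2
    (hbar = m = 1) on [a, b], evaluated with midpoint quadrature."""
    h = (b - a) / n
    num = den = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h
        num += psi(x) * (-0.5) * d2psi(x) * h   # kinetic-energy integrand
        den += psi(x) ** 2 * h                   # normalization integrand
    return num / den

# One-term trial function x(1 - x) for the unit-width infinite well:
# E_var = 5, an upper bound on the exact ground state pi^2/2 ~ 4.935.
E = rayleigh_quotient(lambda x: x * (1.0 - x), lambda x: -2.0)
print(round(E, 4))  # 5.0
```

The abstract's warning applies here: a single fixed trial function tracks the exact energy well for this potential, but as the potential's shape or strength is varied the error of such a truncated basis can change non-uniformly, producing qualitatively wrong parameter dependence.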
Using parallel banded linear system solvers in generalized eigenvalue problems
NASA Technical Reports Server (NTRS)
Zhang, Hong; Moss, William F.
1993-01-01
Subspace iteration is a reliable and cost effective method for solving positive definite banded symmetric generalized eigenproblems, especially in the case of large scale problems. This paper discusses an algorithm that makes use of two parallel banded solvers in subspace iteration. A shift is introduced to decompose the banded linear systems into relatively independent subsystems and to accelerate the iterations. With this shift, an eigenproblem is mapped efficiently into the memories of a multiprocessor and a high speed-up is obtained for parallel implementations. An optimal shift is a shift that balances total computation and communication costs. Under certain conditions, we show how to estimate an optimal shift analytically using the decay rate for the inverse of a banded matrix, and how to improve this estimate. Computational results on iPSC/2 and iPSC/860 multiprocessors are presented.
First integrals for the Kepler problem with linear drag
NASA Astrophysics Data System (ADS)
Margheri, Alessandro; Ortega, Rafael; Rebelo, Carlota
2016-07-01
In this work we consider the Kepler problem with linear drag, and prove the existence of a continuous vector-valued first integral, obtained taking the limit as t→ +∞ of the Runge-Lenz vector. The norm of this first integral can be interpreted as an asymptotic eccentricity e_{∞} with 0≤ e_{∞} ≤ 1 . The orbits satisfying e_{∞} <1 approach the singularity by an elliptic spiral and the corresponding solutions x(t)=r(t)e^{iθ (t)} have a norm r(t) that goes to zero like a negative exponential and an argument θ (t) that goes to infinity like a positive exponential. In particular, the difference between consecutive times of passage through the pericenter, say T_{n+1} -T_n , goes to zero as 1/n.
NASA Astrophysics Data System (ADS)
Bousso, Raphael
2013-06-01
The near-horizon field B of an old black hole is maximally entangled with the early Hawking radiation R, by unitarity of the S-matrix. But B must be maximally entangled with the black hole interior A, by the equivalence principle. Causal patch complementarity fails to reconcile these conflicting requirements. The system B can be probed by a freely falling observer while there is still time to turn around and remain outside the black hole. Therefore, the entangled state of the BR system is dictated by unitarity even in the infalling patch. If, by monogamy of entanglement, B is not entangled with A, the horizon is replaced by a singularity or “firewall.” To illustrate the radical nature of the ideas that are needed, I briefly discuss two approaches for avoiding a firewall: the identification of A with a subsystem of R; and a combination of patch complementarity with the Horowitz-Maldacena final-state proposal.
Application of fractional derivative models in linear viscoelastic problems
NASA Astrophysics Data System (ADS)
Sasso, M.; Palmieri, G.; Amodio, D.
2011-11-01
Appropriate knowledge of viscoelastic properties of polymers and elastomers is of fundamental importance for a correct modelization and analysis of structures where such materials are present, especially when dealing with dynamic and vibration problems. In this paper experimental results of a series of compression and tension tests on specimens of styrene-butadiene rubber and polypropylene plastic are presented; tests consist of creep and relaxation tests, as well as cyclic loading at different frequencies. Experimental data are then used to calibrate some linear viscoelastic models; besides the classical approach based on a combination in series or parallel of standard mechanical elements as springs and dashpots, particular emphasis is given to the application of models whose constitutive equations are based on differential equations of fractional order (Fractional Derivative Model). The two approaches are compared analyzing their capability to reproduce all the experimental data for given materials; also, the main computational issues related with these models are addressed, and the advantage of using a limited number of parameters is demonstrated.
Complementarity and quantum walks
Kendon, Viv; Sanders, Barry C.
2005-02-01
We show that quantum walks interpolate between a coherent 'wave walk' and a random walk depending on how strongly the walker's coin state is measured; i.e., the quantum walk exhibits the quintessentially quantum property of complementarity, which is manifested as a tradeoff between knowledge of which path the walker takes vs the sharpness of the interference pattern. A physical implementation of a quantum walk (the quantum quincunx) should thus have an identifiable walker and the capacity to demonstrate the interpolation between wave walk and random walk depending on the strength of measurement.
The Intelligence of Dual Simplex Method to Solve Linear Fractional Fuzzy Transportation Problem
Narayanamoorthy, S.; Kalyani, S.
2015-01-01
An approach is presented to solve a fuzzy transportation problem with a linear fractional fuzzy objective function. In the proposed approach the fractional fuzzy transportation problem is decomposed into two linear fuzzy transportation problems. These two linear fuzzy transportation problems are solved by the dual simplex method, and from their solutions the optimal solution of the fractional fuzzy transportation problem is obtained. The proposed method is explained in detail with an example. PMID:25810713
LP-DIT interchange tool for linear programming problems
Makowski, M.
1994-12-31
LP-DIT is a small library that provides easy handling of LP problem data between a problem generator, a solver, and other modules (problem modification, generation of multi-criteria problems, report writers, etc.). So far LP-DIT has been implemented with 4 LP solvers (including one MIP solver) and is being used as a module for a model-based Decision Support System. LP-DIT will be released as public domain software in the coming weeks.
Multigrid approaches to non-linear diffusion problems on unstructured meshes
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.; Bushnell, Dennis M. (Technical Monitor)
2001-01-01
The efficiency of three multigrid methods for solving highly non-linear diffusion problems on two-dimensional unstructured meshes is examined. The three multigrid methods differ mainly in the manner in which the nonlinearities of the governing equations are handled. These comprise a non-linear full approximation storage (FAS) multigrid method which is used to solve the non-linear equations directly, a linear multigrid method which is used to solve the linear system arising from a Newton linearization of the non-linear system, and a hybrid scheme which is based on a non-linear FAS multigrid scheme, but employs a linear solver on each level as a smoother. Results indicate that all methods are equally effective at converging the non-linear residual in a given number of grid sweeps, but that the linear solver is more efficient in cpu time due to the lower cost of linear versus non-linear grid sweeps.
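The flavor of the FAS scheme can be conveyed with a minimal two-grid cycle for the 1-D model problem -u'' + u^3 = f on a structured mesh (the paper's setting is 2-D and unstructured; the equation, mesh sizes and sweep counts here are illustrative assumptions).

```python
import numpy as np

# Two-grid FAS sketch for -u'' + u^3 = f on (0, 1), u(0) = u(1) = 0.
def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[2:] - 2*u[1:-1] + u[:-2])/h**2 - u[1:-1]**3
    return r

def ngs(u, f, h, sweeps):
    # Nonlinear Gauss-Seidel smoother: one scalar Newton step per point.
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            g = -(u[i+1] - 2*u[i] + u[i-1])/h**2 + u[i]**3 - f[i]
            u[i] -= g / (2/h**2 + 3*u[i]**2)
    return u

n = 64
h, H = 1.0/n, 2.0/n
x = np.linspace(0.0, 1.0, n + 1)
uex = np.sin(np.pi * x)                            # manufactured solution
f = np.pi**2*np.sin(np.pi*x) + np.sin(np.pi*x)**3
u = np.zeros(n + 1)

for _ in range(30):                                # FAS two-grid cycles
    u = ngs(u, f, h, 2)                            # pre-smooth
    r = residual(u, f, h)
    uc = u[::2].copy()                             # inject solution to coarse
    rc = np.zeros(n//2 + 1)
    rc[1:-1] = 0.25*r[1:-2:2] + 0.5*r[2:-1:2] + 0.25*r[3::2]
    fc = -residual(uc, np.zeros_like(uc), H) + rc  # FAS RHS: A_c(uc) + R r
    vc = ngs(uc.copy(), fc, H, 200)                # approximate coarse solve
    e = np.zeros(n + 1)                            # prolong the correction
    e[::2] = vc - uc
    e[1:-1:2] = 0.5*(e[:-2:2] + e[2::2])
    u += e
    u = ngs(u, f, h, 2)                            # post-smooth

print(np.max(np.abs(u - uex)))
```

The coarse problem is driven by A_c(uc) plus the restricted fine residual, which is the defining feature of FAS: the full solution, not just a correction, lives on the coarse grid.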
Problems with the linear q-Fokker Planck equation
NASA Astrophysics Data System (ADS)
Yano, Ryosuke
2015-05-01
In this letter, we discuss the linear q-Fokker Planck equation, whose solution follows the Tsallis distribution, from the viewpoint of kinetic theory. Using normal definitions of moments, we can expand the distribution function with infinite moments for 0 ⩽ q < 1, whereas we cannot expand the distribution function with infinite moments for 1 < q owing to the emergence of characteristic points in the moments. From Grad's 13 moment equations for the linear q-Fokker Planck equation, the dissipation rate of the heat flux via the linear q-Fokker Planck equation diverges at 0 ⩽ q < 2/3. In other words, the thermal conductivity relating the heat flux to the spatial gradient of the temperature, and the thermal conductivity relating the heat flux to the spatial gradient of the density, both jump discontinuously to zero at q = 2/3.
Fixed Point Problems for Linear Transformations on Pythagorean Triples
ERIC Educational Resources Information Center
Zhan, M.-Q.; Tong, J.-C.; Braza, P.
2006-01-01
In this article, an attempt is made to find all linear transformations that map a standard Pythagorean triple (a Pythagorean triple [x y z]^T with y even) into a standard Pythagorean triple and that have [3 4 5]^T as their fixed point. All such transformations form a monoid S* under matrix product. It is found that S*…
A linear regression solution to the spatial autocorrelation problem
NASA Astrophysics Data System (ADS)
Griffith, Daniel A.
The Moran Coefficient spatial autocorrelation index can be decomposed into orthogonal map pattern components. This decomposition relates it directly to standard linear regression, in which the corresponding eigenvectors can be used as predictors. This paper reports comparative results between these linear regressions and their auto-Gaussian counterparts for the following georeferenced data sets: Columbus (Ohio) crime, Ottawa-Hull median family income, Toronto population density, southwest Ohio unemployment, Syracuse pediatric lead poisoning, and Glasgow standard mortality rates, together with a small remotely sensed image of the High Peak district. The methodology is extended to auto-logistic and auto-Poisson situations, with selected data analyses including the percentage of urban population across Puerto Rico and the frequency of SIDS cases across North Carolina. These data analytic results suggest that this approach to georeferenced data analysis offers considerable promise.
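The eigenvector spatial-filtering idea can be sketched on a toy lattice: eigenvectors of the doubly centred spatial weights matrix (the matrix in the numerator of the Moran Coefficient) serve as extra regressors in ordinary least squares. The grid, coefficients and noise level below are invented for illustration, not the paper's data sets.

```python
import numpy as np

# Toy lattice illustration of Moran-eigenvector spatial filtering.
n = 10                                        # 10 x 10 rook-adjacency grid
N = n * n
W = np.zeros((N, N))
for i in range(n):
    for j in range(n):
        k = i * n + j
        if i + 1 < n:
            W[k, k + n] = W[k + n, k] = 1.0   # vertical neighbours
        if j + 1 < n:
            W[k, k + 1] = W[k + 1, k] = 1.0   # horizontal neighbours

# Doubly centre the weights matrix: Mc W Mc with Mc = I - 11'/N.
Mc = np.eye(N) - np.ones((N, N)) / N
vals, vecs = np.linalg.eigh(Mc @ W @ Mc)
E = vecs[:, np.argsort(vals)[::-1][:5]]       # top-5 map-pattern eigenvectors

# Synthetic spatially autocorrelated response; OLS on the eigenvector
# predictors recovers the pattern coefficients.
rng = np.random.default_rng(1)
y = E @ np.array([3.0, 2.0, 1.0, 0.0, 0.0]) + 0.1 * rng.standard_normal(N)
X = np.column_stack([np.ones(N), E])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)
```

Because the retained eigenvectors are orthonormal and orthogonal to the constant vector, the regression coefficients estimate the pattern weights directly.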
Solution of the linear regression problem using matrix correction methods in the l 1 metric
NASA Astrophysics Data System (ADS)
Gorelik, V. A.; Trembacheva (Barkalova), O. S.
2016-02-01
The linear regression problem is considered as an improper interpolation problem. The metric l 1 is used to correct (approximate) all the initial data. A probabilistic justification of this metric in the case of the exponential noise distribution is given. The original improper interpolation problem is reduced to a set of a finite number of linear programming problems. The corresponding computational algorithms are implemented in MATLAB.
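Minimizing the l1 norm of the residuals reduces to a linear program by bounding each absolute value with an auxiliary variable; a small sketch with SciPy's linprog (the straight-line data and the outlier are invented for illustration):

```python
import numpy as np
from scipy.optimize import linprog

# l1 (least absolute deviations) line fit as an LP:
#   minimize sum_i t_i  subject to  -t_i <= y_i - x_i . beta <= t_i.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 30)
y = 2.0 + 3.0*x + 0.05*rng.standard_normal(30)
y[3] += 5.0                          # one gross outlier; l1 is robust to it

X = np.column_stack([np.ones_like(x), x])
m, k = X.shape
c = np.concatenate([np.zeros(k), np.ones(m)])   # variables [beta, t]
A_ub = np.block([[ X, -np.eye(m)],              #  X beta - t <= y
                 [-X, -np.eye(m)]])             # -X beta - t <= -y
b_ub = np.concatenate([y, -y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)]*k + [(0.0, None)]*m)
beta = res.x[:k]
print(beta)
```

The fitted intercept and slope stay close to (2, 3) despite the corrupted point, which is the robustness property that motivates the l1 metric in the abstract.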
NASA Technical Reports Server (NTRS)
Banks, H. T.; Silcox, R. J.; Keeling, S. L.; Wang, C.
1989-01-01
A unified treatment of the linear quadratic tracking (LQT) problem, in which a control system's dynamics are modeled by a linear evolution equation with a nonhomogeneous component that is linearly dependent on the control function u, is presented; the treatment proceeds from the theoretical formulation to a numerical approximation framework. Attention is given to two categories of LQT problems in an infinite time interval: the finite energy and the finite average energy. The behavior of the optimal solution for finite time-interval problems as the length of the interval tends to infinity is discussed. Also presented are the formulations and properties of LQT problems in a finite time interval.
Zhao, Yingfeng; Liu, Sanyang
2016-01-01
We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation problem that is equivalent to a linear program is proposed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving a sequence of linear relaxation programming problems. Global convergence has been proved, and results on some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient. PMID:27547676
NASA Astrophysics Data System (ADS)
Bakker, Lennard F.; Ouyang, Tiancheng; Yan, Duokui; Simmons, Skyler; Roberts, Gareth E.
2010-10-01
We apply the analytic-numerical method of Roberts to determine the linear stability of time-reversible periodic simultaneous binary collision orbits in the symmetric collinear four-body problem with masses 1, m, m, 1, and also in a symmetric planar four-body problem with equal masses. In both problems, the assumed symmetries reduce the determination of linear stability to the numerical computation of a single real number. For the collinear problem, this verifies the earlier numerical results of Sweatman for linear stability with respect to collinear and symmetric perturbations.
Towards Resolving the Crab Sigma-Problem: A Linear Accelerator?
NASA Technical Reports Server (NTRS)
Contopoulos, Ioannis; Kazanas, Demosthenes; White, Nicholas E. (Technical Monitor)
2002-01-01
Using the exact solution of the axisymmetric pulsar magnetosphere derived in a previous publication and the conservation laws of the associated MHD flow, we show that the Lorentz factor of the outflowing plasma increases linearly with distance from the light cylinder. Therefore, the ratio of the Poynting to particle energy flux, generically referred to as sigma, decreases inversely proportional to distance, from a large value (typically ≳ 10^4) near the light cylinder to sigma ≈ 1 at a transition distance R_trans. Beyond this distance the inertial effects of the outflowing plasma become important and the magnetic field geometry must deviate from the almost monopolar form it attains between R_lc and R_trans. We anticipate that this is achieved by collimation of the poloidal field lines toward the rotation axis, ensuring that the magnetic field pressure in the equatorial region will fall off faster than 1/R^2 (R being the cylindrical radius). This leads both to a value sigma = sigma_s << 1 at the nebular reverse shock at distance R_s (with R_s >> R_trans) and to a component of the flow perpendicular to the equatorial component, as required by observation. The presence of the strong shock at R = R_s allows for the efficient conversion of kinetic energy into radiation. We speculate that the Crab pulsar is unique in requiring sigma_s ≈ 3 × 10^-3 because of its small translational velocity, which allowed the shock distance R_s to grow to values much greater than R_trans.
An application of a linear programing technique to nonlinear minimax problems
NASA Technical Reports Server (NTRS)
Schiess, J. R.
1973-01-01
A differential correction technique for solving nonlinear minimax problems is presented. The basis of the technique is a linear programing algorithm which solves the linear minimax problem. By linearizing the original nonlinear equations about a nominal solution, both nonlinear approximation and estimation problems using the minimax norm may be solved iteratively. Some consideration is also given to improving convergence and to the treatment of problems with more than one measured quantity. A sample problem is treated with this technique and with the least-squares differential correction method to illustrate the properties of the minimax solution. The results indicate that for the sample approximation problem, the minimax technique provides better estimates than the least-squares method if a sufficient amount of data is used. For the sample estimation problem, the minimax estimates are better if the mathematical model is incomplete.
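The linear minimax subproblem at the core of the technique is itself a linear program: introduce a single bound t on all residual magnitudes and minimize it. A sketch fitting a straight line to e^x in the Chebyshev norm (the target function and grid are illustrative, not the report's problems):

```python
import numpy as np
from scipy.optimize import linprog

# Linear minimax (Chebyshev) approximation as an LP:
#   minimize t  subject to  -t <= y_i - x_i . coef <= t.
x = np.linspace(0.0, 1.0, 21)
y = np.exp(x)                        # approximate e^x by a straight line

X = np.column_stack([np.ones_like(x), x])
m, k = X.shape
c_obj = np.concatenate([np.zeros(k), [1.0]])    # variables [coef, t]
A_ub = np.block([[ X, -np.ones((m, 1))],        #  X coef - t <= y
                 [-X, -np.ones((m, 1))]])       # -X coef - t <= -y
b_ub = np.concatenate([y, -y])
res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)]*k + [(0.0, None)])
coef, t = res.x[:k], res.x[-1]
print(coef, t)
```

The optimal slope equals e - 1 and the maximal residual equioscillates at the two endpoints and one interior grid point, the discrete analogue of the Chebyshev equioscillation property.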
Gene Golub; Kwok Ko
2009-03-30
The solutions of sparse eigenvalue problems and linear systems constitute one of the key computational kernels in the discretization of partial differential equations for the modeling of linear accelerators. The computational challenges faced by existing techniques for solving those sparse eigenvalue problems and linear systems call for continuing research to improve the algorithms so that the ever increasing problem sizes required by the physics applications can be tackled. Under the support of this award, the filter algorithm for solving large sparse eigenvalue problems was developed at Stanford to address the computational difficulties of previous methods, with the goal of enabling accelerator simulations for this class of problems on what was then the world's largest unclassified supercomputer at NERSC. Specifically, a new method, the Hermitian/skew-Hermitian splitting method, was proposed and researched as an improved method for solving linear systems with non-Hermitian positive definite and semidefinite matrices.
Global symmetry relations in linear and viscoplastic mobility problems
NASA Astrophysics Data System (ADS)
Kamrin, Ken; Goddard, Joe
2014-11-01
The mobility tensor of a textured surface is a homogenized effective boundary condition that describes the effective slip of a fluid adjacent to the surface in terms of an applied shear traction far above the surface. In the Newtonian fluid case, perturbation analysis yields a mobility tensor formula, which suggests that regardless of the surface texture (i.e. nonuniform hydrophobicity distribution and/or height fluctuations) the mobility tensor is always symmetric. This conjecture is verified using a Lorentz reciprocity argument. It motivates the question of whether such symmetries would arise for nonlinear constitutive relations and boundary conditions, where the mobility tensor is not a constant but a function of the applied stress. We show that in the case of a strongly dissipative nonlinear constitutive relation--one whose strain-rate relates to the stress solely through a scalar Edelen potential--and strongly dissipative surface boundary conditions--one whose hydrophobic character is described by a potential relating slip to traction--the mobility function of the surface also maintains tensorial symmetry. By extension, the same variational arguments can be applied in problems such as the permeability tensor for viscoplastic flow through porous media, and we find that similar symmetries arise. These findings could be used to simplify the characterization of viscoplastic drag in various anisotropic media. (Joe Goddard is a former graduate student of Acrivos).
Solution algorithms for non-linear singularly perturbed optimal control problems
NASA Technical Reports Server (NTRS)
Ardema, M. D.
1983-01-01
The applicability and usefulness of several classical and other methods for solving the two-point boundary-value problem which arises in non-linear singularly perturbed optimal control are assessed. Specific algorithms of the Picard, Newton and averaging types are formally developed for this class of problem. The computational requirements associated with each algorithm are analysed and compared with the computational requirement of the method of matched asymptotic expansions. Approximate solutions to a linear and a non-linear problem are obtained by each method and compared.
NASA Technical Reports Server (NTRS)
Nguyen, D. T.; Al-Nasra, M.; Zhang, Y.; Baddourah, M. A.; Agarwal, T. K.; Storaasli, O. O.; Carmona, E. A.
1991-01-01
Several parallel-vector computational improvements to the unconstrained optimization procedure are described which speed up the structural analysis-synthesis process. A fast parallel-vector Choleski-based equation solver, pvsolve, is incorporated into the well-known SAP-4 general-purpose finite-element code. The new code, denoted PV-SAP, is tested for static structural analysis. Initial results on a four processor CRAY 2 show that using pvsolve reduces the equation solution time by a factor of 14-16 over the original SAP-4 code. In addition, parallel-vector procedures for the Golden Block Search technique and the BFGS method are developed and tested for nonlinear unconstrained optimization. A parallel version of an iterative solver and the pvsolve direct solver are incorporated into the BFGS method. Preliminary results on nonlinear unconstrained optimization test problems, using pvsolve in the analysis, show excellent parallel-vector performance indicating that these parallel-vector algorithms can be used in a new generation of finite-element based structural design/analysis-synthesis codes.
Bramble, J.H.; Pasciak, J.E.
1981-01-01
The linearized scalar potential formulation of the magnetostatic field problem is considered. The approach involves a reformulation of the continuous problem as a parametric boundary problem. By the introduction of a spherical interface and the use of spherical harmonics, the infinite boundary condition can also be satisfied in the parametric framework. The reformulated problem is discretized by finite element techniques and a discrete parametric problem is solved by conjugate gradient iteration. This approach decouples the problem in that only standard Neumann type elliptic finite element systems on separate bounded domains need be solved. The boundary conditions at infinity and the interface conditions are satisfied during the boundary parametric iteration.
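The conjugate gradient iteration used for the discrete parametric problem can be sketched in its generic form for a symmetric positive definite system (the model matrix below is a stand-in for the Neumann-type finite element systems of the paper):

```python
import numpy as np

# Bare conjugate gradient iteration for an SPD system A x = b.
def cg(A, b, tol=1e-10, maxit=500):
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(maxit):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p      # conjugate search direction update
        rs = rs_new
    return x

n = 100
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # SPD model matrix
b = np.ones(n)
x = cg(A, b)
print(np.linalg.norm(b - A @ x))
```

Each outer iteration of the boundary parametric procedure would wrap such an inner solve on a separate bounded subdomain.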
NASA Technical Reports Server (NTRS)
Gibson, J. S.; Rosen, I. G.
1986-01-01
An abstract approximation framework is developed for the finite and infinite time horizon discrete-time linear-quadratic regulator problem for systems whose state dynamics are described by a linear semigroup of operators on an infinite dimensional Hilbert space. The schemes included the framework yield finite dimensional approximations to the linear state feedback gains which determine the optimal control law. Convergence arguments are given. Examples involving hereditary and parabolic systems and the vibration of a flexible beam are considered. Spline-based finite element schemes for these classes of problems, together with numerical results, are presented and discussed.
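In the finite-dimensional approximations, the optimal feedback gain comes from a discrete-time algebraic Riccati equation; a minimal sketch iterates the Riccati difference equation to its stabilizing fixed point (the 2-state system and weights are invented for illustration, not one of the paper's hereditary or beam examples):

```python
import numpy as np

# Discrete-time LQR gain via Riccati difference-equation iteration.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)                # state weight
R = np.array([[1.0]])        # control weight

P = Q.copy()
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)    # Riccati difference equation
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
print(K)

# The resulting state feedback u = -K x is stabilizing.
print(np.max(np.abs(np.linalg.eigvals(A - B @ K))))
```

In the paper's setting, the same computation is carried out on a sequence of finite-dimensional approximating systems, and the resulting gains converge to the infinite-dimensional optimal gain.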
NASA Astrophysics Data System (ADS)
Diwaker; Chakraborty, Aniruddha
2015-12-01
In the present work we report a simple exact analytical solution to the curve crossing problem of two linear diabatic potentials by the transfer matrix method. Our problem assumes the crossing of two linear diabatic potentials which are coupled to each other by an arbitrary coupling (in contrast to linear potentials in the vicinity of crossing points), and for numerical purposes this arbitrary coupling is taken as a Gaussian coupling, which is further expressed as a collection of Dirac delta functions. We then calculate the transition probability from one diabatic potential to the other by the use of this method.
Newton's method for large bound-constrained optimization problems.
Lin, C.-J.; More, J. J.; Mathematics and Computer Science
1999-01-01
We analyze a trust region version of Newton's method for bound-constrained problems. Our approach relies on the geometry of the feasible set, not on the particular representation in terms of constraints. The convergence theory holds for linearly constrained problems and yields global and superlinear convergence without assuming either strict complementarity or linear independence of the active constraints. We also show that the convergence theory leads to an efficient implementation for large bound-constrained problems.
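For comparison, a bound-constrained minimization can be run with SciPy's generic L-BFGS-B solver (not the authors' trust-region Newton code); the Rosenbrock function with a bound that cuts off the unconstrained minimum is a standard illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Rosenbrock function; its unconstrained minimum (1, 1) is excluded
# by the bound x0 <= 0.8, so the solution sits on that bound.
def rosen(x):
    return (1 - x[0])**2 + 100*(x[1] - x[0]**2)**2

res = minimize(rosen, x0=[0.0, 0.0], method="L-BFGS-B",
               bounds=[(-2.0, 0.8), (-2.0, 2.0)])
print(res.x, res.fun)
```

At the solution the first bound is active (x0 = 0.8, x1 = 0.64), which is exactly the situation where the paper's convergence theory matters: it does not require strict complementarity at such active bounds.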
Illusion of Linearity in Geometry: Effect in Multiple-Choice Problems
ERIC Educational Resources Information Center
Vlahovic-Stetic, Vesna; Pavlin-Bernardic, Nina; Rajter, Miroslav
2010-01-01
The aim of this study was to examine if there is a difference in the performance on non-linear problems regarding age, gender, and solving situation, and whether the multiple-choice answer format influences students' thinking. A total of 112 students, aged 15-16 and 18-19, were asked to solve problems for which solutions based on proportionality…
Fast and Robust Newton strategies for non-linear geodynamics problems
NASA Astrophysics Data System (ADS)
Le Pourhiet, Laetitia; May, Dave
2014-05-01
Geodynamic problems are inherently non-linear, with sources of non-linearity arising from (i) the rheology, (ii) the boundary conditions and (iii) the choice of time integration scheme. We have developed a robust non-linear scheme utilizing PETSc's non-linear solver framework, SNES. Through the SNES framework, we have access to a wide range of globalization techniques. In this work we make extensive use of the line search implementation. We explored a wide range of different strategies for solving a variety of non-linear problems specific to geodynamics. In this presentation, we report on the most robust line-search techniques which we have found for the three classes of non-linearity identified above. Among the class of rheological non-linearities, the shear banding instability using visco-plastic flow rules is the most difficult to solve. Distinctly from its sibling, the elasto-plastic rheology, the visco-plastic rheology causes instantaneous shear localisation. As a result, decreasing the time step is not a viable approach to better capture the initial phase of localisation. Furthermore, return map algorithms based on a consistent tangent cannot be used, as the slope of the tangent is infinite. Obtaining a converged non-linear solution to this problem relies solely on the robustness of the non-linear solver. After presenting a Newton methodology suitable for rheological non-linearities, we examine the performance of this formulation when frictional sliding boundary conditions are introduced. We assess the robustness of the non-linear solver when applied to critical taper type problems.
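A minimal damped-Newton iteration with backtracking line search, of the kind SNES provides, can be sketched on a small algebraic system (the residual function here is a generic toy, not a geodynamic rheology):

```python
import numpy as np

# Toy nonlinear residual F(x) = 0 and its Jacobian.
def F(x):
    return np.array([x[0]**2 + x[1]**2 - 4.0,
                     np.exp(x[0]) + x[1] - 1.0])

def J(x):
    return np.array([[2*x[0], 2*x[1]],
                     [np.exp(x[0]), 1.0]])

x = np.array([-2.0, 2.0])
for _ in range(50):
    f = F(x)
    if np.linalg.norm(f) < 1e-12:
        break
    dx = np.linalg.solve(J(x), -f)       # Newton direction
    t = 1.0
    # Backtrack (globalization) until the residual norm decreases enough.
    while np.linalg.norm(F(x + t*dx)) > (1 - 1e-4*t)*np.linalg.norm(f):
        t *= 0.5
        if t < 1e-8:
            break
    x = x + t*dx

print(x, np.linalg.norm(F(x)))
```

Far from the solution the step is damped; near the solution t = 1 is accepted and the quadratic convergence of undamped Newton is recovered, which is the behaviour the line-search globalization is meant to preserve.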
Digital program for solving the linear stochastic optimal control and estimation problem
NASA Technical Reports Server (NTRS)
Geyser, L. C.; Lehtinen, B.
1975-01-01
A computer program is described which solves the linear stochastic optimal control and estimation (LSOCE) problem by using a time-domain formulation. The LSOCE problem is defined as that of designing controls for a linear time-invariant system which is disturbed by white noise in such a way as to minimize a performance index which is quadratic in state and control variables. The LSOCE problem and solution are outlined; brief descriptions are given of the solution algorithms, and complete descriptions of each subroutine, including usage information and digital listings, are provided. A test case is included, as well as information on the IBM 7090-7094 DCS time and storage requirements.
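By the separation principle, an LSOCE-style design splits into a state-feedback (LQR) gain and a Kalman filter gain, each obtained from an algebraic Riccati equation; a continuous-time sketch with SciPy (the system matrices and noise intensities are invented for illustration):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy linear time-invariant plant driven by white noise.
A = np.array([[0.0, 1.0], [0.0, -0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Q, R = np.eye(2), np.array([[1.0]])      # state and control weights
W, Vn = np.eye(2), np.array([[0.1]])     # process and measurement noise

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)          # optimal state-feedback gain
S = solve_continuous_are(A.T, C.T, W, Vn)
L = S @ C.T @ np.linalg.inv(Vn)          # Kalman filter gain
print(K, L)
```

Both the regulator loop A - BK and the estimator loop A - LC come out stable, so the combined controller-estimator minimizes the quadratic performance index for the stochastic plant.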
NASA Astrophysics Data System (ADS)
Schröder, Jörg; Keip, Marc-André
2012-08-01
The contribution addresses a direct micro-macro transition procedure for electromechanically coupled boundary value problems. The two-scale homogenization approach is implemented into a so-called FE2-method which allows for the computation of macroscopic boundary value problems in consideration of microscopic representative volume elements. The resulting formulation is applicable to the computation of linear as well as nonlinear problems. In the present paper, linear piezoelectric as well as nonlinear electrostrictive material behavior are investigated, where the constitutive equations on the microscale are derived from suitable thermodynamic potentials. The proposed direct homogenization procedure can also be applied for the computation of effective elastic, piezoelectric, dielectric, and electrostrictive material properties.
On high-continuity transfinite element formulations for linear-nonlinear transient thermal problems
NASA Technical Reports Server (NTRS)
Tamma, Kumar K.; Railkar, Sudhir B.
1987-01-01
This paper describes recent developments in the applicability of a hybrid transfinite element methodology with emphasis on high-continuity formulations for linear/nonlinear transient thermal problems. The proposed concepts furnish accurate temperature distributions and temperature gradients making use of a relatively smaller number of degrees of freedom; and the methodology is applicable to linear/nonlinear thermal problems. Characteristic features of the formulations are described in technical detail as the proposed hybrid approach combines the major advantages and modeling features of high-continuity thermal finite elements in conjunction with transform methods and classical Galerkin schemes. Several numerical test problems are evaluated and the results obtained validate the proposed concepts for linear/nonlinear thermal problems.
Some comparison of restarted GMRES and QMR for linear and nonlinear problems
Morgan, R.; Joubert, W.
1994-12-31
Comparisons are made between the following methods: QMR, including its transpose-free version; restarted GMRES; and a modified restarted GMRES that uses approximate eigenvectors to improve convergence. For some problems, the modified GMRES is competitive with or better than QMR in terms of the number of matrix-vector products. Also, the GMRES methods can be much better when several similar systems of linear equations must be solved, as in the case of nonlinear problems and ODE problems.
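A small experiment in this spirit can compare restarted GMRES(m) for two restart lengths while counting matrix-vector products (the diagonally dominant nonsymmetric test matrix is invented for illustration):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, gmres

n = 200
# Nonsymmetric, diagonally dominant tridiagonal test matrix.
A = diags([-1.2, 2.5, -0.8], [-1, 0, 1], shape=(n, n)).tocsr()
b = np.ones(n)

results = {}
for m in (5, 30):
    count = {"mv": 0}
    def matvec(v, A=A, count=count):
        count["mv"] += 1               # count every matrix-vector product
        return A @ v
    op = LinearOperator((n, n), matvec=matvec)
    x, info = gmres(op, b, restart=m, maxiter=1000)
    results[m] = (info, count["mv"], np.linalg.norm(b - A @ x))
    print(m, results[m])
```

Counting matrix-vector products rather than iterations is the fair metric used in the abstract, since one GMRES cycle of length m costs m products while each QMR step costs two.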
NASA Astrophysics Data System (ADS)
Abramov, A. A.; Yukhno, L. F.
2016-07-01
A nonlinear eigenvalue problem for a linear system of ordinary differential equations is examined on a semi-infinite interval. The problem is supplemented by nonlocal conditions specified by a Stieltjes integral. At infinity, the solution must be bounded. In addition to these basic conditions, the solution must satisfy certain redundant conditions, which are also nonlocal. A numerically stable method for solving such a singular overdetermined eigenvalue problem is proposed and analyzed. The essence of the method is that this overdetermined problem is replaced by an auxiliary problem consistent with all the above conditions.
Initial-value problem for a linear ordinary differential equation of noninteger order
Pskhu, Arsen V
2011-04-30
An initial-value problem for a linear ordinary differential equation of noninteger order with Riemann-Liouville derivatives is stated and solved. The initial conditions of the problem ensure that (by contrast with the Cauchy problem) it is uniquely solvable for an arbitrary set of parameters specifying the orders of the derivatives involved in the equation; these conditions are necessary for the equation under consideration. The problem is reduced to an integral equation; an explicit representation of the solution in terms of the Wright function is constructed. As a consequence of these results, necessary and sufficient conditions for the solvability of the Cauchy problem are obtained. Bibliography: 7 titles.
Fast pricing of American options by linear programming
Dempster, M.; Hutton, J.P.
1994-12-31
This paper describes a new method for computing the value of various American options on underlying dividend-bearing securities under standard Black-Scholes assumptions. It is well known that the problem of valuing the American put can be expressed as solving an abstract linear complementarity problem in terms of a parabolic partial differential operator. Generalizing earlier work of Cryer, Dempster and Borwein for elliptic operators, we show that the American put option value function is the solution of an abstract linear programme bounded by the payoff at exercise. Different American options require only different payoff function bounds. Standard finite difference or finite element approximations to the complementarity problem lead to ordinary linear programmes. We report promising computational results for several American option types using IBM's Optimization System Library on an RS6000/590.
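The finite-difference complementarity formulation can be sketched with implicit Euler time stepping and projected SOR enforcing the early-exercise bound at every node (parameter values and grid sizes are illustrative; the paper itself solves the linear programming formulation instead):

```python
import numpy as np

# American put under Black-Scholes: at each backward time step solve the
# LCP  (I - dt L)V >= rhs,  V >= payoff,  componentwise complementarity,
# here by projected SOR. All parameter values are illustrative.
K, r, sigma, T = 1.0, 0.05, 0.3, 1.0
M, Nt = 100, 100
S = np.linspace(0.0, 3.0, M + 1)
dt = T / Nt
payoff = np.maximum(K - S, 0.0)
V = payoff.copy()

i = np.arange(1, M)
sub = 0.5*dt*(sigma**2*i**2 - r*i)      # coefficient of V[j-1]
dia = 1.0 + dt*(sigma**2*i**2 + r)      # coefficient of V[j]
sup = 0.5*dt*(sigma**2*i**2 + r*i)      # coefficient of V[j+1]

for _ in range(Nt):
    rhs = V[1:-1].copy()
    Vn = V.copy()
    Vn[0], Vn[-1] = K, 0.0              # put boundary values
    for _ in range(100):                # projected SOR sweeps (omega = 1.5)
        for j in range(1, M):
            y = (rhs[j-1] + sub[j-1]*Vn[j-1] + sup[j-1]*Vn[j+1]) / dia[j-1]
            Vn[j] = max(payoff[j], Vn[j] + 1.5*(y - Vn[j]))
    V = Vn

print(V[np.searchsorted(S, K)])         # near-the-money American put value
```

The projection step max(payoff, ...) is what distinguishes the LCP solve from a plain linear solve: nodes where the bound is active form the early-exercise region.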
Averaging and Linear Programming in Some Singularly Perturbed Problems of Optimal Control
Gaitsgory, Vladimir; Rossomakhine, Sergey
2015-04-15
The paper aims at the development of an apparatus for the analysis and construction of near optimal solutions of singularly perturbed (SP) optimal control problems (that is, problems of optimal control of SP systems) considered on the infinite time horizon. We mostly focus on problems with time discounting criteria, but a possible extension of the results to periodic optimization problems is discussed as well. Our consideration is based on earlier results on averaging of SP control systems and on linear programming formulations of optimal control problems. The idea that we exploit is to first asymptotically approximate a given problem of optimal control of the SP system by a certain averaged optimal control problem, then reformulate this averaged problem as an infinite-dimensional linear programming (LP) problem, and then approximate the latter by semi-infinite LP problems. We show that the optimal solutions of these semi-infinite LP problems and their duals (which can be found with the help of a modification of available LP software) allow one to construct near optimal controls of the SP system. We demonstrate the construction with two numerical examples.
A strictly improving linear programming algorithm based on a series of Phase 1 problems
Leichner, S.A.; Dantzig, G.B.; Davis, J.W.
1992-04-01
When used on degenerate problems, the simplex method often takes a number of degenerate steps at a particular vertex before moving to the next. In theory (although rarely in practice), the simplex method can actually cycle at such a degenerate point. Instead of trying to modify the simplex method to avoid degenerate steps, we have developed a new linear programming algorithm that is completely impervious to degeneracy. This new method solves the Phase II problem of finding an optimal solution by solving a series of Phase I feasibility problems. Strict improvement is attained at each iteration in the Phase I algorithm, and the Phase II sequence of feasibility problems has linear convergence in the number of Phase I problems. When tested on the 30 smallest NETLIB linear programming test problems, the computational results for the new Phase II algorithm were over 15% faster than the simplex method; on some problems, it was almost two times faster, and on one problem it was four times faster.
An iterative method to solve the heat transfer problem under the non-linear boundary conditions
NASA Astrophysics Data System (ADS)
Zhu, Zhenggang; Kaliske, Michael
2012-02-01
The aim of the paper is to determine an approximation of the tangential matrix for solving the non-linear heat transfer problem. A numerical model of the strongly non-linear heat transfer problem, based on the theory of the finite element method, is presented. The tangential matrix of the Newton method is formulated. A method to solve heat transfer with non-linear boundary conditions, based on the secant slope of a reference function, is developed. The contraction mapping principle is introduced to verify the convergence of this method. The application of the method is shown by two examples. Numerical results of these examples are comparable to those obtained with the Newton method and the commercial software COMSOL for the heat transfer problem under radiative boundary conditions.
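As a rough sketch of the secant-slope idea on a scalar surface energy balance with a radiative boundary condition: the geometry, coefficients, and residual function below are illustrative assumptions, not the authors' finite element model.

```python
# Secant iteration for a surface energy balance under a radiative
# boundary condition: conduction to the surface balances radiation.
# All coefficients are illustrative assumptions.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant [W / m^2 K^4]

def residual(T):
    # Conduction from a 600 K interior minus radiation to 300 K surroundings.
    return 5.0 * (600.0 - T) - 0.8 * SIGMA * (T**4 - 300.0**4)

def secant(f, x0, x1, tol=1e-10, max_iter=100):
    # Classic secant method: the secant slope replaces the derivative f'.
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if abs(f1 - f0) < 1e-30:
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
        if abs(f1) < tol:
            break
    return x1

T_surface = secant(residual, 300.0, 600.0)
```

No Jacobian of the radiation term is ever formed, which is the appeal of secant-type iterations for strongly non-linear boundary conditions.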
Complementarity in Categorical Quantum Mechanics
NASA Astrophysics Data System (ADS)
Heunen, Chris
2012-07-01
We relate notions of complementarity in three layers of quantum mechanics: (i) von Neumann algebras, (ii) Hilbert spaces, and (iii) orthomodular lattices. Taking a more general categorical perspective of which the above are instances, we consider dagger monoidal kernel categories for (ii), so that (i) become (sub)endohomsets and (iii) become subobject lattices. By developing a `point-free' definition of copyability we link (i) commutative von Neumann subalgebras, (ii) classical structures, and (iii) Boolean subalgebras.
Multigrid for the Galerkin least squares method in linear elasticity: The pure displacement problem
Yoo, Jaechil
1996-12-31
Franca and Stenberg developed several Galerkin least squares methods for the solution of the problem of linear elasticity. That work concerned itself only with the error estimates of the method. It did not address the related problem of finding effective methods for the solution of the associated linear systems. In this work, we prove the convergence of a multigrid (W-cycle) method. This multigrid method is robust in that the convergence is uniform as the parameter ν goes to 1/2. Computational experiments are included.
Weighted linear least squares problem: an interval analysis approach to rank determination
Manteuffel, T. A.
1980-08-01
This is an extension of the work in SAND-80-0655 to the weighted linear least squares problem. Given the weighted linear least squares problem WAx ≈ Wb, where W is a diagonal weighting matrix, and bounds on the uncertainty in the elements of A, we define an interval matrix A^I that contains all perturbations of A due to these uncertainties and say that the problem is rank deficient if any member of A^I is rank deficient. It is shown that, if WA = QR is the QR decomposition of WA, then Q and R^-1 can be used to bound the rank of A^I. A modification of the Modified Gram-Schmidt QR decomposition yields an algorithm that implements these results. The extra arithmetic is O(MN). Numerical results show the algorithm to be effective on problems in which the weights vary greatly in magnitude.
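The role of the QR factors in the weighted problem can be sketched in a few lines of NumPy; the matrices and weights below are arbitrary illustrations, and none of the interval-rank machinery of the abstract is reproduced.

```python
import numpy as np

# Solve the weighted least squares problem  min || W(Ax - b) ||_2
# via the QR decomposition of the weighted matrix:  WA = QR.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3))
b = rng.standard_normal(8)
# Diagonal weights varying greatly in magnitude, as in the abstract.
W = np.diag([1.0, 1.0, 10.0, 10.0, 0.1, 0.1, 1.0, 5.0])

Q, R = np.linalg.qr(W @ A)              # thin QR of WA
x = np.linalg.solve(R, Q.T @ (W @ b))   # x = R^-1 Q^T W b

# Reference solution from a library least-squares solver.
x_ref = np.linalg.lstsq(W @ A, W @ b, rcond=None)[0]
```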
The Use of the Fourier Transform for Solving Linear Elasticity Problems
NASA Astrophysics Data System (ADS)
Kozubek, Tomas; Mocek, Lukas
2011-11-01
This paper deals with solving linear elasticity problems using a modified fictitious domain method and an effective solver based on the discrete Fourier transform and the Schur complement reduction in combination with the null space method. The main goal is to show step by step all ingredients of the numerical solution.
NASA Astrophysics Data System (ADS)
Mocek, Lukas; Kozubek, Tomas
2011-09-01
The paper deals with the numerical solution of elliptic boundary value problems for 2D linear elasticity using the fictitious domain method in combination with the discrete Fourier transform and the FETI domain decomposition. We briefly mention the theoretical background of these methods, introduce resulting solvers, and demonstrate their efficiency on model benchmarks.
High Order Finite Difference Methods, Multidimensional Linear Problems and Curvilinear Coordinates
NASA Technical Reports Server (NTRS)
Nordstrom, Jan; Carpenter, Mark H.
1999-01-01
Boundary and interface conditions are derived for high order finite difference methods applied to multidimensional linear problems in curvilinear coordinates. The boundary and interface conditions lead to conservative schemes and strict and strong stability provided that certain metric conditions are met.
Linear Integro-differential Schroedinger and Plate Problems Without Initial Conditions
Lorenzi, Alfredo
2013-06-15
Via Carleman's estimates we prove uniqueness and continuous dependence results for the temporal traces of solutions to overdetermined linear ill-posed problems related to Schroedinger and plate equation. The overdetermination is prescribed in an open subset of the (space-time) lateral boundary.
The problem of scheduling for the linear section of a single-track railway
NASA Astrophysics Data System (ADS)
Akimova, Elena N.; Gainanov, Damir N.; Golubev, Oleg A.; Kolmogortsev, Ilya D.; Konygin, Anton V.
2016-06-01
The paper is devoted to the problem of scheduling for the linear section of a single-track railway: how to organize the flow in both directions in the most efficient way. In this paper, the authors propose an algorithm for scheduling, examine the properties of this algorithm and perform the computational experiments.
NASA Astrophysics Data System (ADS)
Takabe, Satoshi; Hukushima, Koji
2014-04-01
The typical behavior of the linear programming (LP) problem is studied as a relaxation of the minimum vertex cover problem, which is a type of integer programming (IP) problem. To deal with LP and IP using statistical mechanics, a lattice-gas model on the Erdös-Rényi random graphs is analyzed by a replica method. It is found that the LP optimal solution is typically equal to that given by IP below the critical average degree c*=e in the thermodynamic limit. The critical threshold for LP = IP extends the previous result c = 1, and coincides with the replica symmetry-breaking threshold of the IP.
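The LP relaxation in question can be written down directly; the sketch below (my illustration, not the paper's replica-method analysis) uses SciPy on a triangle graph, where the relaxation attains the half-integral value 3/2 while the integer optimum is 2, so LP ≠ IP.

```python
import numpy as np
from scipy.optimize import linprog

# LP relaxation of minimum vertex cover:
#   minimize sum_v x_v  subject to  x_u + x_v >= 1 for each edge,  0 <= x_v <= 1
edges = [(0, 1), (1, 2), (0, 2)]   # triangle graph K3
n = 3
A_ub = np.zeros((len(edges), n))
for i, (u, v) in enumerate(edges):
    A_ub[i, u] = A_ub[i, v] = -1.0  # rewrite x_u + x_v >= 1 as -(x_u + x_v) <= -1
b_ub = -np.ones(len(edges))

res = linprog(c=np.ones(n), A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n)
# The LP optimum is 1.5 (x_v = 1/2 on every vertex), strictly below the
# integer optimum of 2: the relaxation is not tight on odd cycles.
```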
Geometric tools for solving the FDI problem for linear periodic discrete-time systems
NASA Astrophysics Data System (ADS)
Longhi, Sauro; Monteriù, Andrea
2013-07-01
This paper studies the problem of detecting and isolating faults in linear periodic discrete-time systems. The aim is to design an observer-based residual generator where each residual is sensitive to one fault, whilst remaining insensitive to the other faults that can affect the system. Making use of the geometric tools, and in particular of the outer observable subspace notion, the Fault Detection and Isolation (FDI) problem is formulated and necessary and solvability conditions are given. An algorithmic procedure is described to determine the solution of the FDI problem.
NASA Technical Reports Server (NTRS)
Kent, James; Holdaway, Daniel
2015-01-01
A number of geophysical applications require the use of the linearized version of the full model. One such example is in numerical weather prediction, where the tangent linear and adjoint versions of the atmospheric model are required for the 4DVAR inverse problem. The part of the model that represents the resolved scale processes of the atmosphere is known as the dynamical core. Advection, or transport, is performed by the dynamical core. It is a central process in many geophysical applications and is a process that often has a quasi-linear underlying behavior. However, over the decades since the advent of numerical modelling, significant effort has gone into developing many flavors of high-order, shape preserving, nonoscillatory, positive definite advection schemes. These schemes are excellent in terms of transporting the quantities of interest in the dynamical core, but they introduce nonlinearity through the use of nonlinear limiters. The linearity of the transport schemes used in Goddard Earth Observing System version 5 (GEOS-5), as well as a number of other schemes, is analyzed using a simple 1D setup. The linearized version of GEOS-5 is then tested using a linear third order scheme in the tangent linear version.
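The 1D linearity check described above is easy to reproduce: a plain first-order upwind step is exactly linear, while a minmod-limited step is not. The schemes below are generic textbook examples, not the GEOS-5 transport schemes.

```python
import numpy as np

def upwind_step(u, c=0.5):
    # First-order upwind advection step (periodic domain): linear in u.
    return u - c * (u - np.roll(u, 1))

def minmod(a, b):
    # Nonlinear slope limiter: zero where slopes disagree in sign.
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def limited_step(u, c=0.5):
    # Second-order step with a minmod-limited slope: the limiter breaks linearity.
    slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
    flux = u + 0.5 * (1.0 - c) * slope
    return u - c * (flux - np.roll(flux, 1))

x = np.linspace(0.0, 1.0, 64, endpoint=False)
u = np.sin(2 * np.pi * x)
v = np.where(x > 0.5, 1.0, 0.0)   # discontinuous profile

# Linearity test: does L(u + v) equal L(u) + L(v)?
lin_err = np.max(np.abs(upwind_step(u + v) - (upwind_step(u) + upwind_step(v))))
nonlin_err = np.max(np.abs(limited_step(u + v) - (limited_step(u) + limited_step(v))))
```

`lin_err` is at round-off level while `nonlin_err` is O(1) near the discontinuity, which is exactly the kind of behavior that complicates tangent linear and adjoint model development.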
The linearized characteristics method and its application to practical nonlinear supersonic problems
NASA Technical Reports Server (NTRS)
Ferri, Antonio
1952-01-01
The method of characteristics has been linearized by assuming that the flow field can be represented as a basic flow field determined by nonlinearized methods and a linearized superposed flow field that accounts for small changes of boundary conditions. The method has been applied to two-dimensional rotational flow where the basic flow is potential flow and to axially symmetric problems where conical flows have been used as the basic flows. In both cases the method allows the determination of the flow field to be simplified and the numerical work to be reduced to a few calculations. The calculations of axially symmetric flow can be simplified if tabulated values of some coefficients of the conical flow are obtained. The method has also been applied to slender bodies without symmetry and to some three-dimensional wing problems where two-dimensional flow can be used as the basic flow. Both problems were previously unsolved in the nonlinear-flow approximation.
Voila: A visual object-oriented iterative linear algebra problem solving environment
Edwards, H.C.; Hayes, L.J.
1994-12-31
Application of iterative methods to solve a large linear system of equations currently involves writing a program which calls iterative method subprograms from a large software package. These subprograms have complex interfaces which are difficult to use and even more difficult to program. A problem solving environment specifically tailored to the development and application of iterative methods is needed. This need will be fulfilled by Voila, a problem solving environment which provides a visual programming interface to object-oriented iterative linear algebra kernels. Voila will provide several quantum improvements over current iterative method problem solving environments. First, programming and applying iterative methods is considerably simplified through Voila's visual programming interface. Second, iterative method algorithm implementations are independent of any particular sparse matrix data structure through Voila's object-oriented kernels. Third, the compile-link-debug process is eliminated as Voila operates as an interpreter.
NASA Astrophysics Data System (ADS)
Gallagher, Kerry; Sambridge, Malcolm; Drijkoningen, Guy
In providing a method for solving non-linear optimization problems, Monte Carlo techniques avoid the need for linearization but, in practice, are often prohibitive because of the large number of models that must be considered. A new class of methods known as Genetic Algorithms have recently been devised in the field of Artificial Intelligence. We outline the basic concept of genetic algorithms and discuss three examples. We show that, in locating an optimal model, the new technique is far superior in performance to Monte Carlo techniques in all cases considered. However, Monte Carlo integration is still regarded as an effective method for the subsequent model appraisal.
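A minimal genetic algorithm on a toy fitness function (OneMax: count the 1-bits) shows the basic ingredients the abstract alludes to, namely selection, crossover, and mutation. Everything here is a generic illustration, not one of the paper's three examples.

```python
import random

def genetic_algorithm(n_bits=20, pop_size=30, generations=100, seed=1):
    # Maximize the 'OneMax' fitness: the number of 1-bits in the string.
    rng = random.Random(seed)
    fitness = lambda ind: sum(ind)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            # Tournament selection of two parents (size-3 tournaments).
            p1 = max(rng.sample(pop, 3), key=fitness)
            p2 = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, n_bits)        # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(n_bits):               # bit-flip mutation
                if rng.random() < 1.0 / n_bits:
                    child[i] ^= 1
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

best = genetic_algorithm()
```

Unlike pure Monte Carlo sampling, which would need on the order of 2^20 trials on this landscape, selection pressure concentrates the population near the optimum within a few dozen generations.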
Stable computation of search directions for near-degenerate linear programming problems
Hough, P.D.
1997-03-01
In this paper, we examine stability issues that arise when computing search directions (Δx, Δy, Δs) for a primal-dual path-following interior point method for linear programming. The dual step Δy can be obtained by solving a weighted least-squares problem for which the weight matrix becomes extremely ill-conditioned near the boundary of the feasible region. Hough and Vavasis proposed using a type of complete orthogonal decomposition (the COD algorithm) to solve such a problem and presented stability results. The work presented here addresses the stable computation of the primal step Δx and the change in the dual slacks Δs. These directions can be obtained in a straightforward manner, but near-degeneracy in the linear programming instance introduces ill-conditioning which can cause numerical problems in this approach. Therefore, we propose a new method of computing Δx and Δs. More specifically, this paper describes an orthogonal projection algorithm that extends the COD method. Unlike other algorithms, this method is stable for interior point methods without assuming nondegeneracy in the linear programming instance. Thus, it is more general than other algorithms on near-degenerate problems.
Linear Stability of Elliptic Lagrangian Solutions of the Planar Three-Body Problem via Index Theory
NASA Astrophysics Data System (ADS)
Hu, Xijun; Long, Yiming; Sun, Shanzhong
2014-09-01
It is well known that the linear stability of Lagrangian elliptic equilateral triangle homographic solutions in the classical planar three-body problem depends on the mass parameter β and the eccentricity e. We are not aware of any existing analytical method which relates the linear stability of these solutions to the two parameters directly in the full rectangle [0, 9] × [0, 1), aside from perturbation methods for e > 0 small enough, blow-up techniques for e sufficiently close to 1, and numerical studies. In this paper, we introduce a new rigorous analytical method to study the linear stability of these solutions in terms of the two parameters in the full (β, e) range [0, 9] × [0, 1) via the ω-index theory of symplectic paths for ω belonging to the unit circle of the complex plane, and the theory of linear operators. After establishing the ω-index decreasing property of the solutions in β for fixed e, we prove the existence of three curves located from left to right in the rectangle [0, 9] × [0, 1), among which two are -1 degeneracy curves and the third one is the right envelope curve of the ω-degeneracy curves, and show that the linear stability pattern of such elliptic Lagrangian solutions changes if and only if the parameter (β, e) passes through each of these three curves. Interesting symmetries of these curves are also observed. The linear stability of the singular case when the eccentricity e approaches 1 is also analyzed in detail.
NASA Astrophysics Data System (ADS)
Vasant, P.; Ganesan, T.; Elamvazuthi, I.
2012-11-01
Fairly reasonable results were obtained for non-linear engineering problems using optimization techniques such as neural networks, genetic algorithms, and fuzzy logic independently in the past. Increasingly, hybrid techniques are being used to solve non-linear problems to obtain better output. This paper discusses the use of a neuro-genetic hybrid technique to optimize geological structure mapping, which is known as seismic survey. It involves the minimization of an objective function subject to geophysical and operational constraints. In this work, the optimization was initially performed using genetic programming, followed by hybrid neuro-genetic programming approaches. Comparative studies and analysis were then carried out on the optimized results. The results indicate that the hybrid neuro-genetic technique produced better results compared to the stand-alone genetic programming method.
Evaluation of boundary element methods for the EEG forward problem: Effect of linear interpolation
Schlitt, H.A.; Heller, L.; Best, E.; Ranken, D.M.; Aaron, R.
1995-01-01
We implement the approach for solving the boundary integral equation for the electroencephalography (EEG) forward problem proposed by de Munck, in which the electric potential varies linearly across each plane triangle of the mesh. Previous solutions have assumed the potential is constant across an element. We calculate the electric potential and systematically investigate the effect of different mesh choices and dipole locations by using a three concentric sphere head model for which there is an analytic solution. Implementing the linear interpolation approximation results in errors that are approximately half those of the same mesh when the potential is assumed to be constant, and provides a reliable method for solving the problem. 12 refs., 8 figs.
Coelho, Clarimar José; Galvão, Roberto K H; de Araújo, Mário César U; Pimentel, Maria Fernanda; da Silva, Edvan Cirino
2003-01-01
A novel strategy for the optimization of wavelet transforms with respect to the statistics of the data set in multivariate calibration problems is proposed. The optimization follows a linear semi-infinite programming formulation, which does not display local maxima problems and can be reproducibly solved with modest computational effort. After the optimization, a variable selection algorithm is employed to choose a subset of wavelet coefficients with minimal collinearity. The selection allows the building of a calibration model by direct multiple linear regression on the wavelet coefficients. In an illustrative application involving the simultaneous determination of Mn, Mo, Cr, Ni, and Fe in steel samples by ICP-AES, the proposed strategy yielded more accurate predictions than PCR, PLS, and nonoptimized wavelet regression. PMID:12767151
Solution of second order quasi-linear boundary value problems by a wavelet method
Zhang, Lei; Zhou, Youhe; Wang, Jizeng
2015-03-10
A wavelet Galerkin method based on expansions of Coiflet-like scaling function bases is applied to solve second order quasi-linear boundary value problems, which represent a class of typical nonlinear differential equations. Two types of typical engineering problems are selected as test examples: one is about nonlinear heat conduction and the other is on bending of elastic beams. Numerical results are obtained by the proposed wavelet method. Through comparison with relevant analytical solutions as well as solutions obtained by other methods, we find that the method shows better efficiency and accuracy than several others, and the rate of convergence can even reach order 5.8.
The conjugate gradient method for linear ill-posed problems with operator perturbations
NASA Astrophysics Data System (ADS)
Plato, Robert
1999-03-01
We consider an ill-posed problem Ta = f* in Hilbert spaces and suppose that the linear bounded operator T is approximately available, with a known estimate for the operator perturbation at the solution. As a numerical scheme the CGNR method is considered, that is, the classical method of conjugate gradients by Hestenes and Stiefel applied to the associated normal equations. Two a posteriori stopping rules are introduced, and convergence results are provided for the corresponding approximations. As a specific application, a parameter estimation problem is considered.
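The CGNR iteration itself, conjugate gradients applied to the normal equations A^T A x = A^T b without ever forming A^T A, can be sketched as follows. The stopping rule here is a plain residual tolerance, not the a posteriori rules analyzed in the paper.

```python
import numpy as np

def cgnr(A, b, max_iter=200, tol=1e-12):
    # Conjugate gradients on the normal equations A^T A x = A^T b,
    # using only products with A and A^T (A^T A is never formed).
    x = np.zeros(A.shape[1])
    r = b - A @ x          # residual of the original system
    s = A.T @ r            # residual of the normal equations
    p = s.copy()
    gamma = s @ s
    for _ in range(max_iter):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if np.sqrt(gamma_new) < tol:
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
x = cgnr(A, b)
x_ref = np.linalg.lstsq(A, b, rcond=None)[0]
```

For ill-posed problems the iteration count itself acts as the regularization parameter, which is why the choice of stopping rule is the delicate point the abstract addresses.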
NASA Technical Reports Server (NTRS)
Gibson, J. S.; Rosen, I. G.
1985-01-01
In the optimal linear quadratic regulator problem for finite dimensional systems, the method known as an alpha-shift can be used to produce a closed-loop system whose spectrum lies to the left of some specified vertical line; that is, a closed-loop system with a prescribed degree of stability. This paper treats the extension of the alpha-shift to hereditary systems. In infinite dimensions, the shift can be accomplished by adding alpha times the identity to the open-loop semigroup generator and then solving an optimal regulator problem. However, this approach does not work with a new approximation scheme for hereditary control problems recently developed by Kappel and Salamon. Since this scheme is among the best to date for the numerical solution of the linear regulator problem for hereditary systems, an alternative method for shifting the closed-loop spectrum is needed. An alpha-shift technique that can be used with the Kappel-Salamon approximation scheme is developed. Both the continuous-time and discrete-time problems are considered. A numerical example which demonstrates the feasibility of the method is included.
NASA Technical Reports Server (NTRS)
Gibson, J. S.; Rosen, I. G.
1987-01-01
In the optimal linear quadratic regulator problem for finite dimensional systems, the method known as an alpha-shift can be used to produce a closed-loop system whose spectrum lies to the left of some specified vertical line; that is, a closed-loop system with a prescribed degree of stability. This paper treats the extension of the alpha-shift to hereditary systems. In infinite dimensions, the shift can be accomplished by adding alpha times the identity to the open-loop semigroup generator and then solving an optimal regulator problem. However, this approach does not work with a new approximation scheme for hereditary control problems recently developed by Kappel and Salamon. Since this scheme is among the best to date for the numerical solution of the linear regulator problem for hereditary systems, an alternative method for shifting the closed-loop spectrum is needed. An alpha-shift technique that can be used with the Kappel-Salamon approximation scheme is developed. Both the continuous-time and discrete-time problems are considered. A numerical example which demonstrates the feasibility of the method is included.
TOPSIS approach to linear fractional bi-level MODM problem based on fuzzy goal programming
NASA Astrophysics Data System (ADS)
Dey, Partha Pratim; Pramanik, Surapati; Giri, Bibhas C.
2014-07-01
The objective of this paper is to present a technique for order preference by similarity to ideal solution (TOPSIS) algorithm for the linear fractional bi-level multi-objective decision-making problem. TOPSIS is used to yield the most appropriate alternative from a finite set of alternatives based upon simultaneous shortest distance from the positive ideal solution (PIS) and furthest distance from the negative ideal solution (NIS). In the proposed approach, first, the PIS and NIS for both levels are determined and the membership functions of the distance functions from PIS and NIS of both levels are formulated. A linearization technique is used to transform the non-linear membership functions into equivalent linear membership functions, which are then normalized. A possible relaxation on decisions for both levels is considered for avoiding decision deadlock. Then fuzzy goal programming models are developed to achieve a compromise solution of the problem by minimizing the negative deviational variables. A distance function is used to identify the optimal compromise solution. The paper presents a hybrid model of TOPSIS and fuzzy goal programming. An illustrative numerical example is solved to clarify the proposed approach. Finally, to demonstrate the efficiency of the proposed approach, the obtained solution is compared with solutions derived from existing methods in the literature.
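The core distance computation of TOPSIS (without the bi-level and fuzzy goal programming machinery) fits in a short function; the decision matrix and weights below are made-up illustrations, and all criteria are assumed to be benefit-type.

```python
import numpy as np

def topsis(X, w):
    # X: alternatives x criteria (benefit-type), w: criteria weights summing to 1.
    V = X / np.linalg.norm(X, axis=0) * w        # vector-normalize, then weight
    pis, nis = V.max(axis=0), V.min(axis=0)      # positive / negative ideal solutions
    d_pos = np.linalg.norm(V - pis, axis=1)      # distance to PIS
    d_neg = np.linalg.norm(V - nis, axis=1)      # distance to NIS
    return d_neg / (d_pos + d_neg)               # relative closeness in [0, 1]

X = np.array([[7.0, 9.0, 8.0],
              [8.0, 7.0, 8.0],
              [9.0, 9.0, 9.0],   # dominates every other alternative
              [6.0, 7.0, 8.0]])
w = np.array([0.5, 0.3, 0.2])
closeness = topsis(X, w)
best = int(np.argmax(closeness))
```

The alternative that dominates all others coincides with the PIS, so its closeness is exactly 1; ranking alternatives by closeness balances nearness to the PIS against remoteness from the NIS simultaneously, which is the defining feature of TOPSIS.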
Lorber, A.A.; Carey, G.F.; Bova, S.W.; Harle, C.H.
1996-12-31
The connection between the solution of linear systems of equations by iterative methods and explicit time stepping techniques is used to accelerate to steady state the solution of ODE systems arising from discretized PDEs which may involve either physical or artificial transient terms. Specifically, a class of Runge-Kutta (RK) time integration schemes with extended stability domains has been used to develop recursion formulas which lead to accelerated iterative performance. The coefficients for the RK schemes are chosen based on the theory of Chebyshev iteration polynomials in conjunction with a local linear stability analysis. We refer to these schemes as Chebyshev Parameterized Runge Kutta (CPRK) methods. CPRK methods of one to four stages are derived as functions of the parameters which describe an ellipse E that the stability domain of the methods is known to contain. Of particular interest are two-stage, first-order CPRK and four-stage, first-order methods. It is found that the former method can be identified with any two-stage RK method through the correct choice of parameters. The latter method is found to have a wide range of stability domains, with a maximum extension of 32 along the real axis. Recursion performance results are presented for a model linear convection-diffusion problem as well as non-linear fluid flow problems discretized by both finite-difference and finite-element methods.
A Conforming Multigrid Method for the Pure Traction Problem of Linear Elasticity: Mixed Formulation
NASA Technical Reports Server (NTRS)
Lee, Chang-Ock
1996-01-01
A multigrid method using conforming P-1 finite element is developed for the two-dimensional pure traction boundary value problem of linear elasticity. The convergence is uniform even as the material becomes nearly incompressible. A heuristic argument for acceleration of the multigrid method is discussed as well. Numerical results with and without this acceleration as well as performance estimates on a parallel computer are included.
A linear-quadratic-Gaussian control problem with innovations-feedthrough solution
NASA Technical Reports Server (NTRS)
Platzman, L. K.; Johnson, T. L.
1976-01-01
The structure of the separation-theorem solution to the standard linear-quadratic-Gaussian (LQG) control problem does not involve direct output feedback as a consequence of the form of the performance index. It is shown that the performance index may be generalized in a natural fashion so that the optimal control law involves output feedback or, equivalently, innovations feedthrough (IF). Applications where this formulation may be advantageous are indicated through an examination of properties of the IF control law.
NASA Technical Reports Server (NTRS)
Ito, K.; Teglas, R.
1984-01-01
The numerical scheme based on the Legendre-tau approximation is proposed to approximate the feedback solution to the linear quadratic optimal control problem for hereditary differential systems. The convergence property is established using Trotter ideas. The method yields very good approximations at low orders and provides an approximation technique for computing closed-loop eigenvalues of the feedback system. A comparison with existing methods (based on averaging and spline approximations) is made.
NASA Technical Reports Server (NTRS)
Ito, Kazufumi; Teglas, Russell
1987-01-01
The numerical scheme based on the Legendre-tau approximation is proposed to approximate the feedback solution to the linear quadratic optimal control problem for hereditary differential systems. The convergence property is established using Trotter ideas. The method yields very good approximations at low orders and provides an approximation technique for computing closed-loop eigenvalues of the feedback system. A comparison with existing methods (based on averaging and spline approximations) is made.
Robust parallel iterative solvers for linear and least-squares problems, Final Technical Report
Saad, Yousef
2014-01-16
The primary goal of this project is to study and develop robust iterative methods for solving linear systems of equations and least squares systems. The focus of the Minnesota team is on algorithm development, robustness issues, and on tests and validation of the methods on realistic problems. 1. The project began with an investigation of how to practically update a preconditioner obtained from an ILU-type factorization, when the coefficient matrix changes. 2. We investigated strategies to improve robustness in parallel preconditioners in a specific case of a PDE with discontinuous coefficients. 3. We explored ways to adapt standard preconditioners for solving linear systems arising from the Helmholtz equation. These are often difficult linear systems to solve by iterative methods. 4. We have also worked on purely theoretical issues related to the analysis of Krylov subspace methods for linear systems. 5. We developed an effective strategy for performing ILU factorizations for the case when the matrix is highly indefinite. The strategy uses shifting in some optimal way. The method was extended to the solution of Helmholtz equations by using complex shifts, yielding very good results in many cases. 6. We addressed the difficult problem of preconditioning sparse systems of equations on GPUs. 7. A by-product of the above work is a software package consisting of an iterative solver library for GPUs based on CUDA. This was made publicly available. It was the first such library that offers complete iterative solvers for GPUs. 8. We considered another form of ILU which blends coarsening techniques from multigrid with algebraic multilevel methods. 9. We have released a new version of our parallel solver, called pARMS [the new version is version 3]. As part of this we have tested the code in complex settings, including the solution of Maxwell and Helmholtz equations and for a problem of crystal growth. 10. As an application of polynomial preconditioning we considered the
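The recurring theme of ILU preconditioning for Krylov solvers can be demonstrated with SciPy's incomplete LU factorization on a 2D Laplacian; this is a generic sketch, not the pARMS code, and the grid, tolerances, and right-hand side are illustrative choices.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spilu, gmres, LinearOperator

# 2D Poisson matrix (5-point Laplacian) on an n x n grid.
n = 20
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
I = sp.identity(n)
A = sp.csc_matrix(sp.kron(I, T) + sp.kron(T, I))
b = np.ones(A.shape[0])

# Incomplete LU factorization used as a preconditioner M ~ A^{-1}.
ilu = spilu(A, drop_tol=1e-4, fill_factor=10)
M = LinearOperator(A.shape, ilu.solve)

x, info = gmres(A, b, M=M)   # info == 0 signals convergence
```

The drop tolerance and fill factor trade factorization cost against preconditioner quality, which is exactly the knob that updating and robustness strategies such as those in items 1 and 2 try to manage.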
LQR problem of linear discrete time systems with nonnegative state constraints
NASA Astrophysics Data System (ADS)
Kostova, S.; Imsland, L.; Ivanov, I.
2015-10-01
In the paper the infinite-horizon Linear Quadratic Regulator (LQR) problem for linear discrete time systems with non-negative state constraints is presented. Such constraints determine the class of positive systems, which have broad applications in fields such as economics, biology, ecology, and ICT. The standard infinite-horizon LQR-optimal state feedback law is used for solving the problem. In order to guarantee the nonnegativity of the system states, we define the admissible set of initial states. It is proven that, for each initial state from this set, the nonnegative orthant is an invariant set. Two cases are considered: first, when the initial state belongs to the admissible set, and second, when it does not. Procedures for solving the problem are given for both cases. In the second case we use a dual-mode approach: the first mode is applied until the state trajectory enters the admissible set, after which the procedure for the first case is used. Illustrative examples are given for both cases.
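The standard infinite-horizon LQR feedback the paper builds on comes from the discrete algebraic Riccati equation; a fixed-point iteration for it, on an illustrative discrete double-integrator system, looks like this (the nonnegative-state constraint handling itself is not reproduced here).

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    # Solve the discrete algebraic Riccati equation by backward iteration
    #   P <- Q + A'PA - A'PB (R + B'PB)^{-1} B'PA
    # and return the optimal state feedback gain K, with u = -Kx.
    P = Q.copy()
    for _ in range(iters):
        BtPA = B.T @ P @ A
        P = Q + A.T @ P @ A - BtPA.T @ np.linalg.solve(R + B.T @ P @ B, BtPA)
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    return K, P

# Illustrative discrete double integrator with time step 0.1.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

K, P = dlqr(A, B, Q, R)
closed_loop = A - B @ K
```

The closed-loop matrix A - BK is Schur stable (spectral radius below 1); the paper's admissible set is then the set of initial states whose closed-loop trajectories stay in the nonnegative orthant.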
Algorithm 937: MINRES-QLP for Symmetric and Hermitian Linear Equations and Least-Squares Problems.
Choi, Sou-Cheng T; Saunders, Michael A
2014-02-01
We describe algorithm MINRES-QLP and its FORTRAN 90 implementation for solving symmetric or Hermitian linear systems or least-squares problems. If the system is singular, MINRES-QLP computes the unique minimum-length solution (also known as the pseudoinverse solution), which generally eludes MINRES. In all cases, it overcomes a potential instability in the original MINRES algorithm. A positive-definite pre-conditioner may be supplied. Our FORTRAN 90 implementation illustrates a design pattern that allows users to make problem data known to the solver but hidden and secure from other program units. In particular, we circumvent the need for reverse communication. Example test programs input and solve real or complex problems specified in Matrix Market format. While we focus here on a FORTRAN 90 implementation, we also provide and maintain MATLAB versions of MINRES and MINRES-QLP. PMID:25328255
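SciPy ships a MINRES implementation with the same calling shape as the MATLAB versions mentioned above; a minimal symmetric indefinite example (plain MINRES, not MINRES-QLP, and not the FORTRAN 90 code described here) is:

```python
import numpy as np
from scipy.sparse.linalg import minres

# A symmetric *indefinite* but nonsingular matrix: strictly diagonally
# dominant tridiagonal with mixed-sign diagonal. CG would not apply here,
# but MINRES handles symmetric indefinite systems.
d = np.array([4.0, 3.0, 2.5, -3.0, -4.0])
A = np.diag(d) + np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
b = np.ones(5)

x, info = minres(A, b)   # info == 0 signals convergence
```

For singular systems plain MINRES generally does not return the minimum-length solution; that is precisely the gap MINRES-QLP closes.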
A Linear Time Algorithm for the Minimum Spanning Caterpillar Problem for Bounded Treewidth Graphs
NASA Astrophysics Data System (ADS)
Dinneen, Michael J.; Khosravani, Masoud
We consider the Minimum Spanning Caterpillar Problem (MSCP) in a graph where each edge has two costs, spine (path) cost and leaf cost, depending on whether it is used as a spine or a leaf edge. The goal is to find a spanning caterpillar in which the sum of its edge costs is the minimum. We show that the problem has a linear time algorithm when a tree decomposition of the graph is given as part of the input. Despite the fast growing constant factor of the time complexity of our algorithm, it is still practical and efficient for some classes of graphs, such as outerplanar, series-parallel (K 4 minor-free), and Halin graphs. We also briefly explain how one can modify our algorithm to solve the Minimum Spanning Ring Star and the Dual Cost Minimum Spanning Tree Problems.
IESIP - AN IMPROVED EXPLORATORY SEARCH TECHNIQUE FOR PURE INTEGER LINEAR PROGRAMMING PROBLEMS
NASA Technical Reports Server (NTRS)
Fogle, F. R.
1994-01-01
IESIP, an Improved Exploratory Search Technique for Pure Integer Linear Programming Problems, addresses the problem of optimizing an objective function of one or more variables subject to a set of confining functions or constraints by a method called discrete optimization or integer programming. Integer programming is based on a specific form of the general linear programming problem in which all variables in the objective function and all variables in the constraints are integers. While more difficult, integer programming is required for accuracy when modeling systems with small numbers of components such as the distribution of goods, machine scheduling, and production scheduling. IESIP establishes a new methodology for solving pure integer programming problems by utilizing a modified version of the univariate exploratory move developed by Robert Hooke and T.A. Jeeves. IESIP also takes some of its technique from the greedy procedure and the idea of unit neighborhoods. A rounding scheme uses the continuous solution found by traditional methods (simplex or other suitable technique) and creates a feasible integer starting point. The Hooke and Jeeves exploratory search is modified to accommodate integers and constraints and is then employed to determine an optimal integer solution from the feasible starting solution. The user-friendly IESIP allows for rapid solution of problems up to 10 variables in size (limited by DOS allocation). Sample problems compare IESIP solutions with the traditional branch-and-bound approach. IESIP is written in Borland's TURBO Pascal for IBM PC series computers and compatibles running DOS. Source code and an executable are provided. The main memory requirement for execution is 25K. This program is available on a 5.25 inch 360K MS DOS format diskette. IESIP was developed in 1990. IBM is a trademark of International Business Machines. TURBO Pascal is registered by Borland International.
Boundary parametric approximation to the linearized scalar potential magnetostatic field problem
Bramble, J.H.; Pasciak, J.E.
1984-01-01
We consider the linearized scalar potential formulation of the magnetostatic field problem in this paper. Our approach involves a reformulation of the continuous problem as a parametric boundary problem. By the introduction of a spherical interface and the use of spherical harmonics, the infinite boundary conditions can also be satisfied in the parametric framework. That is, the field in the exterior of a sphere is expanded in a harmonic series of eigenfunctions for the exterior harmonic problem. The approach is essentially a finite element method coupled with a spectral method via a boundary parametric procedure. The reformulated problem is discretized by finite element techniques, which lead to a discrete parametric problem that can be solved by well-conditioned iteration involving only the solution of decoupled Neumann-type elliptic finite element systems and L² projection onto subspaces of spherical harmonics. The error and stability estimates given show exponential convergence in the degree of the spherical harmonics and optimal-order convergence with respect to the finite element approximation for the resulting fields in L². 24 references.
Bohrian Complementarity in the Light of Kantian Teleology
NASA Astrophysics Data System (ADS)
Pringe, Hernán
2014-03-01
The Kantian influences on Bohr's thought and the relationship between the perspective of complementarity in physics and in biology seem at first sight completely unrelated issues. However, the goal of this work is to show their intimate connection. We shall see that Bohr's views on biology shed light on Kantian elements of his thought, which enables a better understanding of his complementary interpretation of quantum theory. For this purpose, we shall begin by discussing Bohr's views on the analogies concerning the epistemological situation in biology and in physics. Later, we shall compare the Bohrian and the Kantian approaches to the science of life in order to show their close connection. On this basis, we shall finally turn to the issue of complementarity in quantum theory in order to assess what we can learn about the epistemological problems in the quantum realm from a consideration of Kant's views on teleology.
Acceleration of multiple solution of a boundary value problem involving a linear algebraic system
NASA Astrophysics Data System (ADS)
Gazizov, Talgat R.; Kuksenko, Sergey P.; Surovtsev, Roman S.
2016-06-01
Multiple solution of a boundary value problem that involves a linear algebraic system is considered. A new approach to accelerating the solution is proposed. The approach uses the structure of the linear system matrix. In particular, the location of the entries in the rightmost columns and bottom rows of the matrix, which vary as the problem is computed over a range of parameters, is exploited to apply block LU decomposition. Application of the approach is illustrated by the multiple computation of the capacitance matrix by the method of moments used in numerical electromagnetics. Expressions for an analytic estimation of the acceleration are presented. Results of numerical experiments for the solution of 100 linear systems with matrix orders of 1000, 2000, and 3000 and different ratios of varied to constant entries show that block LU decomposition can be effective for the multiple solution of linear systems. The speedup compared with pointwise LU factorization increases (up to 15) for a larger number and order of the considered systems with a lower number of varied entries.
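The block-elimination idea can be sketched with a Schur complement (a small dense stand-in, not the method-of-moments code): factor the constant leading block once, then each varied system requires only solves against that factorization plus a small trailing-block solve.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n, k = 6, 2                      # k trailing rows/columns vary between solves
# Fixed leading block, shifted toward diagonal dominance for a safe example.
A11 = rng.standard_normal((n - k, n - k)) + (n - k) * np.eye(n - k)
lu11 = lu_factor(A11)            # factor the constant block once

def solve_varied(A12, A21, A22, b1, b2):
    # Block elimination: only the small Schur complement changes per solve.
    Y = lu_solve(lu11, A12)                          # A11^{-1} A12
    S = A22 - A21 @ Y                                # Schur complement
    x2 = np.linalg.solve(S, b2 - A21 @ lu_solve(lu11, b1))
    x1 = lu_solve(lu11, b1) - Y @ x2
    return np.concatenate([x1, x2])

# Check one varied system against a direct dense solve.
A12 = rng.standard_normal((n - k, k))
A21 = rng.standard_normal((k, n - k))
A22 = rng.standard_normal((k, k)) + k * np.eye(k)
b = rng.standard_normal(n)
A = np.block([[A11, A12], [A21, A22]])
x = solve_varied(A12, A21, A22, b[:n - k], b[n - k:])
assert np.allclose(A @ x, b)
```

Reusing `lu11` across many varied trailing blocks is what yields the speedup reported in the abstract; only the k-by-k Schur system is refactored per solve.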
Reintroducing the Concept of Complementarity into Psychology
Wang, Zheng; Busemeyer, Jerome
2015-01-01
Central to quantum theory is the concept of complementarity. In this essay, we argue that complementarity is also central to the emerging field of quantum cognition. We review the concept, its historical roots in psychology, and its development in quantum physics and offer examples of how it can be used to understand human cognition. The concept of complementarity provides a valuable and fresh perspective for organizing human cognitive phenomena and for understanding the nature of measurements in psychology. In turn, psychology can provide valuable new evidence and theoretical ideas to enrich this important scientific concept. PMID:26640454
Analytical solution of boundary integral equations for 2-D steady linear wave problems
NASA Astrophysics Data System (ADS)
Chuang, J. M.
2005-10-01
Based on the Fourier transform, the analytical solution of boundary integral equations formulated for the complex velocity of a 2-D steady linear surface flow is derived. It has been found that before the radiation condition is imposed, free waves appear both far upstream and downstream. In order to cancel the free waves in far upstream regions, the eigensolution of a specific eigenvalue, which satisfies the homogeneous boundary integral equation, is found and superposed to the analytical solution. An example, a submerged vortex, is used to demonstrate the derived analytical solution. Furthermore, an analytical approach to imposing the radiation condition in the numerical solution of boundary integral equations for 2-D steady linear wave problems is proposed.
Variable-permittivity linear inverse problem for the H_z-polarized case
NASA Technical Reports Server (NTRS)
Moghaddam, M.; Chew, W. C.
1993-01-01
The H_z-polarized inverse problem has rarely been studied before due to the complicated way in which the unknown permittivity appears in the wave equation. This problem is equivalent to the acoustic inverse problem with variable density. We have recently reported the solution to the nonlinear variable-permittivity H_z-polarized inverse problem using the Born iterative method. Here, the linear inverse problem is solved for permittivity (epsilon) and permeability (mu) using a different approach which is an extension of the basic ideas of diffraction tomography (DT). The key to solving this problem is to utilize frequency diversity to obtain the required independent measurements. The receivers are assumed to be in the far field of the object, and plane wave incidence is also assumed. It is assumed that the scatterer is weak, so that the Born approximation can be used to arrive at a relationship between the measured pressure field and two terms related to the spatial Fourier transform of the two unknowns, epsilon and mu. The term involving permeability corresponds to monopole scattering and that for permittivity to dipole scattering. Measurements at several frequencies are used and a least squares problem is solved to reconstruct epsilon and mu. It is observed that the low spatial frequencies in the spectra of epsilon and mu produce inaccuracies in the results. Hence, a regularization method is devised to remove this problem. Several results are shown. Low contrast objects for which the above analysis holds are used to show that good reconstructions are obtained for both permittivity and permeability after regularization is applied.
On Linear Instability and Stability of the Rayleigh-Taylor Problem in Magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Jiang, Fei; Jiang, Song
2015-12-01
We investigate the stabilizing effects of the magnetic fields in the linearized magnetic Rayleigh-Taylor (RT) problem of a nonhomogeneous incompressible viscous magnetohydrodynamic fluid of zero resistivity in the presence of a uniform gravitational field in a three-dimensional bounded domain, in which the velocity of the fluid is non-slip on the boundary. By adapting a modified variational method and carefully deriving a priori estimates, we establish a criterion for the instability/stability of the linearized problem around a magnetic RT equilibrium state. In the criterion, we find a new phenomenon: a sufficiently strong horizontal magnetic field has the same stabilizing effect as a vertical magnetic field on the growth of the magnetic RT instability. In addition, we further study the corresponding compressible case, i.e., the Parker (or magnetic buoyancy) problem, for which the strength of a horizontal magnetic field decreases with height, and also show the stabilizing effect of a sufficiently large magnetic field.
Auger-Méthé, Marie; Field, Chris; Albertsen, Christoffer M; Derocher, Andrew E; Lewis, Mark A; Jonsen, Ian D; Mills Flemming, Joanna
2016-01-01
State-space models (SSMs) are increasingly used in ecology to model time-series such as animal movement paths and population dynamics. This type of hierarchical model is often structured to account for two levels of variability: biological stochasticity and measurement error. SSMs are flexible. They can model linear and nonlinear processes using a variety of statistical distributions. Recent ecological SSMs are often complex, with a large number of parameters to estimate. Through a simulation study, we show that even simple linear Gaussian SSMs can suffer from parameter- and state-estimation problems. We demonstrate that these problems occur primarily when measurement error is larger than biological stochasticity, the condition that often drives ecologists to use SSMs. Using an animal movement example, we show how these estimation problems can affect ecological inference. Biased parameter estimates of a SSM describing the movement of polar bears (Ursus maritimus) result in overestimating their energy expenditure. We suggest potential solutions, but show that it often remains difficult to estimate parameters. While SSMs are powerful tools, they can give misleading results and we urge ecologists to assess whether the parameters can be estimated accurately before drawing ecological conclusions from their results. PMID:27220686
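A minimal local-level SSM and Kalman filter illustrate the regime the authors flag, measurement error (R) much larger than biological stochasticity (Q); this is a generic textbook filter, not the authors' polar-bear model.

```python
import numpy as np

# Local-level SSM: x_t = x_{t-1} + w_t, w ~ N(0, Q);  y_t = x_t + v_t, v ~ N(0, R).
rng = np.random.default_rng(1)
T, Q, R = 200, 0.1, 4.0          # measurement error much larger than process noise
x = np.cumsum(np.sqrt(Q) * rng.standard_normal(T))
y = x + np.sqrt(R) * rng.standard_normal(T)

m, P = 0.0, 10.0                 # diffuse-ish prior mean and variance
means, variances = [], []
for obs in y:
    P_pred = P + Q                       # predict
    K = P_pred / (P_pred + R)            # Kalman gain
    m = m + K * (obs - m)                # update
    P = (1.0 - K) * P_pred
    means.append(m); variances.append(P)

# The posterior variance always stays below R and converges to the
# steady-state Riccati value p solving p**2 + p*Q - Q*R = 0.
steady = 0.5 * (np.sqrt(Q * Q + 4.0 * Q * R) - Q)
assert all(v < R for v in variances)
assert abs(variances[-1] - steady) < 1e-6
```

Filtering with known (Q, R) is the easy part; the estimation problems the abstract describes arise when Q and R must themselves be estimated from a single noisy series, where the likelihood surface in this regime is nearly flat along trade-offs between the two variances.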
Using Perturbed QR Factorizations To Solve Linear Least-Squares Problems
Avron, Haim; Ng, Esmond G.; Toledo, Sivan
2008-03-21
We propose and analyze a new tool to help solve sparse linear least-squares problems min_x ‖Ax − b‖₂. Our method is based on a sparse QR factorization of a low-rank perturbation Â of A. More precisely, we show that the R factor of Â is an effective preconditioner for the least-squares problem min_x ‖Ax − b‖₂, when solved using LSQR. We propose applications for the new technique. When A is rank deficient we can add rows to ensure that the preconditioner is well-conditioned without column pivoting. When A is sparse except for a few dense rows we can drop these dense rows from A to obtain Â. Another application is solving an updated or downdated problem. If R is a good preconditioner for the original problem A, it is a good preconditioner for the updated/downdated problem Â. We can also solve what-if scenarios, where we want to find the solution if a column of the original matrix is changed/removed. We present a spectral theory that analyzes the generalized spectrum of the pencil (A*A, R*R) and analyze the applications.
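The dense-rows application can be sketched as follows (synthetic data; dense NumPy arrays stand in for the sparse machinery): drop the dense rows to get Â, take its R factor from a QR factorization, and run LSQR on the right-preconditioned operator A R⁻¹.

```python
import numpy as np
from scipy.linalg import solve_triangular
from scipy.sparse.linalg import LinearOperator, lsqr

rng = np.random.default_rng(2)
m, n = 60, 8
# A is sparse except for two dense rows.
A_sparse = np.zeros((m - 2, n))
for i in range(m - 2):                       # ~2 nonzeros per sparse row
    for j in rng.integers(0, n, size=2):
        A_sparse[i, j] += rng.standard_normal()
A_sparse[:n] += np.eye(n)                    # keep the sparse part full column rank
dense_rows = rng.standard_normal((2, n))
A = np.vstack([A_sparse, dense_rows])
b = rng.standard_normal(m)

# Preconditioner: R factor of the perturbed matrix with the dense rows dropped.
R = np.linalg.qr(A_sparse, mode='r')

# Run LSQR on the right-preconditioned operator A R^{-1}, then recover x = R^{-1} y.
op = LinearOperator((m, n), dtype=float,
                    matvec=lambda v: A @ solve_triangular(R, v),
                    rmatvec=lambda z: solve_triangular(R, A.T @ z, trans='T'))
y_sol = lsqr(op, b, atol=1e-12, btol=1e-12)[0]
x = solve_triangular(R, y_sol)

x_ref = np.linalg.lstsq(A, b, rcond=None)[0]
assert np.allclose(x, x_ref, atol=1e-6)
```

Since AᵀA = RᵀR + DᵀD for the dense block D, the preconditioned normal-equations matrix is I plus a rank-2 term, so LSQR converges in a handful of iterations.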
On the classical solution to the linear-constrained minimum energy problem
NASA Astrophysics Data System (ADS)
Boissaux, Marc; Schiltz, Jang
2012-02-01
Minimum energy problems involving linear systems with quadratic performance criteria are classical in optimal control theory. The case where controls are constrained is discussed in Athans and Falb (1966) [Athans, M. and Falb, P.L. (1966), Optimal Control: An Introduction to the Theory and Its Applications, New York: McGraw-Hill Book Co.], who obtain a componentwise optimal control expression involving a saturation function. We show why the given expression is not generally optimal when the dimension of the control is greater than one, and we provide a numerical counterexample.
Short communication : a linear assignment approach for the least-squares protein morphing problem.
Anitescu, M.; Park, S.; Mathematics and Computer Science
2009-02-01
This work addresses the computation of free-energy differences between protein conformations by using morphing (i.e., transformation) of a source conformation into a target conformation. To enhance the morphing procedure, we employ permutations of atoms: we seek to find the permutation s that minimizes the mean-square distance traveled by the atoms. Instead of performing this combinatorial search in the space of permutations, we show that the best permutation can be found by solving a linear assignment problem. We demonstrate that the use of such optimal permutations significantly improves the efficiency of the free-energy computation.
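The reduction to a linear assignment problem can be sketched with SciPy's assignment solver (synthetic coordinates, not actual protein conformations): build the matrix of squared distances between atoms of the two conformations and let `linear_sum_assignment` return the permutation minimizing the total squared displacement.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(3)
n = 30
source = rng.standard_normal((n, 3))                 # "source conformation"
# "Target conformation": a relabelled, slightly perturbed copy of the source.
target = source[rng.permutation(n)] + 0.01 * rng.standard_normal((n, 3))

# Cost matrix: squared distance from source atom i to target slot j.
cost = ((source[:, None, :] - target[None, :, :]) ** 2).sum(axis=2)

row, col = linear_sum_assignment(cost)               # optimal permutation
optimal = cost[row, col].sum()
identity = np.trace(cost)                            # cost without relabelling

assert optimal <= identity                           # optimality guarantee
assert optimal < 0.1                                 # recovers the hidden relabelling
```

The combinatorial search over all n! permutations is thus replaced by an O(n³) assignment solve, which is what makes the optimal permutation practical inside a free-energy computation.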
A Vector Study of Linearized Supersonic Flow Applications to Nonplanar Problems
NASA Technical Reports Server (NTRS)
Martin, John C
1953-01-01
A vector study of the partial-differential equation of steady linearized supersonic flow is presented. General expressions are derived which relate the velocity potential in the stream to the conditions on the disturbing surfaces. In connection with these general expressions, the concept of the finite part of an integral is discussed. A discussion of problems dealing with planar bodies is given, and the conditions for the solution to be unique are investigated. Problems concerning nonplanar systems are investigated, and methods are derived for the solution of some simple nonplanar bodies. The surface pressure distribution and the damping in roll are found for rolling tails consisting of four, six, and eight rectangular fins for the Mach number range where the region of interference between adjacent fins does not affect the fin tips.
A linear analytical boundary element method (BEM) for 2D homogeneous potential problems
NASA Astrophysics Data System (ADS)
Friedrich, Jürgen
2002-06-01
The solution of potential problems is not only fundamental for geosciences, but also an essential part of related subjects like electro- and fluid-mechanics. In all fields, solution algorithms are needed that should be as accurate as possible, robust, simple to program, easy to use, fast and small in computer memory. An ideal technique to fulfill these criteria is the boundary element method (BEM) which applies Green's identities to transform volume integrals into boundary integrals. This work describes a linear analytical BEM for 2D homogeneous potential problems that is more robust and precise than numerical methods because it avoids numerical schemes and coordinate transformations. After deriving the solution algorithm, the introduced approach is tested against different benchmarks. Finally, the gained method was incorporated into an existing software program described before in this journal by the same author.
LINPRO: Linear inverse problem library for data contaminated by statistical noise
NASA Astrophysics Data System (ADS)
Magierski, Piotr; Wlazłowski, Gabriel
2012-10-01
The library LINPRO, which provides the solution to the linear inverse problem for data contaminated by statistical noise, is presented. The library makes use of two methods: the Maximum Entropy Method and Singular Value Decomposition. As an example, it has been applied to perform an analytic continuation of the imaginary-time propagator obtained within the Quantum Monte Carlo method.
Program summary
Program title: LINPRO v1.0. Catalogue identifier: AEMT_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMT_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU Lesser General Public Licence. No. of lines in distributed program, including test data, etc.: 110620. No. of bytes in distributed program, including test data, etc.: 3208593. Distribution format: tar.gz. Programming language: C++. Computer: the LINPRO library should compile on any computing system that has a C++ compiler. Operating system: Linux or Unix. Classification: 4.9, 4.12, 4.13. External routines: OPT++: An Object-Oriented Nonlinear Optimization Library [1] (included in the distribution). Nature of problem: LINPRO solves linear inverse problems with an arbitrary kernel and arbitrary external constraints imposed on the solution. Solution method: LINPRO implements two complementary methods: the Maximum Entropy Method and the SVD method. Additional comments: tested with the GNU compiler g++ and the Intel compiler icpc. Running time: problem dependent, ranging from seconds to hours; each of the examples takes less than a minute to run.
References: [1] OPT++: An Object-Oriented Nonlinear Optimization Library, https://software.sandia.gov/opt++/.
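The SVD branch of such a solver can be sketched in a few lines (a synthetic smoothing kernel and an illustrative truncation threshold, not LINPRO itself): with an ill-conditioned kernel, plain inversion amplifies the statistical noise, while truncating the singular spectrum at the noise floor stabilizes the solution.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20
t = np.linspace(0, 1, n)
# Ill-conditioned smoothing kernel (discretized Gaussian convolution).
K = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.2 ** 2))
x_true = np.sin(2 * np.pi * t)
y = K @ x_true + 1e-3 * rng.standard_normal(n)   # data with statistical noise

U, s, Vt = np.linalg.svd(K)
x_naive = Vt.T @ ((U.T @ y) / s)                 # full inversion amplifies the noise

k = int(np.sum(s > 1e-2))                        # keep components above the noise floor
x_tsvd = Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])

err_naive = np.linalg.norm(x_naive - x_true)
err_tsvd = np.linalg.norm(x_tsvd - x_true)
assert err_tsvd < err_naive
```

The Maximum Entropy branch replaces the hard truncation with an entropic prior on the solution, which is why the two methods are described as complementary.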
NASA Astrophysics Data System (ADS)
Lachhwani, Kailash; Nehra, Suresh
2015-09-01
In this paper, we present a modified fuzzy goal programming (FGP) approach and a generalized MATLAB program for solving multi-level linear fractional programming problems (ML-LFPPs), based on some major modifications of earlier FGP algorithms. In the proposed modified FGP approach, solution preferences of the decision makers at each level are not considered, and the fuzzy goal for the decision vectors is defined using individual best solutions. The proposed modified algorithm, as well as the MATLAB program, simplifies the earlier algorithm on ML-LFPPs by eliminating solution preferences of the decision makers at each level, thereby avoiding the difficulties associated with multi-level programming problems and decision deadlock situations. The proposed modified technique is simple, efficient, and requires less computational effort in comparison with earlier FGP techniques. The proposed generalized MATLAB program based on this modified approach is a unique programming tool for dealing with such complex mathematical problems in MATLAB; with it, the user can directly obtain a compromise optimal solution of ML-LFPPs. The aim of this paper is to present the modified FGP technique and the generalized MATLAB program for obtaining compromise optimal solutions of ML-LFP problems in a simple and efficient manner. A comparative analysis with a numerical example is also carried out in order to show the efficiency of the proposed modified approach and to demonstrate the functionality of the MATLAB program.
Linear stability of the Couette flow of a vibrationally excited gas. 2. viscous problem
NASA Astrophysics Data System (ADS)
Grigor'ev, Yu. N.; Ershov, I. V.
2016-03-01
Based on the linear theory, the stability of viscous disturbances in a supersonic plane Couette flow of a vibrationally excited gas, described by a system of linearized equations of two-temperature gas dynamics including shear and bulk viscosity, is studied. It is demonstrated that, as in the case of a perfect gas, two sets are identified in the spectrum of the problem of stability of plane waves. One set consists of viscous acoustic modes, which asymptotically converge to even and odd inviscid acoustic modes at high Reynolds numbers. The eigenvalues from the other set have no asymptotic relationship with the inviscid problem and are characterized by large damping decrements. The two most unstable viscous acoustic modes (I and II) are identified; the limits of these modes were considered previously in the inviscid approximation. It is shown that there are domains in the space of parameters for both modes where the presence of viscosity induces appreciable destabilization of the flow. Moreover, the growth rates of disturbances are appreciably greater than the corresponding values for the inviscid flow, while thermal excitation in the entire considered range of parameters increases the stability of the viscous flow. For a vibrationally excited gas, the critical Reynolds number as a function of the degree of thermal nonequilibrium is found to be greater by 12% than for a perfect gas.
Complementarity relations for quantum coherence
NASA Astrophysics Data System (ADS)
Cheng, Shuming; Hall, Michael J. W.
2015-10-01
Various measures have been suggested recently for quantifying the coherence of a quantum state with respect to a given basis. We first use two of these, the l1-norm and relative entropy measures, to investigate tradeoffs between the coherences of mutually unbiased bases. Results include relations between coherence, uncertainty, and purity; tight general bounds restricting the coherences of mutually unbiased bases; and an exact complementarity relation for qubit coherences. We further define the average coherence of a quantum state. For the l1-norm measure this is related to a natural "coherence radius" for the state and leads to a conjecture for an l2-norm measure of coherence. For relative entropy the average coherence is determined by the difference between the von Neumann entropy and the quantum subentropy of the state and leads to upper bounds for the latter quantity. Finally, we point out that the relative entropy of coherence is a special case of G-asymmetry, which immediately yields several operational interpretations in contexts as diverse as frame alignment, quantum communication, and metrology, and suggests generalizing the property of quantum coherence to arbitrary groups of physical transformations.
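The qubit complementarity relation can be checked numerically. A minimal sketch (standard definitions of the two measures, not the paper's code): the l1-norm and relative-entropy coherences of the |+⟩ state are both maximal (equal to 1) in the Z basis and vanish in the X basis, its mutually unbiased partner.

```python
import numpy as np

def l1_coherence(rho):
    # Sum of absolute values of the off-diagonal entries.
    return np.abs(rho).sum() - np.trace(np.abs(rho)).real

def entropy(p):
    p = p[p > 1e-12]                         # drop numerical zeros
    return -(p * np.log2(p)).sum()

def relative_entropy_coherence(rho):
    # C_RE(rho) = S(diag(rho)) - S(rho), in bits.
    diag = np.real(np.diag(rho))
    eig = np.linalg.eigvalsh(rho)
    return entropy(diag) - entropy(eig)

plus = np.array([1.0, 1.0]) / np.sqrt(2)     # |+> state
rho = np.outer(plus, plus.conj())

assert np.isclose(l1_coherence(rho), 1.0)                 # maximal for a qubit
assert np.isclose(relative_entropy_coherence(rho), 1.0)   # 1 bit

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
rho_x = H @ rho @ H                          # the same state in the X basis
assert np.isclose(l1_coherence(rho_x), 0.0)  # no coherence in the unbiased basis
```

Maximal coherence in one mutually unbiased basis forcing zero coherence in the other is the simplest instance of the tradeoffs the abstract quantifies.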
An improved exploratory search technique for pure integer linear programming problems
NASA Technical Reports Server (NTRS)
Fogle, F. R.
1990-01-01
The development is documented of a heuristic method for the solution of pure integer linear programming problems. The procedure draws its methodology from the ideas of Hooke and Jeeves type 1 and 2 exploratory searches, greedy procedures, and neighborhood searches. It uses an efficient rounding method to obtain its first feasible integer point from the optimal continuous solution obtained via the simplex method. Since this method is based entirely on simple addition or subtraction of one to each variable of a point in n-space and the subsequent comparison of candidate solutions to a given set of constraints, it facilitates significant complexity improvements over existing techniques. It also obtains the same optimal solution found by the branch-and-bound technique in 44 of 45 small to moderate size test problems. Two example problems are worked in detail to show the inner workings of the method. Furthermore, using an established weighted scheme for comparing computational effort involved in an algorithm, a comparison of this algorithm is made to the more established and rigorous branch-and-bound method. A computer implementation of the procedure, in PC compatible Pascal, is also presented and discussed.
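A drastically simplified sketch of the rounding-plus-exploratory idea (our own toy instance and a plain coordinate search, without IESIP's greedy and pattern-move machinery): floor-round the LP optimum, then try ±1 moves per variable until none improves. On this instance the bare coordinate search stalls one unit short of the brute-force optimum, which is exactly the kind of failure the full method's additional moves are designed to avoid (compare the 44-of-45 result above).

```python
from itertools import product

# Tiny ILP: maximize 3x + 4y  s.t.  x + 2y <= 7,  3x + y <= 9,  x, y >= 0 integer.
def feasible(x, y):
    return x >= 0 and y >= 0 and x + 2 * y <= 7 and 3 * x + y <= 9

def value(x, y):
    return 3 * x + 4 * y

# The continuous (LP) optimum is at x = 2.2, y = 2.4; floor-rounding gives (2, 2).
x, y = 2, 2
assert feasible(x, y)

# Unit-neighborhood exploratory moves (+/-1 per coordinate) until no improvement.
improved = True
while improved:
    improved = False
    for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
        if feasible(x + dx, y + dy) and value(x + dx, y + dy) > value(x, y):
            x, y = x + dx, y + dy
            improved = True

heuristic = value(x, y)
optimum = max(value(a, b) for a, b in product(range(8), repeat=2) if feasible(a, b))
print(heuristic, optimum)   # 14 15: the simple search stalls one unit short
```

Here the optimum (1, 3) is reachable from (2, 2) only by a diagonal move, which single-coordinate exploration cannot make; Hooke-Jeeves pattern moves and the greedy refinements in IESIP address exactly this.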
Carey, G.F.; Young, D.M.
1993-12-31
The program outlined here is directed to research on methods, algorithms, and software for distributed parallel supercomputers. Of particular interest are finite element methods and finite difference methods together with sparse iterative solution schemes for scientific and engineering computations of very large-scale systems. Both linear and nonlinear problems will be investigated. In the nonlinear case, applications with bifurcation to multiple solutions will be considered using continuation strategies. The parallelizable numerical methods of particular interest are a family of partitioning schemes embracing domain decomposition, element-by-element strategies, and multi-level techniques. The methods will be further developed incorporating parallel iterative solution algorithms with associated preconditioners in parallel computer software. The schemes will be implemented on distributed memory parallel architectures such as the CRAY MPP, Intel Paragon, the NCUBE3, and the Connection Machine. We will also consider other new architectures such as the Kendall-Square (KSQ) and proposed machines such as the TERA. The applications will focus on large-scale three-dimensional nonlinear flow and reservoir problems with strong convective transport contributions. These are legitimate grand challenge class computational fluid dynamics (CFD) problems of significant practical interest to DOE. The methods developed and algorithms will, however, be of wider interest.
Takabe, Satoshi; Hukushima, Koji
2016-05-01
Typical behavior of the linear programming (LP) problem is studied as a relaxation of the minimum vertex cover (min-VC), a type of integer programming (IP) problem. A lattice-gas model on the Erdös-Rényi random graphs of α-uniform hyperedges is proposed to express both the LP and IP problems of the min-VC in the common statistical mechanical model with a one-parameter family. Statistical mechanical analyses reveal for α=2 that the LP optimal solution is typically equal to that given by the IP below the critical average degree c=e in the thermodynamic limit. The critical threshold for good accuracy of the relaxation extends the mathematical result c=1 and coincides with the replica symmetry-breaking threshold of the IP. The LP relaxation for the minimum hitting sets with α≥3, minimum vertex covers on α-uniform random graphs, is also studied. Analytic and numerical results strongly suggest that the LP relaxation fails to estimate optimal values above the critical average degree c=e/(α-1) where the replica symmetry is broken. PMID:27301006
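The LP-relaxation gap the analysis concerns is easy to see on the smallest odd cycle. A minimal sketch (our own toy example, not the paper's lattice-gas model): on a triangle the LP relaxation of min-VC returns the all-half solution with value 1.5, while the integer optimum is 2.

```python
import numpy as np
from itertools import product
from scipy.optimize import linprog

# Min vertex cover on a triangle: every edge needs a covered endpoint.
edges = [(0, 1), (1, 2), (0, 2)]
n = 3
A_ub = np.zeros((len(edges), n))
for i, (u, v) in enumerate(edges):   # x_u + x_v >= 1  rewritten as  -x_u - x_v <= -1
    A_ub[i, u] = A_ub[i, v] = -1.0
res = linprog(c=np.ones(n), A_ub=A_ub, b_ub=-np.ones(len(edges)),
              bounds=[(0, 1)] * n)
lp_value = res.fun

# Integer optimum by brute force over all 0/1 assignments.
ip_value = min(sum(sel) for sel in product([0, 1], repeat=n)
               if all(sel[u] + sel[v] >= 1 for u, v in edges))

assert np.isclose(lp_value, 1.5)     # fractional all-half solution
assert ip_value == 2                 # relaxation is loose on odd cycles
```

The statistical-mechanics result above characterizes when such gaps appear typically on random graphs, rather than on worst-case instances like this one.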
NASA Astrophysics Data System (ADS)
Dotti, Gustavo; Gleiser, Reinaldo J.
2009-11-01
The coupled equations for the scalar modes of the linearized Einstein equations around Schwarzschild's spacetime were reduced by Zerilli to a (1+1) wave equation ∂²Ψ_z/∂t² + HΨ_z = 0, where H = −∂²/∂x² + V(x) is the Zerilli 'Hamiltonian' and x is the tortoise radial coordinate. From its definition, for smooth metric perturbations the field Ψ_z is singular at r_s = −6M/[(ℓ − 1)(ℓ + 2)], with ℓ being the mode harmonic number. The equation Ψ_z obeys is also singular, since V has a second-order pole at r_s. This is irrelevant to the black hole exterior stability problem, where r > 2M > 0 and r_s < 0, but it introduces a non-trivial problem in the naked singular case where M < 0, then r_s > 0, and the singularity appears in the relevant range of r (0 < r < ∞). We solve this problem by developing a new approach to the evolution of the even mode, based on a new gauge-invariant function, Ψ̂, that is a regular function of the metric perturbation for any value of M. The relation of Ψ̂ to Ψ_z is provided by an intertwiner operator. The spatial pieces of the (1+1) wave equations that Ψ̂ and Ψ_z obey are related as a supersymmetric pair of quantum Hamiltonians H and Ĥ. For M < 0, Ĥ has a regular potential and a unique self-adjoint extension in a domain D defined by a physically motivated boundary condition at r = 0. This allows us to address the issue of evolution of gravitational perturbations in this non-globally hyperbolic background. This formulation is used to complete the proof of the linear instability of the Schwarzschild naked singularity, by showing that a previously found unstable mode belongs to a complete basis of Ĥ in D, and thus is excitable by generic initial data. This is further illustrated by numerically solving the linearized equations for suitably chosen initial data.
NASA Astrophysics Data System (ADS)
Shaldanbayev, Amir; Shomanbayeva, Manat; Kopzhassarova, Asylzat
2016-08-01
This paper proposes a fundamentally new method of investigation of a singularly perturbed Cauchy problem for a linear system of ordinary differential equations based on the spectral theory of equations with deviating argument.
Solving the Linear Balance Equation on the Globe as a Generalized Inverse Problem
NASA Technical Reports Server (NTRS)
Lu, Huei-Iin; Robertson, Franklin R.
1999-01-01
A generalized (pseudo) inverse technique was developed to facilitate a better understanding of the numerical effects of tropical singularities inherent in the spectral linear balance equation (LBE). Depending upon the truncation, various levels of determinacy are manifest. The traditional fully-determined (FD) systems give rise to a strong response, while the under-determined (UD) systems yield a weak response to the tropical singularities. The over-determined (OD) systems result in a modest response and a large residual in the tropics. The FD and OD systems can be alternatively solved by the iterative method. Differences in the solutions of a UD system exist between the inverse technique and the iterative method owing to the non-uniqueness of the problem. A realistic balanced wind was obtained by solving the principal components of the spectral LBE in terms of vorticity in an intermediate resolution. Improved solutions were achieved by including the singular-component solutions which best fit the observed wind data.
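The determinacy classes above can be illustrated with a toy computation using the Moore-Penrose pseudoinverse: for an under-determined system it returns the minimum-norm solution that fits the data exactly, while for an over-determined system it returns the least-squares solution, generally leaving a non-zero residual (the analogue of the "large residual in the tropics"). This is a minimal sketch of the algebra only, not the spectral LBE itself; all matrices are hypothetical random test data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Under-determined (UD): more unknowns than equations -> weak, minimum-norm response.
A_ud = rng.standard_normal((3, 5))
b_ud = rng.standard_normal(3)
x_ud = np.linalg.pinv(A_ud) @ b_ud           # minimum-norm solution
assert np.allclose(A_ud @ x_ud, b_ud)        # fits the data exactly

# Over-determined (OD): more equations than unknowns -> least-squares fit,
# generally with a non-zero residual.
A_od = rng.standard_normal((5, 3))
b_od = rng.standard_normal(5)
x_od = np.linalg.pinv(A_od) @ b_od           # least-squares solution
r = b_od - A_od @ x_od
# The residual is orthogonal to the column space of A (normal equations).
assert np.allclose(A_od.T @ r, 0, atol=1e-10)
```

The same pseudoinverse also handles the fully-determined square case, where it reduces to the ordinary inverse.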
First-order system least squares for the pure traction problem in planar linear elasticity
Cai, Z.; Manteuffel, T.; McCormick, S.; Parter, S.
1996-12-31
This talk will develop two first-order system least squares (FOSLS) approaches for the solution of the pure traction problem in planar linear elasticity. Both are two-stage algorithms that first solve for the gradients of displacement, then for the displacement itself. One approach, which uses L² norms to define the FOSLS functional, is shown under certain H² regularity assumptions to admit optimal H¹-like performance for standard finite element discretization and standard multigrid solution methods that is uniform in the Poisson ratio for all variables. The second approach, which is based on H⁻¹ norms, is shown under general assumptions to admit optimal uniform performance for displacement flux in an L² norm and for displacement in an H¹ norm. These methods do not degrade as other methods generally do when the material properties approach the incompressible limit.
An exact solution to a certain non-linear random vibration problem
NASA Astrophysics Data System (ADS)
Dimentberg, M. F.
A single-degree-of-freedom system with a special type of nonlinear damping and both external and parametric white-noise excitations is considered. For the special case when the intensities of coordinate and velocity modulation satisfy a certain condition, an exact analytical solution is obtained to the corresponding stationary Fokker-Planck-Kolmogorov equation, yielding an expression for the joint probability density of coordinate and velocity. This solution is analyzed particularly in connection with the stochastic stability problem for the corresponding linear system; certain implications are illustrated for a system which is stable in probability but unstable in the mean square. The solution obtained may be used to check different approximate methods for the analysis of systems with randomly varying parameters.
A linear stability analysis for nonlinear, grey, thermal radiative transfer problems
Wollaber, Allan B.; Larsen, Edward W.
2011-02-20
We present a new linear stability analysis of three time discretizations and Monte Carlo interpretations of the nonlinear, grey thermal radiative transfer (TRT) equations: the widely used 'Implicit Monte Carlo' (IMC) equations, the Carter Forest (CF) equations, and the Ahrens-Larsen or 'Semi-Analog Monte Carlo' (SMC) equations. Using a spatial Fourier analysis of the 1-D Implicit Monte Carlo (IMC) equations that are linearized about an equilibrium solution, we show that the IMC equations are unconditionally stable (undamped perturbations do not exist) if α, the IMC time-discretization parameter, satisfies 0.5 < α ≤ 1. This is consistent with conventional wisdom. However, we also show that for sufficiently large time steps, unphysical damped oscillations can exist that correspond to the lowest-frequency Fourier modes. After numerically confirming this result, we develop a method to assess the stability of any time discretization of the 0-D, nonlinear, grey, thermal radiative transfer problem. Subsequent analyses of the CF and SMC methods then demonstrate that the CF method is unconditionally stable and monotonic, but the SMC method is conditionally stable and permits unphysical oscillatory solutions that can prevent it from reaching equilibrium. This stability theory provides new conditions on the time step to guarantee monotonicity of the IMC solution, although they are likely too conservative to be used in practice. Theoretical predictions are tested and confirmed with numerical experiments.
NASA Astrophysics Data System (ADS)
Kuchment, Peter; Steinhauer, Dustin
2015-12-01
In the previous paper (Kuchment and Steinhauer in Inverse Probl 28(8):084007, 2012), the authors introduced a simple procedure that allows one to detect whether and explain why internal information arising in several novel coupled physics (hybrid) imaging modalities could turn extremely unstable techniques, such as optical tomography or electrical impedance tomography, into stable, good-resolution procedures. It was shown that in all cases of interest, the Fréchet derivative of the forward mapping is a pseudo-differential operator with an explicitly computable principal symbol. If one can set up the imaging procedure in such a way that the symbol is elliptic, this would indicate that the problem was stabilized. In the cases when the symbol is not elliptic, the technique suggests how to change the procedure (e.g., by adding extra measurements) to achieve ellipticity. In this article, we consider the situation arising in acousto-optical tomography (also called ultrasound modulated optical tomography), where the internal data available involves the Green's function, and thus depends globally on the unknown parameter(s) of the equation and its solution. It is shown that the technique of (Kuchment and Steinhauer in Inverse Probl 28(8):084007, 2012) can be successfully adopted to this situation as well. A significant part of the article is devoted to results on generic uniqueness for the linearized problem in a variety of situations, including those arising in acousto-electric and quantitative photoacoustic tomography.
NASA Technical Reports Server (NTRS)
Antoniewicz, Robert F.; Duke, Eugene L.; Menon, P. K. A.
1991-01-01
The design of nonlinear controllers has relied on the use of detailed aerodynamic and engine models that must be associated with the control law in the flight system implementation. Many of these controllers were applied to vehicle flight path control problems and have attempted to combine both inner- and outer-loop control functions in a single controller. An approach to the nonlinear trajectory control problem is presented. This approach uses linearizing transformations with measurement feedback to eliminate the need for detailed aircraft models in outer-loop control applications. By applying this approach and separating the inner-loop and outer-loop functions, two things were achieved: (1) the need for incorporating detailed aerodynamic models in the controller is obviated; and (2) the controller is more easily incorporated into existing aircraft flight control systems. An implementation of the controller is discussed, and this controller is tested on a six degree-of-freedom F-15 simulation and in flight on an F-15 aircraft. Simulation data are presented which validate this approach over a large portion of the F-15 flight envelope. Proof of this concept is provided by flight-test data that closely match simulation results. Flight-test data are also presented.
Interval analysis approach to rank determination in linear least squares problems
Manteuffel, T.A.
1980-06-01
The linear least-squares problem Ax ≈ b has a unique solution only if the matrix A has full column rank. Numerical rank determination is difficult, especially in the presence of uncertainties in the elements of A. This paper proposes an interval analysis approach. A set of matrices Aᴵ is defined that contains all possible perturbations of A due to uncertainties; Aᴵ is said to be rank deficient if any member of Aᴵ is rank deficient. A modification to the QR decomposition method of solution of the least-squares problem allows a determination of the rank of Aᴵ and a partial interval analysis of the solution vector x. This procedure requires the computation of R⁻¹. Another modification is proposed which determines the rank of Aᴵ without computing R⁻¹. The additional computational effort is O(N²), where N is the column dimension of A. 4 figures.
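The idea of rank deficiency under uncertainty can be sketched with a singular-value test: by the Eckart-Young theorem, the 2-norm distance from A to the nearest rank-deficient matrix equals its smallest singular value, so if the admissible perturbations have 2-norm up to δ, some matrix in the uncertainty set is rank deficient exactly when a singular value falls below δ. This assumes a norm-ball uncertainty model rather than the paper's element-wise intervals, and uses an SVD instead of the paper's modified QR decomposition.

```python
import numpy as np

def interval_rank(A, delta):
    """Numerical rank of A when every matrix within 2-norm distance `delta`
    of A is an admissible perturbation. By Eckart-Young, the nearest matrix
    of rank k lies at distance sigma_{k+1}, so singular values <= delta
    are treated as zero."""
    s = np.linalg.svd(A, compute_uv=False)
    return int(np.sum(s > delta))

A = np.array([[1.0, 2.0],
              [2.0, 4.0 + 1e-8]])          # numerically almost rank 1

assert interval_rank(A, delta=1e-6) == 1   # uncertainty swallows the tiny sigma
assert interval_rank(A, delta=1e-12) == 2  # tighter data: full column rank
```

With element-wise intervals the test above is only sufficient, not exact, which is part of what motivates the QR-based machinery in the paper.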
NASA Technical Reports Server (NTRS)
Young, Katherine C.; Sobieszczanski-Sobieski, Jaroslaw
1988-01-01
This project has two objectives. The first is to determine whether linear programming techniques can improve performance when handling design optimization problems with a large number of design variables and constraints relative to the feasible directions algorithm. The second purpose is to determine whether using the Kreisselmeier-Steinhauser (KS) function to replace the constraints with one constraint will reduce the cost of total optimization. Comparisons are made using solutions obtained with linear and non-linear methods. The results indicate that there is no cost saving using the linear method or in using the KS function to replace constraints.
Madrigal-González, Jaime; Ruiz-Benito, Paloma; Ratcliffe, Sophia; Calatayud, Joaquín; Kändler, Gerald; Lehtonen, Aleksi; Dahlgren, Jonas; Wirth, Christian; Zavala, Miguel A.
2016-01-01
Neglecting tree size and stand structure dynamics might bias the interpretation of the diversity-productivity relationship in forests. Here we show evidence that complementarity is contingent on tree size across large-scale climatic gradients in Europe. We compiled growth data of the 14 most dominant tree species in 32,628 permanent plots covering boreal, temperate and Mediterranean forest biomes. Niche complementarity is expected to result in significant growth increments of trees surrounded by a larger proportion of functionally dissimilar neighbours. Functional dissimilarity at the tree level was assessed using four functional types: i.e. broad-leaved deciduous, broad-leaved evergreen, needle-leaved deciduous and needle-leaved evergreen. Using Linear Mixed Models we show that complementarity effects depend on tree size along an energy availability gradient across Europe. Specifically: (i) complementarity effects at low and intermediate positions of the gradient (coldest-temperate areas) were stronger for small than for large trees; (ii) in contrast, at the upper end of the gradient (warmer regions), complementarity is more widespread in larger than smaller trees, which in turn showed negative growth responses to increased functional dissimilarity. Our findings suggest that the outcome of species mixing on stand productivity might critically depend on individual size distribution structure along gradients of environmental variation. PMID:27571971
NASA Astrophysics Data System (ADS)
Zhou, Qinglong; Long, Yiming
2015-06-01
In this paper, we prove that the linearized system of elliptic triangle homographic solution of planar charged three-body problem can be transformed to that of the elliptic equilateral triangle solution of the planar classical three-body problem. Consequently, the results of Martínez, Samà and Simó (2006) [15] and results of Hu, Long and Sun (2014) [6] can be applied to these solutions of the charged three-body problem to get their linear stability.
Abgrall, Rémi; Congedo, Pietro Marco
2013-02-15
This paper deals with the formulation of a semi-intrusive (SI) method allowing the computation of statistics of solutions of linear and non-linear PDEs. The method proves very efficient in dealing with probability density functions of arbitrary form, long-term integration and discontinuities in stochastic space. Given a stochastic PDE where randomness is defined on Ω, starting from (i) a description of the solution in terms of the space variables, (ii) a numerical scheme defined for any event ω∈Ω and (iii) a family of random variables that may be correlated, the solution is numerically described by its conditional expectancies of point values or cell averages and its evaluation constructed from the deterministic scheme. One of the tools is a tessellation of the random space, as in finite volume methods for the space variables. Then, using these conditional expectancies and the geometrical description of the tessellation, a piecewise polynomial approximation in the random variables is computed using a reconstruction method that is standard for high-order finite volume schemes, except that the measure is no longer the standard Lebesgue measure but the probability measure. This reconstruction is then used to formulate a scheme for the numerical approximation of the solution from the deterministic scheme. This new approach is said to be semi-intrusive because it requires only a limited amount of modification in a deterministic solver to quantify uncertainty on the state when the solver includes uncertain variables. The effectiveness of this method is illustrated for a modified version of the Kraichnan-Orszag three-mode problem, where a discontinuous pdf is associated with the stochastic variable, and for a nozzle flow with shocks. The results have been analyzed in terms of accuracy and probability measure flexibility. Finally, the importance of the probabilistic reconstruction in the stochastic space is shown on an example where the exact solution is computable, the viscous
Self-complementarity of messenger RNA's of periodic proteins
NASA Technical Reports Server (NTRS)
Ycas, M.
1973-01-01
It is shown that the mRNA's of three periodic proteins, collagen, keratin and freezing point depressing glycoproteins show a marked degree of self-complementarity. The possible origin of this self-complementarity is discussed.
A stabilized complementarity formulation for nonlinear analysis of 3D bimodular materials
NASA Astrophysics Data System (ADS)
Zhang, L.; Zhang, H. W.; Wu, J.; Yan, B.
2016-06-01
Bi-modulus materials with different mechanical responses in tension and compression are often found in civil, composite, and biological engineering. Numerical analysis of bimodular materials is strongly nonlinear and convergence is usually a problem for traditional iterative schemes. This paper aims to develop a stabilized computational method for nonlinear analysis of 3D bimodular materials. Based on the parametric variational principle, a unified constitutive equation of 3D bimodular materials is proposed, which allows the eight principal stress states to be indicated by three parametric variables introduced in the principal stress directions. The original problem is transformed into a standard linear complementarity problem (LCP) by the parametric virtual work principle and a quadratic programming algorithm is developed by solving the LCP with the classic Lemke's algorithm. Update of elasticity and stiffness matrices is avoided and, thus, the proposed algorithm shows an excellent convergence behavior compared with traditional iterative schemes. Numerical examples show that the proposed method is valid and can accurately analyze mechanical responses of 3D bimodular materials. Also, stability of the algorithm is greatly improved.
A Mixed Integer Linear Program for Solving a Multiple Route Taxi Scheduling Problem
NASA Technical Reports Server (NTRS)
Montoya, Justin Vincent; Wood, Zachary Paul; Rathinam, Sivakumar; Malik, Waqar Ahmad
2010-01-01
Aircraft movements on taxiways at busy airports often create bottlenecks. This paper introduces a mixed integer linear program to solve a Multiple Route Aircraft Taxi Scheduling Problem. The outputs of the model are in the form of optimal taxi schedules, which include routing decisions for taxiing aircraft. The model extends an existing single route formulation to include routing decisions. An efficient comparison framework compares the multi-route formulation and the single route formulation. The multi-route model is exercised for east side airport surface traffic at Dallas/Fort Worth International Airport to determine if any arrival taxi time savings can be achieved by allowing arrivals to have two taxi routes: a route that crosses an active departure runway and a perimeter route that avoids the crossing. Results indicate that the multi-route formulation yields reduced arrival taxi times over the single route formulation only when a perimeter taxiway is used. In conditions where the departure aircraft are given an optimal and fixed takeoff sequence, accumulative arrival taxi time savings in the multi-route formulation can be as high as 3.6 hours more than the single route formulation. If the departure sequence is not optimal, the multi-route formulation results in less taxi time savings made over the single route formulation, but the average arrival taxi time is significantly decreased.
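The single-route vs multi-route comparison can be mimicked with a deliberately tiny brute-force model (all numbers hypothetical): each arrival either takes a short route whose runway crossing must keep a separation gap around fixed departure times, or a longer perimeter route with no interaction. Since the multi-route optimum minimizes over a superset of schedules, it can never be worse than the single-route one, which is the qualitative effect the MILP quantifies.

```python
from itertools import product

# Hypothetical data: departures occupy the crossing at these times, and an
# arrival may not cross within +/-1 time unit of any departure.
departures = [2, 5, 8]
SHORT, PERIMETER = 4, 7            # taxi durations of the two route options

def taxi_time(route, release):
    """Completion time for an aircraft released at `release` on `route`."""
    if route == PERIMETER:
        return release + PERIMETER             # no runway interaction
    t = release
    while any(abs(t + 2 - d) <= 1 for d in departures):  # crossing occurs at t+2
        t += 1                                 # hold short and wait
    return t + SHORT

arrivals = [0, 1, 3]                           # release times of three arrivals

def best_total(route_options):
    """Exhaustively pick a route per arrival to minimize total taxi time."""
    return min(sum(taxi_time(r, rel) for r, rel in zip(routes, arrivals))
               for routes in product(route_options, repeat=len(arrivals)))

single = best_total([SHORT])                   # crossing route only
multi = best_total([SHORT, PERIMETER])         # both routes allowed
assert multi <= single                         # extra routing freedom never hurts
```

This toy omits inter-aircraft separation on taxiways and the departure-sequencing decisions that the actual MILP formulation handles jointly.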
Evaluation of parallel direct sparse linear solvers in electromagnetic geophysical problems
NASA Astrophysics Data System (ADS)
Puzyrev, Vladimir; Koric, Seid; Wilkin, Scott
2016-04-01
High performance computing is absolutely necessary for large-scale geophysical simulations. In order to obtain a realistic image of a geologically complex area, industrial surveys collect vast amounts of data making the computational cost extremely high for the subsequent simulations. A major computational bottleneck of modeling and inversion algorithms is solving the large sparse systems of linear ill-conditioned equations in complex domains with multiple right hand sides. Recently, parallel direct solvers have been successfully applied to multi-source seismic and electromagnetic problems. These methods are robust and exhibit good performance, but often require large amounts of memory and have limited scalability. In this paper, we evaluate modern direct solvers on large-scale modeling examples that previously were considered unachievable with these methods. Performance and scalability tests utilizing up to 65,536 cores on the Blue Waters supercomputer clearly illustrate the robustness, efficiency and competitiveness of direct solvers compared to iterative techniques. Wide use of direct methods utilizing modern parallel architectures will allow modeling tools to accurately support multi-source surveys and 3D data acquisition geometries, thus promoting a more efficient use of the electromagnetic methods in geophysics.
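The "factor once, solve many right-hand sides" advantage of direct solvers for multi-source problems can be sketched with SciPy's sparse LU, a shared-memory stand-in for the distributed direct solvers benchmarked in the paper; the 1-D Laplacian here is only a hypothetical stand-in for an electromagnetic system matrix.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import splu

n = 1000
# Sparse symmetric positive-definite test matrix (1-D Laplacian stencil).
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

lu = splu(A)   # the expensive factorization is performed exactly once

# Eight "sources": solving each extra right-hand side is only a cheap
# triangular back-substitution against the stored factors.
B = np.random.default_rng(0).standard_normal((n, 8))
X = lu.solve(B)

assert np.allclose(A @ X, B, atol=1e-8)
```

Iterative methods, by contrast, generally restart their work for every new right-hand side unless recycling techniques are used, which is one reason direct solvers are competitive for multi-source surveys.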
Taming the non-linearity problem in GPR full-waveform inversion for high contrast media
NASA Astrophysics Data System (ADS)
Meles, Giovanni; Greenhalgh, Stewart; van der Kruk, Jan; Green, Alan; Maurer, Hansruedi
2012-03-01
We present a new algorithm for the inversion of full-waveform ground-penetrating radar (GPR) data. It is designed to tame the non-linearity issue that afflicts inverse scattering problems, especially in high contrast media. We first investigate the limitations of current full-waveform time-domain inversion schemes for GPR data and then introduce a much-improved approach based on a combined frequency-time-domain analysis. We show by means of several synthetic tests and theoretical considerations that local minima trapping (common in full bandwidth time-domain inversion) can be avoided by starting the inversion with only the low frequency content of the data. Resolution associated with the high frequencies can then be achieved by progressively expanding to wider bandwidths as the iterations proceed. Although based on a frequency analysis of the data, the new method is entirely implemented by means of a time-domain forward solver, thus combining the benefits of both frequency-domain (low frequency inversion conveys stability and avoids convergence to a local minimum; whereas high frequency inversion conveys resolution) and time-domain methods (simplicity of interpretation and recognition of events; ready availability of FDTD simulation tools).
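The low-frequency-first rationale can be illustrated in one dimension: a least-squares misfit between a signal and a time-shifted copy of itself is highly oscillatory for a high-frequency waveform (many local minima, i.e. cycle skipping), but has a single broad basin over the same shift range for a low-frequency waveform. This toy uses pure sinusoids and is only a caricature of the GPR inversion landscape.

```python
import numpy as np

t = np.linspace(0.0, 2.0, 2001)
shifts = np.linspace(-0.4, 0.4, 161)

def misfit_curve(freq):
    """L2 misfit between a sinusoid and its time-shifted copy, vs shift."""
    ref = np.sin(2 * np.pi * freq * t)
    return np.array([np.sum((np.sin(2 * np.pi * freq * (t - s)) - ref) ** 2)
                     for s in shifts])

def local_minima(curve):
    """Count strict interior local minima of a sampled curve."""
    return int(np.sum((curve[1:-1] < curve[:-2]) & (curve[1:-1] < curve[2:])))

n_low = local_minima(misfit_curve(1.0))   # low frequency: one broad basin
n_high = local_minima(misfit_curve(5.0))  # high frequency: cycle skipping
assert n_low < n_high
```

A gradient-based inversion started anywhere in the low-frequency basin reaches the true shift; starting the same descent on the high-frequency misfit risks convergence to a cycle-skipped local minimum, which is exactly why the algorithm widens the bandwidth only as the iterations proceed.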
NASA Technical Reports Server (NTRS)
Belcastro, Christine M.
1998-01-01
Robust control system analysis and design is based on an uncertainty description, called a linear fractional transformation (LFT), which separates the uncertain (or varying) part of the system from the nominal system. These models are also useful in the design of gain-scheduled control systems based on Linear Parameter Varying (LPV) methods. Low-order LFT models are difficult to form for problems involving nonlinear parameter variations. This paper presents a numerical computational method for constructing an LFT model for a given LPV model. The method is developed for multivariate polynomial problems, and uses simple matrix computations to obtain an exact low-order LFT representation of the given LPV system without the use of model reduction. Although the method is developed for multivariate polynomial problems, multivariate rational problems can also be solved using this method by reformulating the rational problem into a polynomial form.
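The LFT machinery referred to above can be illustrated for the simplest affine case: a matrix M(δ) = A + δB can be written exactly as an upper LFT F_u(P, δI_r) with r = rank(B), via a low-rank factorization B = LR. The sketch below is a hand-rolled special case with hypothetical matrices, not the paper's general multivariate-polynomial construction.

```python
import numpy as np

def upper_lft(P11, P12, P21, P22, Delta):
    """Upper LFT: F_u(P, Delta) = P22 + P21 Delta (I - P11 Delta)^{-1} P12."""
    r = Delta.shape[0]
    return P22 + P21 @ Delta @ np.linalg.inv(np.eye(r) - P11 @ Delta) @ P12

A = np.array([[1.0, 0.0], [0.0, 2.0]])
B = np.array([[1.0, 2.0], [2.0, 4.0]])     # rank 1, so the LFT needs only r = 1
# Rank-1 factorization B = L @ R.
L = np.array([[1.0], [2.0]])
R = np.array([[1.0, 2.0]])

delta = 0.3
Delta = delta * np.eye(1)                  # repeated-scalar uncertainty block
P11 = np.zeros((1, 1))                     # affine dependence: no feedback term
M = upper_lft(P11, R, L, A, Delta)
assert np.allclose(M, A + delta * B)       # exact, low-order LFT of A + delta*B
```

Polynomial and rational dependence on δ require a non-zero P11 block (the feedback through (I - P11Δ)⁻¹ generates the higher powers), which is where keeping the Δ block small becomes the hard part the paper addresses.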
Aksenov, V. L.; Kiselev, M. A.
2010-12-15
General problems of the complementarity of different physical methods and specific features of the interaction between neutron and matter and neutron diffraction with respect to the time of flight are discussed. The results of studying the kinetics of structural changes in lipid membranes under hydration and self-assembly of the lipid bilayer in the presence of a detergent are reported. The possibilities of the complementarity of neutron diffraction and X-ray synchrotron radiation and developing a free-electron laser are noted.
NASA Astrophysics Data System (ADS)
Xu, Jiuping; Li, Jun
2002-09-01
In this paper a class of stochastic multiple-objective programming problems with one quadratic, several linear objective functions and linear constraints has been introduced. The original model is transformed into a deterministic multiple-objective nonlinear programming model by means of the introduction of the random variables' expectations. The reference direction approach is used to deal with the linear objectives and results in a linear parametric optimization formula with a single linear objective function. This objective function is combined with the quadratic function using weighted sums. The quadratic problem is transformed into a linear (parametric) complementarity problem, the basic formulation for the proposed approach. The sufficient and necessary conditions for (properly, weakly) efficient solutions and some construction characteristics of (weakly) efficient solution sets are obtained. An interactive algorithm is proposed based on reference direction and weighted sums. Varying the parameter vector on the right-hand side of the model, the decision maker (DM) can freely search the efficient frontier with the model. An extended portfolio selection model is formed when liquidity is considered as another objective to be optimized besides expectation and risk. The interactive approach is illustrated with a practical example.
Anastassi, Z. A.; Simos, T. E.
2010-09-30
We develop a new family of explicit symmetric linear multistep methods for the efficient numerical solution of the Schroedinger equation and related problems with oscillatory solution. The new methods are trigonometrically fitted and have improved intervals of periodicity as compared to the corresponding classical method with constant coefficients and other methods from the literature. We also apply the methods along with other known methods to real periodic problems, in order to measure their efficiency.
A nodal inverse problem for a quasi-linear ordinary differential equation in the half-line
NASA Astrophysics Data System (ADS)
Pinasco, Juan P.; Scarola, Cristian
2016-07-01
In this paper we study an inverse problem for a quasi-linear ordinary differential equation with a monotonic weight in the half-line. First, we find the asymptotic behavior of the singular eigenvalues, and we obtain a Weyl-type asymptotics imposing an appropriate integrability condition on the weight. Then, we investigate the inverse problem of recovering the coefficients from nodal data. We show that any dense subset of nodes of the eigenfunctions is enough to recover the weight.
NASA Astrophysics Data System (ADS)
Jain, Ruchika; Sinha, Deepa
2014-09-01
The non-linear stability of L 4 in the restricted three-body problem when both primaries are finite straight segments in the presence of third and fourth order resonances has been investigated. Markeev's theorem (Markeev in Libration Points in Celestial Mechanics and Astrodynamics, 1978) is used to examine the non-linear stability for the resonance cases 2:1 and 3:1. It is found that the non-linear stability of L 4 depends on the lengths of the segments in both resonance cases. It is also found that the range of stability increases when compared with the classical restricted problem. The results have been applied in the following asteroid systems: (i) 216 Kleopatra-951 Gaspra, (ii) 9 Metis-433 Eros, (iii) 22 Kalliope-243 Ida.
Hinselmann, Georg; Rosenbaum, Lars; Jahn, Andreas; Fechner, Nikolas; Ostermann, Claude; Zell, Andreas
2011-02-28
The goal of this study was to adapt a recently proposed linear large-scale support vector machine to large-scale binary cheminformatics classification problems and to assess its performance on various benchmarks using virtual screening performance measures. We extended the large-scale linear support vector machine library LIBLINEAR with state-of-the-art virtual high-throughput screening metrics to train classifiers on whole large and unbalanced data sets. The formulation of this linear support vector machine performs excellently when applied to high-dimensional sparse feature vectors. An additional advantage is the average linear complexity of a prediction in the number of non-zero features. Nevertheless, the approach assumes that a problem is linearly separable. Therefore, we conducted extensive benchmarking to evaluate the performance on large-scale problems of up to 175,000 samples. To examine the virtual screening performance, we determined the chemotype clusters using Feature Trees and integrated this information to compute weighted AUC-based performance measures and a leave-cluster-out cross-validation. We also considered the BEDROC score, a metric that was suggested to tackle the early enrichment problem. The performance on each problem was evaluated by a nested cross-validation and a nested leave-cluster-out cross-validation. We compared LIBLINEAR against a Naïve Bayes classifier, a random decision forest classifier, and a maximum similarity ranking approach. LIBLINEAR outperformed these reference approaches in a direct comparison. A comparison to literature results showed that the LIBLINEAR performance is competitive, though it does not match the top-ranked nonlinear machines on these benchmarks. However, considering the overall convincing performance and computation time of the large-scale support vector machine, the approach provides an excellent alternative to established large-scale classification approaches.
NASA Astrophysics Data System (ADS)
Arioli, M.; Gratton, S.
2012-11-01
Minimum-variance unbiased estimates for linear regression models can be obtained by solving least-squares problems. The conjugate gradient method can be successfully used in solving the symmetric and positive definite normal equations obtained from these least-squares problems. Taking into account the results of Golub and Meurant (1997, 2009) [10,11], Hestenes and Stiefel (1952) [17], and Strakoš and Tichý (2002) [16], which make it possible to approximate the energy norm of the error during the conjugate gradient iterative process, we adapt the stopping criterion introduced by Arioli (2005) [18] to the normal equations taking into account the statistical properties of the underpinning linear regression problem. Moreover, we show how the energy norm of the error is linked to the χ2-distribution and to the Fisher-Snedecor distribution. Finally, we present the results of several numerical tests that experimentally validate the effectiveness of our stopping criteria.
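The stopping rule described in this abstract can be sketched numerically. Below is a minimal conjugate-gradient solver for the normal equations that estimates the energy norm of the error with the windowed Hestenes-Stiefel sum; the delay length `d` and tolerance are illustrative assumptions, not the parameters of the cited paper.

```python
import numpy as np

def cg_normal_equations(A, b, d=5, tol=1e-8, maxit=500):
    """CG on the normal equations A^T A x = A^T b.

    The windowed Hestenes-Stiefel sum  sum_{i=k}^{k+d-1} alpha_i ||r_i||^2
    approximates the squared energy norm of the error at step k; d and tol
    are illustrative choices, not values from the paper.
    """
    M, f = A.T @ A, A.T @ b
    x = np.zeros(A.shape[1])
    r = f.copy()          # residual of the normal equations (x = 0 start)
    p = r.copy()
    rs = r @ r
    gains = []            # the alpha_i * ||r_i||^2 terms
    for _ in range(maxit):
        Mp = M @ p
        alpha = rs / (p @ Mp)
        x += alpha * p
        gains.append(alpha * rs)
        r -= alpha * Mp
        rs_new = r @ r
        # stop when the residual is negligible or the estimated
        # energy-norm error (for the iterate d steps back) is tiny
        if rs_new < 1e-28 or (len(gains) >= d and sum(gains[-d:]) < tol**2):
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 8))
b = rng.standard_normal(50)
x = cg_normal_equations(A, b)
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x, x_ref, atol=1e-6))
```

In a statistical setting, the paper links this estimated energy norm to chi-squared and Fisher-Snedecor quantiles rather than a fixed tolerance.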
Skill complementarity enhances heterophily in collaboration networks.
Xie, Wen-Jie; Li, Ming-Xia; Jiang, Zhi-Qiang; Tan, Qun-Zhao; Podobnik, Boris; Zhou, Wei-Xing; Stanley, H Eugene
2016-01-01
Much empirical evidence shows that individuals usually exhibit significant homophily in social networks. We demonstrate, however, skill complementarity enhances heterophily in the formation of collaboration networks, where people prefer to forge social ties with people who have professions different from their own. We construct a model to quantify the heterophily by assuming that individuals choose collaborators to maximize utility. Using a huge database of online societies, we find evidence of heterophily in collaboration networks. The results of model calibration confirm the presence of heterophily. Both empirical analysis and model calibration show that the heterophilous feature is persistent along the evolution of online societies. Furthermore, the degree of skill complementarity is positively correlated with their production output. Our work sheds new light on the scientific research utility of virtual worlds for studying human behaviors in complex socioeconomic systems. PMID:26743687
Skill complementarity enhances heterophily in collaboration networks
NASA Astrophysics Data System (ADS)
Xie, Wen-Jie; Li, Ming-Xia; Jiang, Zhi-Qiang; Tan, Qun-Zhao; Podobnik, Boris; Zhou, Wei-Xing; Stanley, H. Eugene
2016-01-01
Much empirical evidence shows that individuals usually exhibit significant homophily in social networks. We demonstrate, however, skill complementarity enhances heterophily in the formation of collaboration networks, where people prefer to forge social ties with people who have professions different from their own. We construct a model to quantify the heterophily by assuming that individuals choose collaborators to maximize utility. Using a huge database of online societies, we find evidence of heterophily in collaboration networks. The results of model calibration confirm the presence of heterophily. Both empirical analysis and model calibration show that the heterophilous feature is persistent along the evolution of online societies. Furthermore, the degree of skill complementarity is positively correlated with their production output. Our work sheds new light on the scientific research utility of virtual worlds for studying human behaviors in complex socioeconomic systems.
Horizons of description: Black holes and complementarity
NASA Astrophysics Data System (ADS)
Bokulich, Peter Joshua Martin
Niels Bohr famously argued that a consistent understanding of quantum mechanics requires a new epistemic framework, which he named complementarity. This position asserts that even in the context of quantum theory, classical concepts must be used to understand and communicate measurement results. The apparent conflict between certain classical descriptions is avoided by recognizing that their application now crucially depends on the measurement context. Recently it has been argued that a new form of complementarity can provide a solution to the so-called information loss paradox. Stephen Hawking argues that the evolution of black holes cannot be described by standard unitary quantum evolution, because such evolution always preserves information, while the evaporation of a black hole will imply that any information that fell into it is irrevocably lost---hence a "paradox." Some researchers in quantum gravity have argued that this paradox can be resolved if one interprets certain seemingly incompatible descriptions of events around black holes as instead being complementary. In this dissertation I assess the extent to which this black hole complementarity can be undergirded by Bohr's account of the limitations of classical concepts. I begin by offering an interpretation of Bohr's complementarity and the role that it plays in his philosophy of quantum theory. After clarifying the nature of classical concepts, I offer an account of the limitations these concepts face, and argue that Bohr's appeal to disturbance is best understood as referring to these conceptual limits. Following preparatory chapters on issues in quantum field theory and black hole mechanics, I offer an analysis of the information loss paradox and various responses to it. I consider the three most prominent accounts of black hole complementarity and argue that they fail to offer sufficient justification for the proposed incompatibility between descriptions. The lesson that emerges from this
Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems
Van Benthem, Mark H.; Keenan, Michael R.
2008-11-11
A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
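As a rough illustration of the problem class (not the authors' algorithm), the sketch below solves a multi-right-hand-side non-negative least-squares problem column by column with a standard solver and counts the distinct passive (unconstrained) variable sets; the combinatorial speed-up comes from solving each distinct set's unconstrained subproblem only once across many observation vectors. All sizes here are illustrative.

```python
import numpy as np
from scipy.optimize import nnls

# Toy stand-in: min ||A x_j - b_j|| with x_j >= 0 for many observation
# vectors b_j. A combinatorial algorithm groups columns sharing the same
# passive set; here we solve per column and then count the distinct sets.
rng = np.random.default_rng(1)
A = np.abs(rng.standard_normal((30, 4)))
B = np.abs(rng.standard_normal((30, 200)))

X = np.column_stack([nnls(A, B[:, j])[0] for j in range(B.shape[1])])
passive_sets = {tuple(X[:, j] > 0) for j in range(X.shape[1])}
print(len(passive_sets))   # far fewer distinct sets than 200 columns
```

With only 4 variables there are at most 16 passive sets, so grouped solves amortize the per-column cost dramatically on large problems.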
Influence of geometrical parameters on the linear stability of a Bénard-Marangoni problem.
Hoyas, S; Fajardo, P; Pérez-Quiles, M J
2016-04-01
A linear stability analysis of a thin liquid film flowing over a plate is performed. The analysis is performed in an annular domain when momentum diffusivity and thermal diffusivity are comparable (relatively low Prandtl number, Pr=1.2). The influence of the aspect ratio (Γ) and gravity, through the Bond number (Bo), in the linear stability of the flow are analyzed together. Two different regions in the Γ-Bo plane have been identified. In the first one the basic state presents a linear regime (in which the temperature gradient does not change sign with r). In the second one, the flow presents a nonlinear regime, also called return flow. A great diversity of bifurcations have been found just by changing the domain depth d. The results obtained in this work are in agreement with some reported experiments, and give a deeper insight into the effect of physical parameters on bifurcations. PMID:27176388
Influence of geometrical parameters on the linear stability of a Bénard-Marangoni problem
NASA Astrophysics Data System (ADS)
Hoyas, S.; Fajardo, P.; Pérez-Quiles, M. J.
2016-04-01
A linear stability analysis of a thin liquid film flowing over a plate is performed. The analysis is performed in an annular domain when momentum diffusivity and thermal diffusivity are comparable (relatively low Prandtl number, Pr = 1.2). The influence of the aspect ratio (Γ) and gravity, through the Bond number (Bo), in the linear stability of the flow are analyzed together. Two different regions in the Γ-Bo plane have been identified. In the first one the basic state presents a linear regime (in which the temperature gradient does not change sign with r). In the second one, the flow presents a nonlinear regime, also called return flow. A great diversity of bifurcations have been found just by changing the domain depth d. The results obtained in this work are in agreement with some reported experiments, and give a deeper insight into the effect of physical parameters on bifurcations.
Epistemological Dimensions in Niels Bohr's Conceptualization of Complementarity
NASA Astrophysics Data System (ADS)
Derry, Gregory
2008-03-01
Contemporary explications of quantum theory are uniformly ahistorical in their accounts of complementarity. Such accounts typically present complementarity as a physical principle that prohibits simultaneous measurements of certain dynamical quantities or behaviors, attributing this principle to Niels Bohr. This conceptualization of complementarity, however, is virtually devoid of content and is only marginally related to Bohr's actual writing on the topic. Instead, what Bohr presented was a subtle and complex epistemological argument in which complementarity is a shorthand way to refer to an inclusive framework for the logical analysis of ideas. The important point to notice, historically, is that Bohr's work involving complementarity is not intended to be an improvement or addition to a particular physical theory (quantum mechanics), which Bohr regarded as already complete. Bohr's work involving complementarity is actually an argument related to the goals, meaning, and limitations of physical theory itself, grounded in deep epistemological considerations stemming from the fundamental discontinuity of nature on a microscopic scale.
NASA Technical Reports Server (NTRS)
Sain, M. K.; Antsaklis, P. J.; Gejji, R. R.; Wyman, B. F.; Peczkowski, J. L.
1981-01-01
Zames (1981) has observed that there is, in general, no 'separation principle' to guarantee optimality of a division between control law design and filtering of plant uncertainty. Peczkowski and Sain (1978) have solved a model matching problem using transfer functions. Taking into consideration this investigation, Peczkowski et al. (1979) proposed the Total Synthesis Problem (TSP), wherein both the command/output-response and command/control-response are to be synthesized, subject to the plant constraint. The TSP concept can be subdivided into a Nominal Design Problem (NDP), which is not dependent upon specific controller structures, and a Feedback Synthesis Problem (FSP), which is. Gejji (1980) found that NDP was characterized in terms of the plant structural matrices and a single, 'good' transfer function matrix. Sain et al. (1981) have extended this NDP work. The present investigation is concerned with a study of FSP for the unity feedback case. NDP, together with feedback synthesis, is understood as a Total Synthesis Problem.
Constructive Processes in Linear Order Problems Revealed by Sentence Study Times
ERIC Educational Resources Information Center
Mynatt, Barbee T.; Smith, Kirk H.
1977-01-01
This research was a further test of the theory of constructive processes proposed by Foos, Smith, Sabol, and Mynatt (1976) to account for differences among presentation orders in the construction of linear orders. This theory is composed of different series of mental operations that must be performed when an order relationship is integrated with…
NASA Technical Reports Server (NTRS)
Rosen, I. G.; Wang, C.
1990-01-01
The convergence of solutions to the discrete or sampled time linear quadratic regulator problem and associated Riccati equation for infinite dimensional systems to the solutions to the corresponding continuous time problem and equation, as the length of the sampling interval (the sampling rate) tends toward zero (infinity) is established. Both the finite and infinite time horizon problems are studied. In the finite time horizon case, strong continuity of the operators which define the control system and performance index together with a stability and consistency condition on the sampling scheme are required. For the infinite time horizon problem, in addition, the sampled systems must be stabilizable and detectable, uniformly with respect to the sampling rate. Classes of systems for which this condition can be verified are discussed. Results of numerical studies involving the control of a heat/diffusion equation, a hereditary or delay system, and a flexible beam are presented and discussed.
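The convergence statement can be seen already in a scalar example. The sketch below (all coefficients illustrative, with a simple rectangle-rule sampling of the cost) iterates the sampled-time Riccati equation to its fixed point and checks that it approaches the continuous-time Riccati solution as the sampling interval h shrinks.

```python
import numpy as np

# Scalar system x' = a x + b u with cost integral of q x^2 + r u^2.
a, b, q, r = 1.0, 1.0, 2.0, 1.0

# Continuous-time algebraic Riccati equation: 2aP + q - (b^2/r) P^2 = 0.
P_cont = r * (a + np.sqrt(a**2 + q * b**2 / r)) / b**2

def sampled_riccati(h, iters=20000):
    Ad = np.exp(a * h)                       # exact zero-order-hold sampling
    Bd = (np.exp(a * h) - 1.0) * b / a
    Qd, Rd = q * h, r * h                    # rectangle-rule cost weights
    P = 0.0
    for _ in range(iters):                   # value iteration on the DARE
        P = Ad * P * Ad - (Ad * P * Bd)**2 / (Rd + Bd * P * Bd) + Qd
    return P

errs = [abs(sampled_riccati(h) - P_cont) for h in (0.1, 0.01, 0.001)]
print(errs[0] > errs[1] > errs[2])   # error shrinks with the sampling step
```

The infinite-dimensional result adds uniform stabilizability/detectability conditions that this scalar toy trivially satisfies.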
NASA Technical Reports Server (NTRS)
Rosen, I. G.; Wang, C.
1992-01-01
The convergence of solutions to the discrete- or sampled-time linear quadratic regulator problem and associated Riccati equation for infinite-dimensional systems to the solutions to the corresponding continuous time problem and equation, as the length of the sampling interval (the sampling rate) tends toward zero(infinity) is established. Both the finite-and infinite-time horizon problems are studied. In the finite-time horizon case, strong continuity of the operators that define the control system and performance index, together with a stability and consistency condition on the sampling scheme are required. For the infinite-time horizon problem, in addition, the sampled systems must be stabilizable and detectable, uniformly with respect to the sampling rate. Classes of systems for which this condition can be verified are discussed. Results of numerical studies involving the control of a heat/diffusion equation, a hereditary or delay system, and a flexible beam are presented and discussed.
Zhuk, Sergiy
2013-10-15
In this paper we present the Kalman duality principle for a class of linear Differential-Algebraic Equations (DAE) with arbitrary index and time-varying coefficients. We apply it to an ill-posed minimax control problem with a DAE constraint and derive a corresponding dual control problem. It turns out that the dual problem is ill-posed as well, so classical optimality conditions are not applicable in the general case. We construct a minimizing sequence û_ε for the dual problem by applying the Tikhonov method. Finally, we represent û_ε in feedback form using a Riccati equation on a subspace which corresponds to the differential part of the DAE.
NASA Astrophysics Data System (ADS)
Noor-E-Alam, Md.; Doucette, John
2015-08-01
Grid-based location problems (GBLPs) can be used to solve location problems in business, engineering, resource exploitation, and even in the field of medical sciences. To solve these decision problems, an integer linear programming (ILP) model is designed and developed to provide the optimal solution for GBLPs considering fixed cost criteria. Preliminary results show that the ILP model is efficient in solving small to moderate-sized problems. However, this ILP model becomes intractable in solving large-scale instances. Therefore, a decomposition heuristic is proposed to solve these large-scale GBLPs, which demonstrates significant reduction of solution runtimes. To benchmark the proposed heuristic, results are compared with the exact solution via ILP. The experimental results show that the proposed method significantly outperforms the exact method in runtime with minimal (and in most cases, no) loss of optimality.
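A toy version of a fixed-cost GBLP makes the objective concrete. The brute-force sketch below is an illustrative stand-in of my own (not the paper's ILP model or decomposition heuristic): it picks facility cells on a small grid to minimize fixed opening costs plus each demand cell's Manhattan distance to its nearest open facility. The grid size, demand points, and cost are assumptions.

```python
from itertools import combinations, product

GRID = 4
demands = [(0, 0), (3, 3), (0, 3), (3, 0), (2, 1)]
sites = list(product(range(GRID), repeat=2))   # every grid cell is a candidate
fixed_cost = 4.0                               # cost to open one facility

def cost(open_sites):
    # service cost: each demand is served by its nearest open facility
    serve = sum(min(abs(dx - sx) + abs(dy - sy) for sx, sy in open_sites)
                for dx, dy in demands)
    return fixed_cost * len(open_sites) + serve

# exhaustive search over all 1-, 2-, and 3-facility configurations
best = min((cost(S), S) for k in (1, 2, 3) for S in combinations(sites, k))
print(best[0])   # optimal total cost for this toy instance
```

An ILP formulation replaces the enumeration with binary open/assign variables, which is what becomes intractable at large scale and motivates the decomposition heuristic.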
NASA Astrophysics Data System (ADS)
Eghnam, Karam M.; Sheta, Alaa F.
2008-06-01
Development of accurate models is necessary in critical applications such as prediction. In this paper, a solution to the stock prediction problem of the Barents Sea capelin is introduced using Artificial Neural Network (ANN) and Multiple Linear Regression (MLR) models. The capelin stock in the Barents Sea is one of the largest in the world. It normally maintained a fishery with annual catches of up to 3 million tons. The capelin stock problem has an impact on fish stock development. The proposed prediction model was developed using ANNs with their weights adapted using a Genetic Algorithm (GA). The proposed model was compared to the traditional linear MLR model. The results showed that the ANN-GA model produced an overall accuracy 21% better than the MLR model.
A convex complementarity approach for simulating large granular flows.
Tasora, A.; Anitescu, M.; Mathematics and Computer Science; Univ. degli Studi di Parma
2010-07-01
Aiming at the simulation of dense granular flows, we propose and test a numerical method based on successive convex complementarity problems. This approach originates from a multibody description of the granular flow: all the particles are simulated as rigid bodies with arbitrary shapes and frictional contacts. Unlike the discrete element method (DEM), the proposed approach does not require the small integration time steps typical of stiff particle interaction; this fact, together with the development of optimized algorithms that can run also on parallel computing architectures, allows an efficient application of the proposed methodology to granular flows with a large number of particles. We present an application to the analysis of the refueling flow in pebble-bed nuclear reactors. Extensive validation of our method against both DEM and physical experiment results indicates that essential collective characteristics of dense granular flow are accurately predicted.
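The per-step subproblem behind such contact methods is a complementarity problem. As a minimal sketch, the projected Gauss-Seidel iteration below solves a tiny linear complementarity problem (find z ≥ 0 with w = Mz + q ≥ 0 and zᵀw = 0); it is a much simpler stand-in for the optimized convex solvers the paper develops, and the matrix and vector are illustrative.

```python
import numpy as np

def lcp_pgs(M, q, iters=200):
    """Projected Gauss-Seidel for the LCP: z >= 0, Mz + q >= 0, z.(Mz+q) = 0."""
    z = np.zeros_like(q)
    for _ in range(iters):
        for i in range(len(q)):
            # sweep one coordinate at a time, projecting onto z_i >= 0
            z[i] = max(0.0, z[i] - (q[i] + M[i] @ z) / M[i, i])
    return z

M = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
q = np.array([-1.0, 2.0])
z = lcp_pgs(M, q)
w = M @ z + q
print(np.all(z >= 0) and np.all(w >= -1e-9) and abs(z @ w) < 1e-9)
```

In a contact simulation z plays the role of contact impulses and w the role of separation velocities; one such problem is solved per time step.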
From Wave-Particle to Features-Event Complementarity
NASA Astrophysics Data System (ADS)
Auletta, G.; Torcal, L.
2011-12-01
The terms wave and particle are of classical origin and are inadequate in dealing with the novelties of quantum mechanics with respect to classical physics. In this paper we propose to substitute the wave-particle terminology with that of features-event complementarity. This approach aims at solving some of the problems affecting quantum-mechanics since its birth. In our terminology, features are what is responsible for one of the most characterizing aspects of quantum mechanics: quantum correlations. We suggest that an ( uninterpreted) basic ontology for quantum mechanics should be thought of as constituted by events, features and their dynamical interplay, and that its ( interpreted) theoretical ontology (made up by three classes of theoretical entities: states, observables and properties) does not isomorphically correspond to the uninterpreted ontology. Operations, i.e. concrete interventions within the physical world, like preparation, premeasurement and measurement, together with reliable inferences, assure the bridge between interpreted and uninterpreted ontology.
Cauchy problem for non-linear systems of equations in the critical case
NASA Astrophysics Data System (ADS)
Kaikina, E. I.; Naumkin, P. I.; Shishmarev, I. A.
2004-12-01
The large-time asymptotic behaviour is studied for a system of non-linear evolution dissipative equations \[ u_t+\mathscr N(u,u)+\mathscr Lu=0, \qquad x\in\mathbb R^n, \quad t>0, \qquad u(0,x)=\widetilde u(x), \qquad x\in\mathbb R^n, \] where \mathscr L is a linear pseudodifferential operator \mathscr Lu=\overline{\mathscr F}_{\xi\to x}(L(\xi)\widehat u(\xi)) and the non-linearity \mathscr N is a quadratic pseudodifferential operator \[ \mathscr N(u,u)=\overline{\mathscr F}_{\xi\to x}\sum_{k,l=1}^m\int_{\mathbb R^n}A^{kl}(t,\xi,y)\widehat u_k(t,\xi-y)\widehat u_l(t,y)\,dy, \] where \widehat u\equiv\mathscr F_{x\to\xi}u is the Fourier transform. Under the assumptions that the initial data \widetilde u\in\mathbf H^{\beta,0}\cap\mathbf H^{0,\beta}, \beta>n/2, are sufficiently small, where \[ \mathbf H^{n,m}=\{\phi\in\mathbf L^2:\Vert\langle x\rangle^m\langle i\partial_x\rangle^n\phi(x)\Vert_{\mathbf L^2}<\infty\}, \qquad \langle x\rangle=\sqrt{1+x^2}\,, \] is a weighted Sobolev space, and that the total mass vector M=\int\widetilde u(x)\,dx …
NASA Astrophysics Data System (ADS)
Sanan, P.; Schnepp, S. M.; May, D.; Schenk, O.
2014-12-01
Geophysical applications require efficient forward models for non-linear Stokes flow on high resolution spatio-temporal domains. The bottleneck in applying the forward model is solving the linearized, discretized Stokes problem which takes the form of a large, indefinite (saddle point) linear system. Due to the heterogeneity of the effective viscosity in the elliptic operator, devising effective preconditioners for saddle point problems has proven challenging and highly problem-dependent. Nevertheless, at least three approaches show promise for preconditioning these difficult systems in an algorithmically scalable way using multigrid and/or domain decomposition techniques. The first is to work with a hierarchy of coarser or smaller saddle point problems. The second is to use the Schur complement method to decouple and sequentially solve for the pressure and velocity. The third is to use the Schur decomposition to devise preconditioners for the full operator. These involve sub-solves resembling inexact versions of the sequential solve. The choice of approach and sub-methods depends crucially on the motivating physics, the discretization, and available computational resources. Here we examine the performance trade-offs for preconditioning strategies applied to idealized models of mantle convection and lithospheric dynamics, characterized by large viscosity gradients. Due to the arbitrary topological structure of the viscosity field in geodynamical simulations, we utilize low order, inf-sup stable mixed finite element spatial discretizations which are suitable when sharp viscosity variations occur in element interiors. Particular attention is paid to possibilities within the decoupled and approximate Schur complement factorization-based monolithic approaches to leverage recently-developed flexible, communication-avoiding, and communication-hiding Krylov subspace methods in combination with `heavy' smoothers, which require solutions of large per-node sub-problems, well
Linear and nonlinear pattern selection in Rayleigh-Benard stability problems
NASA Technical Reports Server (NTRS)
Davis, Sanford S.
1993-01-01
A new algorithm is introduced to compute finite-amplitude states using primitive variables for Rayleigh-Benard convection on relatively coarse meshes. The algorithm is based on a finite-difference matrix-splitting approach that separates all physical and dimensional effects into one-dimensional subsets. The nonlinear pattern selection process for steady convection in an air-filled square cavity with insulated side walls is investigated for Rayleigh numbers up to 20,000. The internalization of disturbances that evolve into coherent patterns is investigated and transient solutions from linear perturbation theory are compared with and contrasted to the full numerical simulations.
ERIC Educational Resources Information Center
Lawrence, Virginia
No longer just a user of commercial software, the 21st century teacher is a designer of interactive software based on theories of learning. This software, a comprehensive study of straight-line equations, enhances conceptual understanding, sketching, graphic interpretive and word-problem-solving skills as well as making connections to real-life and…
ERIC Educational Resources Information Center
Stamovlasis, Dimitrios
2010-01-01
The aim of the present paper is two-fold. First, it attempts to support previous findings on the role of some psychometric variables, such as, M-capacity, the degree of field dependence-independence, logical thinking and the mobility-fixity dimension, on students' achievement in chemistry problem solving. Second, the paper aims to raise some…
Linear perturbative theory of the discrete cosmological N-body problem
Marcos, B.; Baertschiger, T.; Joyce, M.; Gabrielli, A.; Labini, F. Sylos
2006-05-15
We present a perturbative treatment of the evolution under their mutual self-gravity of particles displaced off an infinite perfect lattice, both for a static space and for a homogeneously expanding space as in cosmological N-body simulations. The treatment, analogous to that of perturbations to a crystal in solid state physics, can be seen as a discrete (i.e. particle) generalization of the perturbative solution in the Lagrangian formalism of a self-gravitating fluid. Working to linear order, we show explicitly that this fluid evolution is recovered in the limit that the initial perturbations are restricted to modes of wavelength much larger than the lattice spacing. The full spectrum of eigenvalues of the simple cubic lattice contains both oscillatory modes and unstable modes which grow slightly faster than in the fluid limit. A detailed comparison of our perturbative treatment, at linear order, with full numerical simulations is presented, for two very different classes of initial perturbation spectra. We find that the range of validity is similar to that of the perturbative fluid approximation (i.e. up to close to ''shell-crossing''), but that the accuracy in tracing the evolution is superior. The formalism provides a powerful tool to systematically calculate discreteness effects at early times in cosmological N-body simulations.
The incomplete inverse and its applications to the linear least squares problem
NASA Technical Reports Server (NTRS)
Morduch, G. E.
1977-01-01
A modified matrix product is explained, and it is shown that this product defines a group whose inverse is called the incomplete inverse. It is proven that the incomplete inverse of an augmented normal matrix includes all the quantities associated with the least squares solution. An answer is provided to the problem that arises when the data residuals are too large and there are insufficient data to justify augmenting the model.
A linear decomposition method for large optimization problems. Blueprint for development
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.
1982-01-01
A method is proposed for decomposing large optimization problems encountered in the design of engineering systems such as an aircraft into a number of smaller subproblems. The decomposition is achieved by organizing the problem and the subordinated subproblems in a tree hierarchy and optimizing each subsystem separately. Coupling of the subproblems is accounted for by subsequent optimization of the entire system based on sensitivities of the suboptimization problem solutions at each level of the tree to variables of the next higher level. A formalization of the procedure suitable for computer implementation is developed and the state of readiness of the implementation building blocks is reviewed showing that the ingredients for the development are on the shelf. The decomposition method is also shown to be compatible with the natural human organization of the design process of engineering systems. The method is also examined with respect to the trends in computer hardware and software progress to point out that its efficiency can be amplified by network computing using parallel processors.
NASA Astrophysics Data System (ADS)
Moroni, Giovanni; Syam, Wahyudin P.; Petrò, Stefano
2014-08-01
Product quality is a main concern today in manufacturing; it drives competition between companies. To ensure high quality, a dimensional inspection to verify the geometric properties of a product must be carried out. High-speed non-contact scanners help with this task, by both speeding up acquisition and increasing accuracy through a more complete description of the surface. The algorithms for the management of the measurement data play a critical role in ensuring both the measurement accuracy and speed of the device. One of the most fundamental parts of the algorithm is the procedure for fitting the substitute geometry to a cloud of points. This article addresses this challenge. Three relevant geometries are selected as case studies: non-linear least-squares fitting of a circle, sphere and cylinder. These geometries are chosen in consideration of their common use in practice; for example, the sphere is often adopted as a reference artifact for performance verification of a coordinate measuring machine (CMM), and a cylinder is the most relevant geometry for a pin-hole relation as an assembly feature to construct a complete functioning product. In this article, an improvement of the initial point guess for the Levenberg-Marquardt (LM) algorithm by employing a chaos optimization (CO) method is proposed. This improves the performance of the optimization of the non-linear function fitting the three geometries. The results show that, with this combination, higher-quality fitting results (a smaller norm of the residuals) can be obtained while preserving the computational cost. Fitting an 'incomplete point cloud', a situation where the point cloud does not cover a complete feature (e.g. only half of the total part surface), is also investigated. Finally, a case study of fitting a hemisphere is presented.
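A minimal sketch of such a non-linear least-squares circle fit using the Levenberg-Marquardt method is shown below; the centroid-based starting point here is only a simple stand-in for the chaos-optimized initial guess proposed in the article, and the synthetic data are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic noisy points on a circle of center (3, -1) and radius 2.
rng = np.random.default_rng(2)
t = rng.uniform(0, 2 * np.pi, 100)
true_c, true_r = np.array([3.0, -1.0]), 2.0
pts = true_c + true_r * np.column_stack([np.cos(t), np.sin(t)])
pts += 0.01 * rng.standard_normal(pts.shape)

def residuals(p):
    # p = (xc, yc, R); signed radial residuals of each point
    return np.hypot(pts[:, 0] - p[0], pts[:, 1] - p[1]) - p[2]

# Initial guess: centroid and mean radius (stand-in for the CO seeding).
c0 = pts.mean(axis=0)
r0 = np.hypot(*(pts - c0).T).mean()
fit = least_squares(residuals, x0=[c0[0], c0[1], r0], method='lm')
print(np.round(fit.x, 2))   # ≈ [ 3.  -1.   2.]
```

Sphere and cylinder fits follow the same pattern with larger parameter vectors (center plus radius, or axis point, direction and radius).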
NASA Technical Reports Server (NTRS)
Heaslet, Max A; Lomax, Harvard
1950-01-01
Following the introduction of the linearized partial differential equation for nonsteady three-dimensional compressible flow, general methods of solution are given for the two- and three-dimensional steady-state and two-dimensional unsteady-state equations. It is also pointed out that, in the absence of thickness effects, linear theory yields solutions consistent with the assumptions made when applied to lifting-surface problems for swept-back plan forms at sonic speeds. The solutions of the particular equations are determined in all cases by means of Green's theorem, and thus depend on the use of Green's equivalent layer of sources, sinks, and doublets. Improper integrals in the supersonic theory are treated by means of Hadamard's "finite part" technique.
Cobb, J.W.
1995-02-01
There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.
Exact analysis to any order of the linear coupling problem in the thin lens model
Ruggiero, A.G.
1991-12-31
In this report we attempt the exact solution of the motion of a charged particle in a circular accelerator under the effects of skew quadrupole errors. We adopt a model of error distributions lumped in locations with zero extension. This thin-lens approximation provides analytical insight into the problem to any order. The total solution is expressed in terms of driving terms, which are actually correlation factors of several orders. An application follows on the calculation and correction of tune-splitting and on an estimate of the role the higher-order terms play in the correction method.
NASA Astrophysics Data System (ADS)
Bonometto, Silvio A.; Mainini, Roberto; Macciò, Andrea V.
2015-10-01
In this first paper we discuss the linear theory and the background evolution of a new class of models we dub SCDEW: Strongly Coupled DE plus WDM. In these models, WDM dominates today's matter density; like baryons, WDM is uncoupled. Dark energy is a scalar field Φ; its coupling to an ancillary cold dark matter (CDM) component, whose present density is ≪1 per cent, is an essential model feature. Such coupling, in fact, allows the formation of cosmic structures in spite of very low WDM particle masses (˜100 eV). SCDEW models yield cosmic microwave background and linear large-scale features substantially indistinguishable from ΛCDM, but thanks to the very low WDM masses they strongly alleviate the ΛCDM issues on small scales, as confirmed via numerical simulations in the second associated paper. Moreover, SCDEW cosmologies significantly ease the coincidence and fine-tuning problems of ΛCDM and, by using a field theory approach, we also outline possible links with inflationary models. We also discuss a possible fading of the coupling at low redshifts, which prevents non-linearities in the CDM component from causing computational problems. The (possible) low-z coupling suppression, its mechanism, and its consequences are, however, still open questions - not necessarily problems - for SCDEW models. The coupling intensity and the WDM particle mass, although being extra parameters with respect to ΛCDM, are found to be substantially constrained a priori, so that, if SCDEW is the underlying cosmology, we expect most data to fit ΛCDM predictions as well.
NASA Technical Reports Server (NTRS)
Parker, Peter A.; Geoffrey, Vining G.; Wilson, Sara R.; Szarka, John L., III; Johnson, Nels G.
2010-01-01
The calibration of measurement systems is a fundamental but under-studied problem within industrial statistics. The origins of this problem go back to basic chemical analysis based on NIST standards. In today's world these issues extend to mechanical, electrical, and materials engineering. Often, these new scenarios do not provide "gold standards" such as the standard weights provided by NIST. This paper considers the classic "forward regression followed by inverse regression" approach. In this approach the initial experiment treats the "standards" as the regressor and the observed values as the response to calibrate the instrument. The analyst then must invert the resulting regression model in order to use the instrument to make actual measurements in practice. This paper compares this classical approach to "reverse regression," which treats the standards as the response and the observed measurements as the regressor in the calibration experiment. Such an approach is intuitively appealing because it avoids the need for the inverse regression. However, it also violates some of the basic regression assumptions.
Extended cubic B-spline method for solving a linear system of second-order boundary value problems.
Heilat, Ahmed Salem; Hamid, Nur Nadiah Abd; Ismail, Ahmad Izani Md
2016-01-01
A method based on extended cubic B-spline is proposed to solve a linear system of second-order boundary value problems. In this method, two free parameters, [Formula: see text] and [Formula: see text], play an important role in producing accurate results. Optimization of these parameters is carried out, and the truncation error is calculated. The method is tested on three examples. The examples suggest that it produces comparable or more accurate results than the cubic B-spline and some other methods. PMID:27547688
NASA Technical Reports Server (NTRS)
Antar, B. N.
1976-01-01
A numerical technique is presented for locating the eigenvalues of two point linear differential eigenvalue problems. The technique is designed to search for complex eigenvalues belonging to complex operators. With this method, any domain of the complex eigenvalue plane could be scanned and the eigenvalues within it, if any, located. For an application of the method, the eigenvalues of the Orr-Sommerfeld equation of the plane Poiseuille flow are determined within a specified portion of the c-plane. The eigenvalues for alpha = 1 and R = 10,000 are tabulated and compared for accuracy with existing solutions.
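The idea of scanning a domain of the complex plane for eigenvalues can be illustrated with the argument principle: the winding number of an analytic function around a closed contour counts the zeros inside it. This is a minimal sketch of that counting step, not the NTRS report's scheme; it assumes no zero lies on the boundary, `count_zeros` is a hypothetical name, and the test function z² + 1 stands in for a real dispersion relation such as an Orr-Sommerfeld characteristic determinant.

```python
import numpy as np

def count_zeros(f, x0, x1, y0, y1, n=800):
    """Count zeros of analytic f inside the rectangle [x0,x1] x [y0,y1]
    via the argument principle (winding number of f around the boundary)."""
    # traverse the rectangle boundary counter-clockwise
    bottom = np.linspace(complex(x0, y0), complex(x1, y0), n)
    right  = np.linspace(complex(x1, y0), complex(x1, y1), n)
    top    = np.linspace(complex(x1, y1), complex(x0, y1), n)
    left   = np.linspace(complex(x0, y1), complex(x0, y0), n)
    w = f(np.concatenate([bottom, right, top, left]))
    # wrapped phase increments along the closed contour sum to 2*pi*N
    dphi = np.angle(w[1:] / w[:-1])
    return int(round(dphi.sum() / (2.0 * np.pi)))

# z^2 + 1 has zeros at +i and -i
n_upper = count_zeros(lambda z: z * z + 1.0, -2.0, 2.0, 0.5, 2.0)   # contains +i only
n_both  = count_zeros(lambda z: z * z + 1.0, -2.0, 2.0, -2.0, 2.0)  # contains both
```

Once a sub-domain is known to contain a zero, it can be bisected and re-counted until the eigenvalue is localized to any desired accuracy.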
Modeling Granular Materials as Compressible Non-Linear Fluids: Heat Transfer Boundary Value Problems
Massoudi, M.C.; Tran, P.X.
2006-01-01
We discuss three boundary value problems in the flow and heat transfer analysis in flowing granular materials: (i) the flow down an inclined plane with radiation effects at the free surface; (ii) the natural convection flow between two heated vertical walls; (iii) the shearing motion between two horizontal flat plates with heat conduction. It is assumed that the material behaves like a continuum, similar to a compressible nonlinear fluid where the effects of density gradients are incorporated in the stress tensor. For a fully developed flow the equations are simplified to a system of three nonlinear ordinary differential equations. The equations are made dimensionless and a parametric study is performed where the effects of various dimensionless numbers representing the effects of heat conduction, viscous dissipation, radiation, and so forth are presented.
Samet Y. Kadioglu; Robert R. Nourgaliev; Vincent A. Mousseau
2008-03-01
We perform a comparative study of harmonic versus arithmetic averaging of the heat conduction coefficient when solving non-linear heat transfer problems. In the literature, the harmonic average is the method of choice, because it is widely believed to be the more accurate model. However, our analysis reveals that this is not necessarily true. For instance, we show a case in which the harmonic average is less accurate when a coarser mesh is used. More importantly, we demonstrate that if the boundary layers are finely resolved, then the harmonic and arithmetic averaging techniques are identical in the truncation error sense. Our analysis further reveals that the accuracy of these two techniques depends on how the physical problem is modeled.
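The basic contrast can be reproduced in a few lines of finite-volume code for a two-material bar: with the conductivity jump located on a cell face, the harmonic face average recovers the exact series-resistance flux on any mesh, while the arithmetic average overestimates it. A sketch under illustrative assumptions (linear conduction, `solve_flux` a hypothetical name), not the authors' code.

```python
import numpy as np

def solve_flux(k_cells, avg):
    """1D steady conduction on a unit bar, T(0)=0, T(1)=1, cell-centred
    finite volumes; face conductivity by 'harmonic' or 'arithmetic' mean."""
    n = len(k_cells)
    dx = 1.0 / n
    mean = (lambda a, b: 2.0 * a * b / (a + b)) if avg == 'harmonic' \
        else (lambda a, b: 0.5 * (a + b))
    kf = np.empty(n + 1)                      # face conductivities
    kf[0], kf[-1] = k_cells[0], k_cells[-1]   # boundary faces: nearest cell value
    for i in range(1, n):
        kf[i] = mean(k_cells[i - 1], k_cells[i])
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        aw = kf[i] / dx if i > 0 else 2.0 * kf[0] / dx        # half cell at wall
        ae = kf[i + 1] / dx if i < n - 1 else 2.0 * kf[-1] / dx
        A[i, i] = aw + ae
        if i > 0:
            A[i, i - 1] = -aw
        if i < n - 1:
            A[i, i + 1] = -ae
        else:
            b[i] = ae * 1.0                   # Dirichlet value T(1) = 1
    T = np.linalg.solve(A, b)
    return 2.0 * kf[0] / dx * T[0]            # flux magnitude through x = 0

k = np.array([1.0, 1.0, 10.0, 10.0])          # conductivity jump at x = 0.5
q_harm  = solve_flux(k, 'harmonic')           # exact: 1 / (0.5/1 + 0.5/10)
q_arith = solve_flux(k, 'arithmetic')
```

The abstract's caveat is the interesting part: once the coefficient varies smoothly and the steep layers are resolved by the mesh, the two means agree to truncation-error order, so the textbook preference for the harmonic mean is not universal.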
Generalized Uncertainty Quantification for Linear Inverse Problems in X-ray Imaging
Fowler, Michael James
2014-04-25
In industrial and engineering applications, X-ray radiography has attained wide use as a data collection protocol for the assessment of material properties in cases where direct observation is not possible. The direct measurement of nuclear materials, particularly when they are under explosive or implosive loading, is not feasible, and radiography can serve as a useful tool for obtaining indirect measurements. In such experiments, high energy X-rays are pulsed through a scene containing material of interest, and a detector records a radiograph by measuring the radiation that is not attenuated in the scene. One approach to the analysis of these radiographs is to model the imaging system as an operator that acts upon the object being imaged to produce a radiograph. In this model, the goal is to solve an inverse problem to reconstruct the values of interest in the object, which are typically material properties such as density or areal density. The primary objective in this work is to provide quantitative solutions with uncertainty estimates for three separate applications in X-ray radiography: deconvolution, Abel inversion, and radiation spot shape reconstruction. For each problem, we introduce a new hierarchical Bayesian model for determining a posterior distribution on the unknowns and develop efficient Markov chain Monte Carlo (MCMC) methods for sampling from the posterior. A Poisson likelihood, based on a noise model for photon counts at the detector, is combined with a prior tailored to each application: an edge-localizing prior for deconvolution; a smoothing prior with non-negativity constraints for spot reconstruction; and a full covariance sampling prior based on a Wishart hyperprior for Abel inversion. After developing our methods in a general setting, we demonstrate each model on both synthetically generated datasets, including those from a well known radiation transport code, and real high energy radiographs taken at two U. S. Department of Energy
Horizon complementarity in elliptic de Sitter space
NASA Astrophysics Data System (ADS)
Hackl, Lucas; Neiman, Yasha
2015-02-01
We study a quantum field in elliptic de Sitter space dS4/Z2—the spacetime obtained from identifying antipodal points in dS4. We find that the operator algebra and Hilbert space cannot be defined for the entire space, but only for observable causal patches. This makes the system into an explicit realization of the horizon complementarity principle. In the absence of a global quantum theory, we propose a recipe for translating operators and states between observers. This translation involves information loss, in accordance with the fact that two observers see different patches of the spacetime. As a check, we recover the thermal state at the de Sitter temperature as a state that appears the same to all observers. This thermal state arises from the same functional that, in ordinary dS4, describes the Bunch-Davies vacuum.
NASA Astrophysics Data System (ADS)
Grayver, Alexander V.; Kuvshinov, Alexey V.
2016-02-01
This paper presents a methodology to sample the equivalence domain (ED) in non-linear PDE-constrained inverse problems. For this purpose, we first applied the state-of-the-art stochastic optimization algorithm called Covariance Matrix Adaptation Evolution Strategy (CMAES) to identify low-misfit regions of the model space. These regions were then randomly sampled to create an ensemble of equivalent models and quantify uncertainty. CMAES is aimed at exploring the model space globally and is robust on very ill-conditioned problems. We show that the number of iterations required to converge grows at a moderate rate with respect to the number of unknowns and that the algorithm is embarrassingly parallel. We formulated the problem by using the generalized Gaussian distribution. This enabled us to seamlessly use arbitrary norms for the residual and regularization terms. We show that various regularization norms facilitate studying different classes of equivalent solutions. We further show how the performance of the standard Metropolis-Hastings Markov chain Monte Carlo (MCMC) algorithm can be substantially improved by using the information CMAES provides. This methodology was tested by using individual and joint inversions of Magnetotelluric, Controlled-Source Electromagnetic (EM) and Global EM induction data.
NASA Astrophysics Data System (ADS)
Khan, Junaid Ali; Zahoor Raja, Muhammad Asif; Rashidi, Mohammad Mehdi; Syam, Muhammad Ibrahim; Majid Wazwaz, Abdul
2015-10-01
In this research, the well-known non-linear Lane-Emden-Fowler (LEF) equations are solved approximately by developing a nature-inspired stochastic computational intelligence algorithm. A trial solution of the model is formulated as an artificial feed-forward neural network containing unknown adjustable parameters. From the LEF equation and its initial conditions, an energy function is constructed that is used in the algorithm for the optimisation of the networks in an unsupervised way. The proposed scheme is tested successfully on various initial value problems of LEF equations. The reliability and effectiveness of the scheme are validated through comprehensive statistical analysis. The obtained numerical results are in good agreement with the corresponding exact solutions, which confirms the enhancement made by the proposed approach.
NASA Astrophysics Data System (ADS)
Barutello, Vivina; Jadanza, Riccardo D.; Portaluri, Alessandro
2016-01-01
It is well known that the linear stability of the Lagrangian elliptic solutions in the classical planar three-body problem depends on a mass parameter β and on the eccentricity e of the orbit. We consider only the circular case ( e = 0) but under the action of a broader family of singular potentials: α-homogeneous potentials, for α in (0, 2), and the logarithmic one. It turns out indeed that the Lagrangian circular orbit persists also in this more general setting. We discover a region of linear stability expressed in terms of the homogeneity parameter α and the mass parameter β, then we compute the Morse index of this orbit and of its iterates and we find that the boundary of the stability region is the envelope of a family of curves on which the Morse indices of the iterates jump. In order to conduct our analysis we rely on a Maslov-type index theory devised and developed by Y. Long, X. Hu and S. Sun; a key role is played by an appropriate index theorem and by some precise computations of suitable Maslov-type indices.
NASA Technical Reports Server (NTRS)
Patera, Anthony T.; Paraschivoiu, Marius
1998-01-01
We present a finite element technique for the efficient generation of lower and upper bounds to outputs which are linear functionals of the solutions to the incompressible Stokes equations in two space dimensions; the finite element discretization is effected by Crouzeix-Raviart elements, the discontinuous pressure approximation of which is central to our approach. The bounds are based upon the construction of an augmented Lagrangian: the objective is a quadratic "energy" reformulation of the desired output; the constraints are the finite element equilibrium equations (including the incompressibility constraint), and the intersubdomain continuity conditions on velocity. Appeal to the dual max-min problem for appropriately chosen candidate Lagrange multipliers then yields inexpensive bounds for the output associated with a fine-mesh discretization; the Lagrange multipliers are generated by exploiting an associated coarse-mesh approximation. In addition to the requisite coarse-mesh calculations, the bound technique requires solution only of local subdomain Stokes problems on the fine-mesh. The method is illustrated for the Stokes equations, in which the outputs of interest are the flowrate past, and the lift force on, a body immersed in a channel.
Jan Hesthaven
2012-02-06
Final report for DOE Contract DE-FG02-98ER25346 entitled Parallel High Order Accuracy Methods Applied to Non-Linear Hyperbolic Equations and to Problems in Materials Sciences. Principal Investigator Jan S. Hesthaven Division of Applied Mathematics Brown University, Box F Providence, RI 02912 Jan.Hesthaven@Brown.edu February 6, 2012 Note: This grant was originally awarded to Professor David Gottlieb and the majority of the work envisioned reflects his original ideas. However, when Prof Gottlieb passed away in December 2008, Professor Hesthaven took over as PI to ensure proper mentoring of students and postdoctoral researchers already involved in the project. This unusual circumstance has naturally impacted the project and its timeline. However, as the report reflects, the planned work has been accomplished and some activities beyond the original scope have been pursued with success. Project overview and main results The effort in this project focuses on the development of high order accurate computational methods for the solution of hyperbolic equations with application to problems with strong shocks. While the methods are general, emphasis is on applications to gas dynamics with strong shocks.
A toy model of black hole complementarity
NASA Astrophysics Data System (ADS)
Banerjee, Souvik; Bryan, Jan-Willem; Papadodimas, Kyriakos; Raju, Suvrat
2016-05-01
We consider the algebra of simple operators defined in a time band in a CFT with a holographic dual. When the band is smaller than the light crossing time of AdS, an entire causal diamond in the center of AdS is separated from the band by a horizon. We show that this algebra obeys a version of the Reeh-Schlieder theorem: the action of the algebra on the CFT vacuum can approximate any low energy state in the CFT arbitrarily well, but no operator within the algebra can exactly annihilate the vacuum. We show how to relate local excitations in the complement of the central diamond to simple operators in the band. Local excitations within the diamond are invisible to the algebra of simple operators in the band by causality, but can be related to complicated operators called "precursors". We use the Reeh-Schlieder theorem to write down a simple and explicit formula for these precursors on the boundary. We comment on the implications of our results for black hole complementarity and the emergence of bulk locality from the boundary.
Bohr's Principle of Complementarity and Beyond
NASA Astrophysics Data System (ADS)
Jones, R.
2004-05-01
All knowledge is of an approximate character and always will be (Russell, Human Knowledge, 1948, pg 497, 507). The laws of nature are not unique (Smolin, Three Roads to Quantum Gravity, 2001, pg 195). There may be a number of different sets of equations which describe our data just as well as the present known laws do (Mitchell, Machine Learning, 1997, pg 65-66 and Cooper, Machine Learning, Vol. 9, 1992, pg 319). In the future every field of intellectual study will possess multiple theories of its domain, and scientific work and engineering will be performed based on the ensemble predictions of ALL of these. In some cases the theories may be quite divergent, differing greatly one from the other. The idea can be considered an extension of Bohr's notion of complementarity: "...different experimental arrangements... described by different physical concepts... together and only together exhaust the definable information we can obtain about the object" (Folse, The Philosophy of Niels Bohr, 1985, pg 238). This idea is not postmodernism. Witch doctors' theories will not form a part of medical science. Objective data, not human opinion, will decide which theories we use and how we weight their predictions.
Quark lepton complementarity and renormalization group effects
Schmidt, Michael A.; Smirnov, Alexei Yu.
2006-12-01
We consider a scenario for the quark-lepton complementarity relations between mixing angles in which the bimaximal mixing follows from the neutrino mass matrix. According to this scenario, in the lowest order the angle θ12 is ≈1σ (1.5°-2°) above the best-fit point, coinciding practically with the tribimaximal mixing prediction. Realization of this scenario in the context of the seesaw type-I mechanism with leptonic Dirac mass matrices approximately equal to the quark mass matrices is studied. We calculate the renormalization group corrections to θ12 as well as to θ13 in the standard model (SM) and minimal supersymmetric standard model (MSSM). We find that in a large part of the parameter space the corrections δθ12 are small or negligible. In the MSSM version of the scenario, the correction δθ12 is in general positive. Small negative corrections appear in the case of an inverted mass hierarchy and opposite CP parities of ν1 and ν2, when the leading contributions to the θ12 running are strongly suppressed. The corrections are negative in the SM version in a large part of the parameter space for values of the relative CP phase of ν1 and ν2: φ > π/2.
NASA Technical Reports Server (NTRS)
Lee, Y. M.
1971-01-01
Using a linearized theory of thermally and mechanically interacting mixture of linear elastic solid and viscous fluid, we derive a fundamental relation in an integral form called a reciprocity relation. This reciprocity relation relates the solution of one initial-boundary value problem with a given set of initial and boundary data to the solution of a second initial-boundary value problem corresponding to a different initial and boundary data for a given interacting mixture. From this general integral relation, reciprocity relations are derived for a heat-conducting linear elastic solid, and for a heat-conducting viscous fluid. An initial-boundary value problem is posed and solved for the mixture of linear elastic solid and viscous fluid. With the aid of the Laplace transform and the contour integration, a real integral representation for the displacement of the solid constituent is obtained as one of the principal results of the analysis.
Addona, Davide
2015-08-15
We obtain weighted uniform estimates for the gradient of the solutions to a class of linear parabolic Cauchy problems with unbounded coefficients. Such estimates are then used to prove existence and uniqueness of the mild solution to a semi-linear backward parabolic Cauchy problem, where the differential equation is the Hamilton–Jacobi–Bellman equation of a suitable optimal control problem. Via backward stochastic differential equations, we show that the mild solution is indeed the value function of the controlled equation and that the feedback law is verified.
NASA Technical Reports Server (NTRS)
Hall, Philip
1989-01-01
Goertler vortices are thought to be the cause of transition in many fluid flows of practical importance. A review of the different stages of vortex growth is given. In the linear regime, nonparallel effects completely govern this growth, and parallel flow theories do not capture the essential features of the development of the vortices. A detailed comparison between the parallel and nonparallel theories is given, and it is shown that at small vortex wavelengths the parallel flow theories have some validity; otherwise nonparallel effects are dominant. New results for the receptivity problem for Goertler vortices are given; in particular, vortices induced by free stream perturbations impinging on the leading edge of the walls are considered. It is found that the most dangerous mode of this type can be isolated and its neutral curve is determined. This curve agrees very closely with the available experimental data. A discussion of the different regimes of growth of nonlinear vortices is also given. Again it is shown that, unless the vortex wavelength is small, nonparallel effects are dominant. Some new results for nonlinear vortices of O(1) wavelengths are given and compared to experimental observations.
NASA Astrophysics Data System (ADS)
Monticelli, Dario D.; Rodney, Scott
2015-10-01
In this paper we study existence and spectral properties for weak solutions of Neumann and Dirichlet problems associated with second order linear degenerate elliptic partial differential operators X with rough coefficients, of the form X = -div(P∇) + HR + S′G + F, where the n × n matrix function P = P(x) is nonnegative definite and allowed to degenerate, R, S are families of subunit vector fields, G, H are vector valued functions and F is a scalar function. We operate in a geometric homogeneous space setting and we assume the validity of certain Sobolev and Poincaré inequalities related to a symmetric nonnegative definite matrix of weights Q = Q(x) that is comparable to P; we do not assume that the underlying measure is doubling. We give a maximum principle for weak solutions of Xu ≤ 0, and we follow this with a result describing a relationship between compact projection of the degenerate Sobolev space QH^{1,p}, related to the matrix of weights Q, into L^q and a Poincaré inequality with gain adapted to Q.
NASA Astrophysics Data System (ADS)
Shepherd, James J.; Scuseria, Gustavo E.; Spencer, James S.
2014-10-01
We investigate the sign problem for full configuration interaction quantum Monte Carlo (FCIQMC), a stochastic algorithm for finding the ground-state solution of the Schrödinger equation with substantially reduced computational cost compared with exact diagonalization. We find k-space Hubbard models for which the solution is yielded with storage that grows sublinearly in the size of the many-body Hilbert space, in spite of using a wave function that is simply a linear combination of states. The FCIQMC algorithm is able to find this sublinear scaling regime without bias and with only a choice of the Hamiltonian basis. By means of a demonstration we solve for the energy of a 70-site half-filled system (with a space of 10^38 determinants) in 250 core hours, substantially quicker than the ~10^36 core hours that would be required by exact diagonalization. This is the largest space that has been sampled in an unbiased fashion. The challenge for the recently developed FCIQMC method is made clear: expand the sublinear scaling regime while retaining exact-on-average accuracy. We comment upon the relationship between this and the scaling law previously observed in the initiator adaptation (i-FCIQMC). We argue that our results change the landscape for the development of FCIQMC and related methods.
NASA Astrophysics Data System (ADS)
Helman, E. Udi
This dissertation conducts research into the large-scale simulation of oligopolistic competition in wholesale electricity markets. The dissertation has two parts. Part I is an examination of the structure and properties of several spatial, or network, equilibrium models of oligopolistic electricity markets formulated as mixed linear complementarity problems (LCP). Part II is a large-scale application of such models to the electricity system that encompasses most of the United States east of the Rocky Mountains, the Eastern Interconnection. Part I consists of Chapters 1 to 6. The models developed in this part continue research into mixed LCP models of oligopolistic electricity markets initiated by Hobbs [67] and subsequently developed by Metzler [87] and Metzler, Hobbs and Pang [88]. Hobbs' central contribution is a network market model with Cournot competition in generation and a price-taking spatial arbitrage firm that eliminates spatial price discrimination by the Cournot firms. In one variant, the solution to this model is shown to be equivalent to the "no arbitrage" condition in a "pool" market, in which a Regional Transmission Operator optimizes spot sales such that the congestion price between two locations is exactly equivalent to the difference in the energy prices at those locations (commonly known as locational marginal pricing). Extensions to this model are presented in Chapters 5 and 6. One of these is a market model with a profit-maximizing arbitrage firm. This model is structured as a mathematical program with equilibrium constraints (MPEC), but due to the linearity of its constraints, can be solved as a mixed LCP. Part II consists of Chapters 7 to 12. The core of these chapters is a large-scale simulation of the U.S. Eastern Interconnection applying one of the Cournot competition with arbitrage models. This is the first oligopolistic equilibrium market model to encompass the full Eastern Interconnection with a realistic network representation (using
NASA Astrophysics Data System (ADS)
Benzaouia, Abdellah; Ouladsine, Mustapha; Ananou, Bouchra
2014-10-01
In this paper, the fault tolerant control problem for discrete-time switching systems with delay is studied. Sufficient conditions for building an observer are obtained by using a multiple Lyapunov function. These conditions are worked out in a new way, using the cone complementarity technique, to obtain new LMIs with slack variables and multiple weighted residual matrices. The obtained results are applied to a numerical example showing fault detection, localisation of the fault, and reconfiguration of the control to maintain asymptotic stability even in the presence of a permanent sensor fault.
NASA Astrophysics Data System (ADS)
Hawthorne, Bryant; Panchal, Jitesh H.
2014-07-01
A bilevel optimization formulation of policy design problems considering multiple objectives and incomplete preferences of the stakeholders is presented. The formulation is presented for Feed-in-Tariff (FIT) policy design for decentralized energy infrastructure. The upper-level problem is the policy designer's problem and the lower-level problem is a Nash equilibrium problem resulting from market interactions. The policy designer has two objectives: maximizing the quantity of energy generated and minimizing policy cost. The stakeholders decide on quantities while maximizing net present value and minimizing capital investment. The Nash equilibrium problem in the presence of incomplete preferences is formulated as a stochastic linear complementarity problem and solved using expected value formulation, expected residual minimization formulation, and the Monte Carlo technique. The primary contributions in this article are the mathematical formulation of the FIT policy, the extension of computational policy design problems to multiple objectives, and the consideration of incomplete preferences of stakeholders for policy design problems.
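The deterministic building block underneath such formulations is the linear complementarity problem (LCP): find z ≥ 0 with w = Mz + q ≥ 0 and zᵀw = 0. A projected Gauss-Seidel sweep is one simple way to solve it. This is a minimal sketch of that deterministic core only (the article's stochastic formulation and expected-value/residual treatments are beyond it); `lcp_pgs` is a hypothetical name, and convergence is assumed here via a symmetric positive definite M.

```python
import numpy as np

def lcp_pgs(M, q, iters=500):
    """Projected Gauss-Seidel for the LCP: find z >= 0 with
    w = M z + q >= 0 and z . w = 0 (complementarity)."""
    n = len(q)
    z = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            # residual with z[i]'s own contribution removed,
            # then project the unconstrained update onto z[i] >= 0
            r = q[i] + M[i] @ z - M[i, i] * z[i]
            z[i] = max(0.0, -r / M[i, i])
    return z

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])       # symmetric positive definite
q = np.array([-5.0, -6.0])
z = lcp_pgs(M, q)
w = M @ z + q                    # complementary slack variables
```

At every solution, each index i has either z[i] = 0 or w[i] = 0; in the example above the solution is interior (z > 0, w = 0), which corresponds to all complementarity constraints binding on the w side.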
Information complementarity in multipartite quantum states and security in cryptography
NASA Astrophysics Data System (ADS)
Bera, Anindita; Kumar, Asutosh; Rakshit, Debraj; Prabhu, R.; SenDe, Aditi; Sen, Ujjwal
2016-03-01
We derive complementarity relations for arbitrary quantum states of multiparty systems of any number of parties and dimensions between the purity of a part of the system and several correlation quantities, including entanglement and other quantum correlations as well as classical and total correlations, of that part with the remainder of the system. We subsequently use such a complementarity relation between purity and quantum mutual information in the tripartite scenario to provide a bound on the secret key rate for individual attacks on a quantum key distribution protocol.
ERIC Educational Resources Information Center
Nakhanu, Shikuku Beatrice; Musasia, Amadalo Maurice
2015-01-01
The topic Linear Programming is included in the compulsory Kenyan secondary school mathematics curriculum at form four. The topic provides skills for determining best outcomes in a given mathematical model involving some linear relationship. This technique has found application in business, economics as well as various engineering fields. Yet many…
ERIC Educational Resources Information Center
Strickland, Tricia K.; Maccini, Paula
2013-01-01
We examined the effects of the Concrete-Representational-Abstract Integration strategy on the ability of secondary students with learning disabilities to multiply linear algebraic expressions embedded within contextualized area problems. A multiple-probe design across three participants was used. Results indicated that the integration of the…
NASA Technical Reports Server (NTRS)
Utku, S.
1969-01-01
A general-purpose digital computer program for the in-core solution of linear equilibrium problems of structural mechanics is documented. The program requires minimal input for the description of the problem. The solution is obtained by means of the displacement method and the finite element technique. Almost any geometry and structure may be handled because of the availability of linear, triangular, quadrilateral, tetrahedral, hexahedral, conical, triangular torus, and quadrilateral torus elements. The assumption of piecewise linear deflection distribution ensures monotonic convergence of the deflections from the stiffer side with decreasing mesh size. Stresses are obtained from least-squares best-fit strain tensors at the mesh points where the deflections are computed. The selection of local coordinate systems, whenever necessary, is automatic. The core memory is used efficiently by means of dynamic memory allocation, an optional mesh-point relabelling scheme, and imposition of the boundary conditions during assembly.
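The displacement method the program implements can be sketched in drastically reduced form: assemble element stiffness matrices into a global matrix, impose boundary conditions, and solve for nodal displacements. Below is a toy 1D bar of two-node linear elements, illustrative only and not the documented program; the material and load values are made up:

```python
def bar_stiffness(n_elem, E, A, L):
    """Assemble the global stiffness matrix for a 1D bar split into
    n_elem equal two-node linear elements (element stiffness E*A/h)."""
    h = L / n_elem
    k = E * A / h
    n = n_elem + 1
    K = [[0.0] * n for _ in range(n)]
    for e in range(n_elem):
        K[e][e] += k
        K[e][e + 1] -= k
        K[e + 1][e] -= k
        K[e + 1][e + 1] += k
    return K

def solve_fixed_free(K, F):
    """Impose u_0 = 0 (fixed left end) and solve the reduced system
    K u = F by Gaussian elimination with back substitution."""
    n = len(K)
    A = [[K[i][j] for j in range(1, n)] for i in range(1, n)]
    b = list(F[1:])
    m = len(b)
    for i in range(m):                      # forward elimination
        for r in range(i + 1, m):
            f = A[r][i] / A[i][i]
            for c in range(i, m):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    u = [0.0] * m
    for i in range(m - 1, -1, -1):          # back substitution
        s = b[i] - sum(A[i][j] * u[j] for j in range(i + 1, m))
        u[i] = s / A[i][i]
    return [0.0] + u                        # prepend the fixed node

# made-up data: steel-like bar, tip load P at the free end
E, A, L, P = 200e9, 1e-4, 2.0, 1000.0
K = bar_stiffness(4, E, A, L)
F = [0.0] * 5
F[-1] = P
u = solve_fixed_free(K, F)
```

For this load case linear elements are exact, so the computed tip displacement matches the analytical value P·L/(E·A).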
NASA Technical Reports Server (NTRS)
Voorhies, Coerte V.
1993-01-01
The problem of estimating a steady fluid velocity field near the top of Earth's core which induces the secular variation (SV) indicated by models of the observed geomagnetic field is examined in the source-free mantle/frozen-flux core (SFM/FFC) approximation. This inverse problem is non-linear because solutions of the forward problem are deterministically chaotic. The SFM/FFC approximation is inexact, and neither the models nor the observations they represent are either complete or perfect. A method is developed for solving the non-linear inverse motional induction problem posed by the hypothesis of (piecewise, statistically) steady core surface flow and the supposition of a complete initial geomagnetic condition. The method features iterative solution of the weighted, linearized least-squares problem and admits optional biases favoring surficially geostrophic flow and/or spatially simple flow. Two types of weights are advanced: radial field weights for fitting the evolution of the broad-scale portion of the radial field component near Earth's surface implied by the models, and generalized weights for fitting the evolution of the broad-scale portion of the scalar potential specified by the models.
Couple Complementarity and Similarity: A Review of the Literature.
ERIC Educational Resources Information Center
White, Stephen G.; Hatcher, Chris
1984-01-01
Examines couple complementarity and similarity, and their relationship to dyadic adjustment, from three perspectives: social/psychological research, clinical populations research, and the observations of family therapists. Methodological criticisms are discussed suggesting that the evidence for a relationship between similarity and…
NASA Technical Reports Server (NTRS)
Barker, L. E., Jr.; Bowles, R. L.; Williams, L. H.
1973-01-01
High angular rates encountered in real-time flight simulation problems may require a more stable and accurate integration method than the classical methods normally used. A study was made to develop a general local linearization procedure for integrating dynamic system equations on a digital computer in real time. The procedure is specifically applied to the integration of the quaternion rate equations. For this application, results are compared to a classical second-order method. The local linearization approach is shown to have desirable stability characteristics and gives significant improvement in accuracy over classical second-order integration methods.
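The quaternion rate equations themselves are not reproduced in the abstract. As a hedged illustration of the local linearization idea, the sketch below compares a closed-form attitude update (exact when the body rate is constant over a step) against a classical explicit Euler step on q̇ = ½ q ⊗ (0, ω); the rate and step size are made-up values, and the update order follows the body-frame convention:

```python
import math

def quat_mul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def step_exact(q, w, dt):
    """Closed-form update for a constant body rate w over one step:
    q_{k+1} = q_k ⊗ exp(w*dt/2) -- the kind of locally linearized
    (piecewise-exact) update the study motivates."""
    wn = math.sqrt(sum(c * c for c in w))
    if wn < 1e-12:
        return q
    th = 0.5 * wn * dt
    s = math.sin(th) / wn
    return quat_mul(q, (math.cos(th), w[0]*s, w[1]*s, w[2]*s))

def step_euler(q, w, dt):
    """Classical explicit Euler on q' = (1/2) q ⊗ (0, w), renormalized."""
    dq = quat_mul(q, (0.0, w[0], w[1], w[2]))
    q = tuple(qi + 0.5 * dt * di for qi, di in zip(q, dq))
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)

# made-up case: spin at 10 rad/s about z for 1 s, 10 ms steps
q_exact = q_euler = (1.0, 0.0, 0.0, 0.0)
dt, w = 0.01, (0.0, 0.0, 10.0)
for _ in range(100):
    q_exact = step_exact(q_exact, w, dt)
    q_euler = step_euler(q_euler, w, dt)
```

For constant rate the closed-form update accumulates no truncation error (total half-angle 5 rad), while the Euler update drifts in phase even after renormalization, which is the high-rate accuracy gap the study addresses.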
NASA Astrophysics Data System (ADS)
Kiani, Keivan; Nikkhoo, Ali
2012-02-01
This paper deals with the capabilities of linear and nonlinear beam theories in predicting the dynamic response of an elastically supported thin beam traversed by a moving mass. To this end, the discrete equations of motion are developed based on Lagrange's equations via the reproducing kernel particle method (RKPM). For the particular case of a simply supported beam, the Galerkin method is also employed to verify the results obtained by RKPM, and reasonably good agreement is achieved. Variations of the maximum dynamic deflection and bending moment associated with the linear and nonlinear beam theories are investigated in terms of moving mass weight and velocity for various beam boundary conditions. It is demonstrated that for the majority of moving mass velocities, the differences between the results of linear and nonlinear analyses become remarkable as the moving mass weight increases, particularly at high moving mass velocities. Except for the cantilever beam, the nonlinear beam theory predicts a higher possibility of moving mass separation from the base beam compared to the linear one. Furthermore, the accuracy levels of the linear beam theory are determined for thin beams under large deflections and small rotations as a function of moving mass weight and velocity for various boundary conditions.
Graphing the Model or Modeling the Graph? Not-so-Subtle Problems in Linear IS-LM Analysis.
ERIC Educational Resources Information Center
Alston, Richard M.; Chi, Wan Fu
1989-01-01
Outlines the differences between the traditional and modern theoretical models of demand for money. States that the two models are often used interchangeably in textbooks, causing ambiguity. Argues against the use of linear specifications that imply that income velocity can increase without limit and that autonomous components of aggregate demand…
Sexual complementarity between host humoral toxicity and soldier caste in a polyembryonic wasp
Uka, Daisuke; Sakamoto, Takuma; Yoshimura, Jin; Iwabuchi, Kikuo
2016-01-01
Defense against enemies is a type of natural selection considered fundamentally equivalent between the sexes. In reality, however, whether males and females differ in defense strategy is unknown. Multiparasitism necessarily leads to the problem of defense for a parasite (parasitoid). The polyembryonic parasitic wasp Copidosoma floridanum is famous for its larval soldiers’ ability to kill other parasites. This wasp also exhibits sexual differences not only with regard to the competitive ability of the soldier caste but also with regard to host immune enhancement. Female soldiers are more aggressive than male soldiers, and their numbers increase upon invasion of the host by other parasites. In this report, in vivo and in vitro competition assays were used to test whether females have a toxic humoral factor; if so, then its strength was compared with that of males. We found that females have a toxic factor that is much weaker than that of males. Our results imply sexual complementarity between host humoral toxicity and larval soldiers. We discuss how this sexual complementarity guarantees adaptive advantages for both males and females despite the one-sided killing of male reproductives by larval female soldiers in a mixed-sex brood. PMID:27385149
NASA Astrophysics Data System (ADS)
Turkin, Alexander; van Oijen, Antoine M.; Turkin, Anatoliy A.
2015-11-01
One-dimensional sliding along DNA as a means to accelerate protein target search is a well-known phenomenon occurring in various biological systems. Using a biomimetic approach, we have recently demonstrated the practical use of DNA-sliding peptides to speed up bimolecular reactions more than an order of magnitude by allowing the reactants to associate not only in the solution by three-dimensional (3D) diffusion, but also on DNA via one-dimensional (1D) diffusion [A. Turkin et al., Chem. Sci. (2015), 10.1039/C5SC03063C]. Here we present a mean-field kinetic model of a bimolecular reaction in a solution with linear extended sinks (e.g., DNA) that can intermittently trap molecules present in a solution. The model consists of chemical rate equations for mean concentrations of reacting species. Our model demonstrates that addition of linear traps to the solution can significantly accelerate reactant association. We show that at optimum concentrations of linear traps the 1D reaction pathway dominates in the kinetics of the bimolecular reaction; i.e., these 1D traps function as an assembly line of the reaction product. Moreover, we show that the association reaction on linear sinks between trapped reactants exhibits a nonclassical third-order behavior. Predictions of the model agree well with our experimental observations. Our model provides a general description of bimolecular reactions that are controlled by a combined 3D+1D mechanism and can be used to quantitatively describe both naturally occurring as well as biomimetic biochemical systems that reduce the dimensionality of search.
Saul Rosenzweig's purview: from experimenter/experimentee complementarity to idiodynamics.
Rosenzweig, Saul
2004-06-01
Following a brief personal biography, an exposition of Saul Rosenzweig's scientific contributions is presented. Starting in 1933 with experimenter/experimentee complementarity, this point of view was extended to implicit common factors in psychotherapy (Rosenzweig, 1936), then to the complementary pattern of the so-called schools of psychology (Rosenzweig, 1937). Similarly, converging approaches in personality theory emerged as another type of complementarity (Rosenzweig, 1944a). The three types of norms (nomothetic, demographic, and idiodynamic) within the range of dynamic human behavior were formulated and led to idiodynamics as a successor to personality theory. This formulation included the concept of the idioverse, defined as a self-creative and experiential population of events, which opened up a methodology (psychoarcheology) for reconstructing the creativity of outstanding scientific and artistic craftsmen like William James and Sigmund Freud among psychologists, and Henry James, Herman Melville, and Nathaniel Hawthorne among writers of fiction. PMID:15151802
NASA Technical Reports Server (NTRS)
Tuey, R. C.
1972-01-01
Computer solutions of linear programming problems are outlined. Information covers vector spaces, convex sets, and matrix algebra elements for solving simultaneous linear equations. Dual problems, reduced cost analysis, ranges, and error analysis are illustrated.
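As a toy companion to the outline above: a tiny two-variable LP solved by enumerating vertices of the feasible polygon. This is only an illustration of what an LP solution looks like, not the simplex machinery the report documents; the objective and constraints are made up:

```python
from itertools import combinations

def solve_lp_2d(c, A, b):
    """Maximize c.x subject to A x <= b for x in R^2 by enumerating
    pairwise constraint intersections and keeping the best feasible
    vertex (fine for tiny problems; real codes use the simplex method)."""
    best, best_x = None, None
    for i, j in combinations(range(len(A)), 2):
        a1, a2 = A[i], A[j]
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < 1e-12:
            continue                        # parallel constraint lines
        # Cramer's rule for the intersection of the two boundary lines
        x = ((b[i] * a2[1] - a1[1] * b[j]) / det,
             (a1[0] * b[j] - b[i] * a2[0]) / det)
        if all(A[k][0] * x[0] + A[k][1] * x[1] <= b[k] + 1e-9
               for k in range(len(A))):
            v = c[0] * x[0] + c[1] * x[1]
            if best is None or v > best:
                best, best_x = v, x
    return best, best_x

# made-up LP: maximize 3x + 2y s.t. x + y <= 4, x <= 2, x >= 0, y >= 0
A = [[1, 1], [1, 0], [-1, 0], [0, -1]]
b = [4, 2, 0, 0]
val, x = solve_lp_2d([3, 2], A, b)
# the optimum sits at the vertex (2, 2) with objective value 10
```

The fact that the optimum lands on a vertex of the feasible region is exactly the geometric property (convex sets, extreme points) that the simplex method exploits.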
NASA Astrophysics Data System (ADS)
Tanaka, Hidefumi; Yamamoto, Yuhji
2016-05-01
Palaeointensity experiments were carried out on a sample collection from two sections of basalt lava flow sequences of Pliocene age in north central Iceland (Chron C2An) to further refine knowledge of the behaviour of the palaeomagnetic field. Selection of samples was mainly based on their stability of remanence to thermal demagnetization as well as good reversibility in variations of magnetic susceptibility and saturation magnetization with temperature, which would indicate the presence of magnetite as a product of deuteric oxidation of titanomagnetite. Among 167 lava flows from the two sections, 44 flows were selected for the Königsberger-Thellier-Thellier experiment in vacuum. In spite of careful pre-selection of samples, an Arai plot with two linear segments, or a concave-up appearance, was often encountered during the experiments. This non-ideal behaviour was probably caused by an irreversible change in the domain state of the magnetic grains of the pseudo-single-domain (PSD) range. This is inferred because an ideal linear plot was obtained in the second run of the palaeointensity experiment, in which a laboratory thermoremanence acquired after the final step of the first run was used as the natural remanence. This experiment was conducted on six selected samples, and no clear difference between the magnetic grains of the experimented and pristine sister samples was found by scanning electron microscopy and hysteresis measurements, that is, no notable chemical/mineralogical alteration occurred, suggesting that no change in the grain size distribution had occurred. Hence, the two-segment Arai plot was not caused by the reversible multidomain/PSD effect in which the curvature of the Arai plot is dependent on the grain size. Considering that the irreversible change in domain state must have affected data points at not only high temperatures but also low temperatures, fv ≥ 0.5 was adopted as one of the acceptance criteria where fv is a vectorially defined
Spatio-temporal complementarity of wind and solar power in India
NASA Astrophysics Data System (ADS)
Lolla, Savita; Baidya Roy, Somnath; Chowdhury, Sourangshu
2015-04-01
Wind and solar power are likely to be part of the solution to the climate change problem, which is why they feature prominently in the energy policies of all industrial economies, including India. One of the major hindrances preventing explosive growth of wind and solar energy is intermittency. This is a major problem because, in a rapidly moving economy, energy production must match the patterns of energy demand. Moreover, sudden increases and decreases in energy supply may destabilize the power grids, leading to disruptions in power supply. In this work we explore whether the patterns of variability in wind and solar energy availability can offset each other so that a constant supply can be guaranteed. As a first step, this work focuses on seasonal-scale variability for each of the 5 regional power transmission grids in India. Communication within each grid is better than communication between grids; hence, it is assumed that the grids can switch sources relatively easily. Wind and solar resources are estimated using the MERRA Reanalysis data for the 1979-2013 period. Solar resources are calculated with a 20% conversion efficiency. Wind resources are estimated using a 2 MW turbine power curve. Total resources are obtained by optimizing the location and number of wind/solar energy farms. Preliminary results show that the southern and western grids are more appropriate for cogeneration than the other grids. Many studies on wind-solar cogeneration have focused on temporal complementarity at the local scale. However, this is one of the first studies to explore spatial complementarity over regional scales. This project may help accelerate renewable energy penetration in India by identifying the regional grid(s) where the renewable energy intermittency problem can be minimized.
NASA Technical Reports Server (NTRS)
Wong, P. K.
1975-01-01
The closely-related problems of designing reliable feedback stabilization strategy and coordinating decentralized feedbacks are considered. Two approaches are taken. A geometric characterization of the structure of control interaction (and its dual) was first attempted and a concept of structural homomorphism developed based on the idea of 'similarity' of interaction pattern. The idea of finding classes of individual feedback maps that do not 'interfere' with the stabilizing action of each other was developed by identifying the structural properties of nondestabilizing and LQ-optimal feedback maps. Some known stability properties of LQ-feedback were generalized and some partial solutions were provided to the reliable stabilization and decentralized feedback coordination problems. A concept of coordination parametrization was introduced, and a scheme for classifying different modes of decentralization (information, control law computation, on-line control implementation) in control systems was developed.
NASA Technical Reports Server (NTRS)
Bensoussan, A.; Delfour, M. C.; Mitter, S. K.
1976-01-01
Available published results are surveyed for a special class of infinite-dimensional control systems whose evolution is characterized by a semigroup of operators of class C₀. Emphasis is placed on an approach that clarifies the system-theoretic relationship among controllability, stabilizability, stability, and the existence of a solution to an associated operator equation of the Riccati type. Formulation of the optimal control problem is reviewed along with the asymptotic behavior of solutions to a general system of equations and several theorems concerning L2 stability. Examples are briefly discussed which involve second-order parabolic systems, first-order hyperbolic systems, and distributed boundary control.
Cameron, M.K.; Fomel, S.B.; Sethian, J.A.
2009-01-01
In the present work we derive and study a nonlinear elliptic PDE arising from the problem of estimating the sound speed inside the Earth. The physical setting of the PDE allows us to pose only a Cauchy problem, which is ill-posed. However, we are still able to solve it numerically on a time interval long enough to be of practical use. We used two approaches. The first approach is a finite-difference time-marching numerical scheme inspired by the Lax-Friedrichs method. The key features of this scheme are the Lax-Friedrichs averaging and the wide stencil in space. The second approach is a spectral Chebyshev method with truncated series. We show that our schemes work because of (1) the special input corresponding to a positive finite seismic velocity, (2) special initial conditions corresponding to the image rays, (3) the fact that our finite-difference scheme contains small error terms which damp the high harmonics (as does truncation of the Chebyshev series), and (4) the need to compute the solution only for a short interval of time. We test our numerical schemes on a collection of analytic examples and demonstrate a dramatic improvement in accuracy in the estimation of the sound speed inside the Earth in comparison with the conventional Dix inversion. Our test on the Marmousi example confirms the effectiveness of the proposed approach.
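The Lax-Friedrichs building block the authors adapt is easiest to see on the simplest possible problem: linear advection on a periodic grid. The sketch below is not the authors' scheme for their nonlinear PDE, only an illustration of the characteristic neighbour averaging whose numerical dissipation damps high harmonics:

```python
import math

def lax_friedrichs_advect(u, a, dx, dt, steps):
    """Classical Lax-Friedrichs scheme for u_t + a u_x = 0 on a
    periodic grid: replace u_i by the average of its neighbours,
    plus a centered flux term.  The averaging supplies the numerical
    dissipation that damps high harmonics."""
    n = len(u)
    lam = a * dt / (2.0 * dx)
    for _ in range(steps):
        u = [0.5 * (u[(i + 1) % n] + u[(i - 1) % n])
             - lam * (u[(i + 1) % n] - u[(i - 1) % n])
             for i in range(n)]
    return u

# made-up setup: advect a sine wave at unit speed, CFL number 0.5
n, a = 200, 1.0
dx = 1.0 / n
dt = 0.5 * dx / a
u0 = [math.sin(2 * math.pi * i * dx) for i in range(n)]
u = lax_friedrichs_advect(u0, a, dx, dt, steps=100)
```

After t = 100·dt the numerical profile is the initial sine translated by a·t, slightly reduced in amplitude by the scheme's numerical diffusion; the stability restriction is the usual CFL condition a·dt/dx ≤ 1.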
Hauck, Cory D; Alldredge, Graham; Tits, Andre
2012-01-01
We present a numerical algorithm to implement entropy-based (M_N) moment models in the context of a simple, linear kinetic equation for particles moving through a material slab. The closure for these models - as is the case for all entropy-based models - is derived through the solution of a constrained, convex optimization problem. The algorithm has two components. The first component is a discretization of the moment equations which preserves the set of realizable moments, thereby ensuring that the optimization problem has a solution (in exact arithmetic). The discretization is a second-order kinetic scheme which uses MUSCL-type limiting in space and a strong-stability-preserving Runge-Kutta time integrator. The second component of the algorithm is a Newton-based solver for the dual optimization problem, which uses an adaptive quadrature to evaluate integrals in the dual objective and its derivatives. The accuracy of the numerical solution to the dual problem plays a key role in the time step restriction for the kinetic scheme. We study in detail the difficulties in the dual problem that arise near the boundary of realizable moments, where quadrature formulas are less reliable and the Hessian of the dual objective function is highly ill-conditioned. Extensive numerical experiments are performed to illustrate these difficulties. In cases where the dual problem becomes 'too difficult' to solve numerically, we propose a regularization technique to artificially move moments away from the realizable boundary in a way that still preserves local particle concentrations. We present results of numerical simulations for two challenging test problems in order to quantify the characteristics of the optimization solver and to investigate when and how frequently the regularization is needed.
Complementarity of the Maldacena and Randall-Sundrum pictures
Duff; Liu
2000-09-01
We revive an old result, that one-loop corrections to the graviton propagator induce 1/r³ corrections to the Newtonian gravitational potential, and compute the coefficient due to closed loops of the U(N) N = 4 super-Yang-Mills theory that arises in Maldacena's anti-de Sitter/conformal field theory correspondence. We find exact agreement with the coefficient appearing in the Randall-Sundrum brane-world proposal. This provides more evidence for the complementarity of the two pictures. PMID:10970461
The methodological lesson of complementarity: Bohr’s naturalistic epistemology
NASA Astrophysics Data System (ADS)
Folse, H. J.
2014-12-01
Bohr’s intellectual journey began with the recognition that empirical phenomena implied the breakdown of classical mechanics in the atomic domain; this, in turn, led to his adoption of the ‘quantum postulate’ that justifies the ‘stationary states’ of his atomic model of 1913. His endeavor to develop a wider conceptual framework harmonizing both classical and quantum descriptions led to his proposal of the new methodological goals and standards of complementarity. Bohr’s claim that an empirical discovery can demand methodological revision justifies regarding his epistemological lesson as supporting a naturalistic epistemology.
Solving MPCC Problem with the Hyperbolic Penalty Function
NASA Astrophysics Data System (ADS)
Melo, Teófilo; Monteiro, M. Teresa T.; Matias, João
2011-09-01
The main goal of this work is to solve mathematical programs with complementarity constraints (MPCC) using nonlinear programming (NLP) techniques. A hyperbolic penalty function is used to solve MPCC problems by including the complementarity constraints in the penalty term. This penalty function [1] is twice continuously differentiable and combines features of both exterior and interior penalty methods. A set of AMPL problems from MacMPEC [2] is tested and a comparative study is performed.
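One widely used form of the hyperbolic penalty (following Xavier's formulation; the precise form in the abstract's reference [1] may differ) is P(y) = -λy + sqrt(λ²y² + τ²) for a constraint y ≥ 0. A minimal sketch with illustrative parameter values:

```python
import math

def hyperbolic_penalty(y, lam, tau):
    """One common form of the hyperbolic penalty for a constraint
    y >= 0 (assumed form; the paper's exact variant may differ).
    Smooth everywhere: ~0 deep inside the feasible region (y >> 0),
    ~2*lam*|y| far outside it (y << 0), so it interpolates between
    interior- and exterior-penalty behaviour."""
    return -lam * y + math.sqrt(lam * lam * y * y + tau * tau)

lam, tau = 1.0, 0.1           # made-up penalty parameters
p_in = hyperbolic_penalty(5.0, lam, tau)    # feasible: nearly zero
p_out = hyperbolic_penalty(-5.0, lam, tau)  # infeasible: ~ 2*lam*5
p_zero = hyperbolic_penalty(0.0, lam, tau)  # exactly tau at y = 0
```

Because the function is C-infinity for τ > 0, a complementarity product such as z·w can be driven toward zero by penalizing y = -z·w without introducing the nonsmoothness that makes raw MPCC constraints fail standard NLP constraint qualifications.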
Indivisibility, Complementarity and Ontology: A Bohrian Interpretation of Quantum Mechanics
NASA Astrophysics Data System (ADS)
Roldán-Charria, Jairo
2014-12-01
The interpretation of quantum mechanics presented in this paper is inspired by two ideas that are fundamental in Bohr's writings: indivisibility and complementarity. Further basic assumptions of the proposed interpretation are completeness, universality and conceptual economy. In the interpretation, decoherence plays a fundamental role for the understanding of measurement. A general and precise conception of complementarity is proposed. It is fundamental in this interpretation to make a distinction between ontological reality, constituted by everything that does not depend at all on the collectivity of human beings, nor on their decisions or limitations, nor on their existence, and empirical reality constituted by everything that not being ontological is, however, intersubjective. According to the proposed interpretation, neither the dynamical properties, nor the constitutive properties of microsystems like mass, charge and spin, are ontological. The properties of macroscopic systems and space-time are also considered to belong to empirical reality. The acceptance of the above mentioned conclusion does not imply a total rejection of the notion of ontological reality. In the paper, utilizing the Aristotelian ideas of general cause and potentiality, a relation between ontological reality and empirical reality is proposed. Some glimpses of ontological reality, in the form of what can be said about it, are finally presented.
Products of weak values: Uncertainty relations, complementarity, and incompatibility
NASA Astrophysics Data System (ADS)
Hall, Michael J. W.; Pati, Arun Kumar; Wu, Junde
2016-05-01
The products of weak values of quantum observables are shown to be of value in deriving quantum uncertainty and complementarity relations, for both weak and strong measurement statistics. First, a "product representation formula" allows the standard Heisenberg uncertainty relation to be derived from a classical uncertainty relation for complex random variables. We show this formula also leads to strong uncertainty relations for unitary operators and underlies an interpretation of weak values as optimal (complex) estimates of quantum observables. Furthermore, we show that two incompatible observables that are weakly and strongly measured in a weak measurement context obey a complementarity relation under the interchange of these observables, in the form of an upper bound on the product of the corresponding weak values. Moreover, general tradeoff relations between weak purity, quantum purity, and quantum incompatibility, and also between weak and strong joint probability distributions, are obtained based on products of real and imaginary components of weak values, where these relations quantify the degree to which weak probabilities can take anomalous values in a given context.
ERIC Educational Resources Information Center
Laird, Heather; Vande Kemp, Hendrika
1987-01-01
Explored the level of family therapist complementarity in the early, middle and late stages of therapy performing a micro-analysis of Salvador Minuchin with one family in successful therapy. Level of therapist complementarity was signficantly greater in the early and late stages than in the middle stage, and was significantly correlated with…
ERIC Educational Resources Information Center
Rothe, J. Peter
This article focuses on the linkage between the quantitative and qualitative distance education research methods. The concept that serves as the conceptual link is termed "complementarity." The definition of complementarity emerges through a simulated study of FernUniversitat's mentors. The study shows that in the case of the mentors, educational…
Interpersonal Complementarity in the Mental Health Intake: A Mixed-Methods Study
ERIC Educational Resources Information Center
Rosen, Daniel C.; Miller, Alisa B.; Nakash, Ora; Halperin, Lucila; Alegria, Margarita
2012-01-01
The study examined which socio-demographic differences between clients and providers influenced interpersonal complementarity during an initial intake session; that is, behaviors that facilitate harmonious interactions between client and provider. Complementarity was assessed using blinded ratings of 114 videotaped intake sessions by trained…
NASA Astrophysics Data System (ADS)
Ke, Hong-Wei; Liu, Tan; Li, Xue-Qian
2014-09-01
It is suggested that there is an underlying symmetry which relates the quark and lepton sectors. Namely, among the mixing matrix elements of Cabibbo-Kobayashi-Maskawa for quarks and Pontecorvo-Maki-Nakagawa-Sakata for leptons there exist complementarity relations at a high energy scale (such as the seesaw or even the grand unification theory scale). We assume that these relations persist as the matrix elements run down to the electroweak scale. Observable breaking of the rational relation is attributed to the existence of sterile neutrinos that mix with the active neutrinos to produce the observable Pontecorvo-Maki-Nakagawa-Sakata matrix. We show that involvement of a sterile neutrino in the (3+1) model yields |U_e4|² = 0.040, |U_μ4|² = 0.009, and sin²2α = 0.067. We also find a new self-complementarity ϑ_12 + ϑ_23 + ϑ_13 + α ≈ 90°. The numbers are generally consistent with those obtained by fitting recent measurements; notably, in this scenario the sterile neutrino does not upset the LEP data, i.e., the number of neutrino types remains very close to 3.
Munro, P D; Jackson, C M; Winzor, D J
1998-04-20
Attention is drawn to a need for caution in the thermodynamic characterization of nonspecific binding of a large ligand to a linear acceptor such as a polynucleotide or a polysaccharide, because of the potential for misidentification of a transient (pseudoequilibrium) state as true equilibrium. The time course of equilibrium attainment during the binding of a large ligand to nonspecific three-residue sequences of a linear acceptor lattice has been simulated, either by numerical integration of the system of ordinary differential equations or by a Monte Carlo procedure, to identify the circumstances under which the kinetics of elimination of suboptimal ligand attachment (called the parking problem) create such difficulties. These simulations have demonstrated that the potential for the existence of a transient plateau in the time course of equilibrium attainment increases greatly (i) with increasing extent of acceptor saturation (i.e., with increasing ligand concentration), (ii) with increasing magnitude of the binding constant, and (iii) with increasing length of the acceptor lattice. Because the capacity of the polymer lattice for ligand is most readily determined under conditions conducive to essentially stoichiometric interaction, the parameter so obtained is likely to reflect the transient (irreversible) rather than the equilibrium binding capacity. A procedure is described for evaluating the equilibrium capacity from that irreversible parameter, and is illustrated by application to published results [M. Nesheim, M.N. Blackburn, C.M. Lawler, K.G. Mann, J. Biol. Chem. 261 (1986) 3214-3221] for the stoichiometric titration of heparin with thrombin. PMID:17029698
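The 'parking problem' can be illustrated with a minimal Monte Carlo sketch: irreversible deposition of three-site ligands on a hypothetical 1D lattice jams at roughly 82% site coverage, short of the full occupancy that slow rearrangement toward equilibrium would eventually allow. The lattice size and attempt count are made up; this is not the authors' simulation:

```python
import random

def rsa_trimer_coverage(n_sites, rng, attempts_factor=50):
    """Irreversible random sequential adsorption ('parking') of
    ligands that each occupy 3 consecutive lattice sites.  Once the
    lattice jams, gaps of 1-2 sites remain unusable, so the apparent
    capacity falls short of the equilibrium capacity of n_sites/3
    ligands -- the transient plateau the abstract warns about."""
    occupied = [False] * n_sites
    for _ in range(attempts_factor * n_sites):
        i = rng.randrange(n_sites - 2)          # random landing site
        if not (occupied[i] or occupied[i + 1] or occupied[i + 2]):
            occupied[i] = occupied[i + 1] = occupied[i + 2] = True
    return sum(occupied) / n_sites

rng = random.Random(1)                          # seeded for repeatability
theta = rsa_trimer_coverage(30_000, rng)
# jamming coverage for trimers is ~0.82: roughly 18% of sites end up
# stranded in gaps too short for another ligand
```

A capacity measured from such a jammed (irreversible) state would overestimate the number of blocked sites per bound ligand, which is why the abstract's correction from the irreversible parameter to the equilibrium capacity matters.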
NASA Astrophysics Data System (ADS)
Chyba, David Edward
This dissertation presents new results for the steady states of a detuned ring laser with a saturable absorber. The treatment is based on a semiclassical model which assumes homogeneously broadened two-level atoms. Part 1 presents a solution of the Maxwell-Bloch equations for the longitudinal dependence of the steady states of this system. The solution is then simplified by use of the mean field approximation. Graphical results in the mean field approximation are presented for squared electric field versus operating frequency, and for each of these versus cavity tuning and laser excitation. Various cavity linewidths and both resonant and non-resonant amplifier and absorber line center frequencies are considered. The most notable finding is that cavity detuning breaks the degeneracies previously found in the steady state solutions to the fully tuned case. This led to the prediction that an actual system will bifurcate from the zero intensity solution to a steady state solution as laser excitation increases from zero, rather than to the small amplitude pulsations found for the model with mathematically exact tuning of the cavity and the media line centers. Other phenomena suggested by the steady state results include tuning-dependent hysteresis and bistability, and instability due to the appearance of another steady state solution. Results for the case in which the media have different line center frequencies suggest non-monotonic behavior of the electric field amplitude as laser excitation varies, as well as hysteresis and bistability. Part 2 presents a formulation of the linearized stability problem for the steady state solutions discussed in the first part. Thus the effects of detuning and of the other parameters describing the system are incorporated into the stability analysis. The equations of the system are linearized about both the mean field steady states and about the longitudinally dependent steady states. Expansion in Fourier spatial modes is used in the
Interference and complementarity for two-photon hybrid entangled states
Nogueira, W. A. T.; Santibanez, M.; Delgado, A.; Saavedra, C.; Neves, L.; Lima, G.; Padua, S.
2010-10-15
In this work we generate two-photon hybrid entangled states (HESs), where the polarization of one photon is entangled with the transverse spatial degree of freedom of the second photon. The photon pair is created by parametric down-conversion in a polarization-entangled state. A birefringent double-slit couples the polarization and spatial degrees of freedom of these photons, and finally, suitable spatial and polarization projections generate the HES. We investigate some interesting aspects of the two-photon hybrid interference and present this study in the context of the complementarity relation that exists between the visibility of the one-photon and that of the two-photon interference patterns.
Complementarity of Neutrinoless Double Beta Decay and Cosmology
Dodelson, Scott; Lykken, Joseph
2014-03-20
Neutrinoless double beta decay experiments constrain one combination of neutrino parameters, while cosmic surveys constrain another. This complementarity opens up an exciting range of possibilities. If neutrinos are Majorana particles, and the neutrino masses follow an inverted hierarchy, then the upcoming sets of both experiments will detect signals. The combined constraints will pin down not only the neutrino masses but also constrain one of the Majorana phases. If the hierarchy is normal, then a beta decay detection with the upcoming generation of experiments is unlikely, but cosmic surveys could constrain the sum of the masses to be relatively heavy, thereby producing a lower bound for the neutrinoless double beta decay rate, and therefore an argument for a next generation beta decay experiment. In this case as well, a combination of the phases will be constrained.
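The complementarity described above runs through the effective Majorana mass m_ββ = |Σ_i U_ei² m_i|, which beta decay probes, while cosmology constrains the sum of the masses. A minimal sketch of the inverted-hierarchy band, using illustrative oscillation parameters (the numerical values below are round-number assumptions, not taken from the paper):

```python
import numpy as np

# Illustrative oscillation parameters (assumptions, not values from the abstract)
s12sq, s13sq = 0.307, 0.022          # sin^2(theta12), sin^2(theta13)
dm21sq, dm31sq = 7.5e-5, 2.5e-3      # mass-squared splittings in eV^2

def m_betabeta(m_lightest, alpha21, alpha31, inverted=False):
    """|sum_i U_ei^2 m_i| in eV for a given lightest mass and Majorana phases."""
    if inverted:
        m3 = m_lightest
        m1 = np.sqrt(m3**2 + dm31sq - dm21sq)
        m2 = np.sqrt(m3**2 + dm31sq)
    else:
        m1 = m_lightest
        m2 = np.sqrt(m1**2 + dm21sq)
        m3 = np.sqrt(m1**2 + dm31sq)
    c13sq = 1.0 - s13sq
    total = ((1 - s12sq) * c13sq * m1
             + s12sq * c13sq * m2 * np.exp(1j * alpha21)
             + s13sq * m3 * np.exp(1j * alpha31))
    return abs(total)

# Inverted-hierarchy floor: scan the Majorana phases with m_lightest -> 0
phases = np.linspace(0, 2 * np.pi, 200)
mbb_ih = [m_betabeta(0.0, a, b, inverted=True) for a in phases for b in phases]
print(f"IH range: {min(mbb_ih)*1e3:.1f} - {max(mbb_ih)*1e3:.1f} meV")
```

Scanning the two phases reproduces the familiar inverted-hierarchy band of roughly 20-50 meV; a cosmological constraint on the mass sum (or on a phase) narrows this band, which is the complementarity at work.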
Complementarity of quantum discord and classically accessible information
Zwolak, Michael P.; Zurek, Wojciech H.
2013-05-20
The sum of the Holevo quantity (that bounds the capacity of quantum channels to transmit classical information about an observable) and the quantum discord (a measure of the quantumness of correlations of that observable) yields an observable-independent total given by the quantum mutual information. This split naturally delineates information about quantum systems accessible to observers – information that is redundantly transmitted by the environment – while showing that it is maximized for the quasi-classical pointer observable. Other observables are accessible only via correlations with the pointer observable. In addition, we prove an anti-symmetry property relating accessible information and discord. It shows that information becomes objective – accessible to many observers – only as quantum information is relegated to correlations with the global environment, and, therefore, locally inaccessible. Lastly, the resulting complementarity explains why, in a quantum Universe, we perceive objective classical reality while flagrantly quantum superpositions are out of reach.
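The split I = χ + D can be checked numerically on a toy two-qubit system. The sketch below computes the quantum mutual information, the Holevo quantity for the pointer observable, and the observable-dependent discord for a partially decohered Bell pair; the state and the pointer basis are illustrative choices, not taken from the paper.

```python
import numpy as np

def entropy(rho):
    """von Neumann entropy in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log2(ev)).sum())

# Partially decohered Bell pair (an illustrative state):
# rho = p*|Phi+><Phi+| + (1-p)/2*(|00><00| + |11><11|), pointer basis {|0>,|1>}
p = 0.6
phi = np.zeros(4); phi[0] = phi[3] = 2 ** -0.5
rho = p * np.outer(phi, phi)
rho[0, 0] += (1 - p) / 2
rho[3, 3] += (1 - p) / 2

r4 = rho.reshape(2, 2, 2, 2)                  # indices: S, E, S', E'
rho_S = np.trace(r4, axis1=1, axis2=3)        # partial trace over E
rho_E = np.trace(r4, axis1=0, axis2=2)        # partial trace over S
mutual_info = entropy(rho_S) + entropy(rho_E) - entropy(rho)

# Holevo quantity for the pointer observable: chi = S(rho_E) - sum_i p_i S(rho_E|i)
probs = np.diag(rho_S).real
cond = [r4[i, :, i, :] / probs[i] for i in range(2)]   # conditional E states
holevo = entropy(rho_E) - sum(probs[i] * entropy(cond[i]) for i in range(2))
discord = mutual_info - holevo                # the split: I = chi + D

print(f"I = {mutual_info:.4f}, chi = {holevo:.4f}, D = {discord:.4f}")
```

For this family of states the pointer-basis Holevo quantity stays at one full bit while decoherence (smaller p) drains the discord, matching the abstract's picture of classical information surviving redundantly in the environment.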
Maximally coherent mixed states: Complementarity between maximal coherence and mixedness
NASA Astrophysics Data System (ADS)
Singh, Uttam; Bera, Manabendra Nath; Dhar, Himadri Shekhar; Pati, Arun Kumar
2015-05-01
Quantum coherence is a key element in topical research on quantum resource theories and a primary facilitator for design and implementation of quantum technologies. However, the resourcefulness of quantum coherence is severely restricted by environmental noise, which is indicated by the loss of information in a quantum system, measured in terms of its purity. In this work, we derive the limits imposed by the mixedness of a quantum system on the amount of quantum coherence that it can possess. We obtain an analytical trade-off between the two quantities that upper-bounds the maximum quantum coherence for fixed mixedness in a system. This gives rise to a class of quantum states, "maximally coherent mixed states," whose coherence cannot be increased further under any purity-preserving operation. For the above class of states, quantum coherence and mixedness satisfy a complementarity relation, which is crucial to understand the interplay between a resource and noise in open quantum systems.
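The trade-off can be made concrete with standard measures: the l1-norm of coherence C and the normalized linear entropy (mixedness) M. The sketch below checks the complementarity bound (C/(d-1))² + M ≤ 1 on random qutrit states; the bound's form follows the paper, while the random-state construction is a generic Ginibre sampler chosen for illustration.

```python
import numpy as np

def l1_coherence(rho):
    """Sum of absolute values of the off-diagonal elements."""
    return np.sum(np.abs(rho)) - np.sum(np.abs(np.diag(rho)))

def mixedness(rho):
    """Normalized linear entropy: d/(d-1) * (1 - Tr rho^2)."""
    d = rho.shape[0]
    return d / (d - 1) * (1 - np.trace(rho @ rho).real)

def random_density_matrix(d, rng):
    """Random state via a Ginibre matrix (a standard construction)."""
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

rng = np.random.default_rng(0)
d = 3
for _ in range(1000):
    rho = random_density_matrix(d, rng)
    lhs = (l1_coherence(rho) / (d - 1)) ** 2 + mixedness(rho)
    assert lhs <= 1 + 1e-9, "complementarity bound violated"
print("bound (C/(d-1))^2 + M <= 1 holds for 1000 random qutrits")
```

Equality is reached both by the maximally mixed state (C = 0, M = 1) and by maximally coherent pure states (C = d-1, M = 0); the maximally coherent mixed states interpolate along the boundary.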
Positive effects of neighborhood complementarity on tree growth in a Neotropical forest.
Chen, Yuxin; Wright, S Joseph; Muller-Landau, Helene C; Hubbell, Stephen P; Wang, Yongfan; Yu, Shixiao
2016-03-01
Numerous grassland experiments have found evidence for a complementarity effect, an increase in productivity with higher plant species richness due to niche partitioning. However, empirical tests of complementarity in natural forests are rare. We conducted a spatially explicit analysis of 518 433 growth records for 274 species from a 50-ha tropical forest plot to test neighborhood complementarity, the idea that a tree grows faster when it is surrounded by more dissimilar neighbors. We found evidence for complementarity: focal tree growth rates increased by 39.8% and 34.2% with a doubling of neighborhood multi-trait dissimilarity and phylogenetic dissimilarity, respectively. Dissimilarity from neighbors in maximum height had the most important effect on tree growth among the six traits examined; indeed, its effect was substantially larger than that of the multi-trait dissimilarity index. Neighborhood complementarity effects were strongest for light-demanding species, and decreased in importance with increasing shade tolerance of the focal individuals. Simulations demonstrated that the observed neighborhood complementarities were sufficient to produce positive stand-level biodiversity-productivity relationships. We conclude that neighborhood complementarity is important for productivity in this tropical forest, and that scaling down to individual-level processes can advance our understanding of the mechanisms underlying stand-level biodiversity-productivity relationships. PMID:27197403
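A neighborhood dissimilarity index of the kind tested here can be sketched in a generic distance-weighted form (this is an illustrative crowding-style index; the paper's exact model specification is not reproduced):

```python
import numpy as np

def neighborhood_dissimilarity(focal_trait, nbr_traits, nbr_dbh, nbr_dist):
    """Mean absolute trait distance to neighbors, weighted by neighbor size
    (dbh) and inverse squared distance, as in common crowding indices."""
    w = nbr_dbh / nbr_dist**2
    d = np.abs(nbr_traits - focal_trait)
    return float((w * d).sum() / w.sum())

# A focal tree with two similar neighbors and one very different one
focal = 20.0                                   # e.g., maximum height (m)
traits = np.array([18.0, 22.0, 40.0])
dbh = np.array([10.0, 12.0, 11.0])             # stem diameters (cm)
dist = np.array([3.0, 4.0, 5.0])               # distances to focal tree (m)
print(f"dissimilarity: {neighborhood_dissimilarity(focal, traits, dbh, dist):.2f}")
```

The hypothesis tested in the abstract is that growth increases with this quantity; large, close, and functionally different neighbors dominate the index.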
Black hole complementarity with local horizons and Horowitz-Maldacena's proposal
NASA Astrophysics Data System (ADS)
Hong, Sungwook E.; Hwang, Dong-il; Yeom, Dong-han; Zoe, Heeseung
2008-12-01
To implement the consistent black hole complementarity principle, we need two assumptions: first, there exists a singularity near the center, and second, global horizons are the same as local horizons. However, these assumptions are not true in general. In this paper, we study a charged black hole in which the second assumption may not hold. From the previous simulations, we have argued that the event horizon is quite close to the outer horizon, and this does not appear to threaten black hole complementarity; however, the Cauchy horizon can be different from the inner horizon, and a violation of complementarity may be possible. To maintain complementarity, we need to assume a selection principle between the singularity and the Hawking radiation generating surface; we suggest that Horowitz-Maldacena's proposal can be useful for this purpose. Finally, we discuss some conditions under which the selection principle may not work.
Wiedemann, H.
1981-11-01
Since no linear colliders have been built yet it is difficult to know at what energy the linear cost scaling of linear colliders drops below the quadratic scaling of storage rings. There is, however, no doubt that a linear collider facility for a center of mass energy above say 500 GeV is significantly cheaper than an equivalent storage ring. In order to make the linear collider principle feasible at very high energies a number of problems have to be solved. There are two kinds of problems: those related to the feasibility of the principle, and those associated with minimizing the cost of constructing and operating such a facility. This lecture series describes the problems and possible solutions. Since the real test of a principle requires the construction of a prototype I will in the last chapter describe the SLC project at the Stanford Linear Accelerator Center.
Complementarity and Area-Efficiency in the Prioritization of the Global Protected Area Network
Kullberg, Peter; Toivonen, Tuuli; Montesino Pouzols, Federico; Lehtomäki, Joona; Di Minin, Enrico; Moilanen, Atte
2015-01-01
Complementarity and cost-efficiency are widely used principles for protected area network design. Despite the wide use and robust theoretical underpinnings, their effects on the performance and patterns of priority areas are rarely studied in detail. Here we compare two approaches for identifying the management priority areas inside the global protected area network: 1) a scoring-based approach, used in a recently published analysis, and 2) a spatial prioritization method, which accounts for complementarity and area-efficiency. Using the same IUCN species distribution data, the complementarity method found an equal-area set of priority areas with double the mean species ranges covered compared to the scoring-based approach. The complementarity set also had 72% more species with full ranges covered, and left entirely uncovered only half as many species as the scoring approach. Protected areas in our complementarity-based solution were on average smaller and geographically more scattered. The large difference between the two solutions highlights the need for critical thinking about the selected prioritization method. According to our analysis, accounting for complementarity and area-efficiency can lead to considerable improvements when setting management priorities for the global protected area network. PMID:26678497
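The contrast between scoring and complementarity-based selection can be sketched with the classic greedy set-cover heuristic on toy presence/absence data (the matrix below is a random illustration, not the IUCN data used in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n_sites, n_species = 40, 60
# Toy presence/absence matrix (rows: sites, cols: species); an illustration only
presence = rng.random((n_sites, n_species)) < 0.08

def scoring_selection(presence, k):
    """Pick the k sites with the highest species richness, ignoring overlap."""
    richness = presence.sum(axis=1)
    return list(np.argsort(richness)[::-1][:k])

def greedy_complementarity(presence, k):
    """At each step pick the site covering the most still-unrepresented species."""
    covered = np.zeros(presence.shape[1], dtype=bool)
    chosen = []
    for _ in range(k):
        gains = (presence & ~covered).sum(axis=1)
        gains[chosen] = -1                      # never re-pick a site
        best = int(np.argmax(gains))
        chosen.append(best)
        covered |= presence[best]
    return chosen

k = 8
cov_score = np.any(presence[scoring_selection(presence, k)], axis=0).sum()
cov_comp = np.any(presence[greedy_complementarity(presence, k)], axis=0).sum()
print(f"species covered: scoring={cov_score}, complementarity={cov_comp}")
```

Scoring tends to pick rich but mutually redundant sites; the greedy complementarity rule trades individual richness for marginal new coverage, mirroring the gap reported in the abstract.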
Rapid online analysis of local feature detectors and their complementarity.
Ehsan, Shoaib; Clark, Adrian F; McDonald-Maier, Klaus D
2013-01-01
A vision system that can assess its own performance and take appropriate actions online to maximize its effectiveness would be a step towards achieving the long-cherished goal of imitating humans. This paper proposes a method for performing an online performance analysis of local feature detectors, the primary stage of many practical vision systems. It advocates the spatial distribution of local image features as a good performance indicator and presents a metric that can be calculated rapidly, concurs with human visual assessments and is complementary to existing offline measures such as repeatability. The metric is shown to provide a measure of complementarity for combinations of detectors, correctly reflecting the underlying principles of individual detectors. Qualitative results on well-established datasets for several state-of-the-art detectors are presented based on the proposed measure. Using a hypothesis testing approach and a newly-acquired, larger image database, statistically-significant performance differences are identified. Different detector pairs and triplets are examined quantitatively and the results provide a useful guideline for combining detectors in applications that require a reasonable spatial distribution of image features. A principled framework for combining feature detectors in these applications is also presented. Timing results reveal the potential of the metric for online applications. PMID:23966187
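The idea of spatial distribution as a performance indicator can be sketched with a grid-based entropy score (this is a stand-in coverage measure for illustration; the paper's actual metric is not reproduced here):

```python
import numpy as np

def spatial_coverage(points, width, height, bins=8):
    """Illustrative coverage score: normalized entropy of keypoint counts over
    a coarse grid. 1.0 means a perfectly even spread of detected features."""
    hist, _, _ = np.histogram2d(
        [p[0] for p in points], [p[1] for p in points],
        bins=bins, range=[[0, width], [0, height]])
    prob = hist.ravel() / hist.sum()
    prob = prob[prob > 0]
    entropy = -(prob * np.log(prob)).sum()
    return entropy / np.log(bins * bins)

rng = np.random.default_rng(0)
w, h = 640, 480
uniform_pts = rng.uniform([0, 0], [w, h], size=(500, 2))
clustered_pts = rng.normal([320, 240], 30, size=(500, 2)).clip([0, 0], [w - 1, h - 1])
print(f"uniform:   {spatial_coverage(uniform_pts, w, h):.2f}")
print(f"clustered: {spatial_coverage(clustered_pts, w, h):.2f}")
```

A detector whose keypoints cluster in one region scores low, while an evenly spread detector scores near 1; combining detectors whose coverage maps differ raises the joint score, which is one way to read the paper's notion of detector complementarity.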
Complementarity reveals bound entanglement of two twisted photons
NASA Astrophysics Data System (ADS)
Hiesmayr, Beatrix C.; Löffler, Wolfgang
2013-08-01
We demonstrate the detection of bipartite bound entanglement as predicted by the Horodeckis in 1998. Bound entangled states, being heavily mixed entangled quantum states, can be produced by incoherent addition of pure entangled states. Until 1998 it was thought that such mixing could always be reversed by entanglement distillation; however, this turned out to be impossible for bound entangled states. The purest form of bound entanglement is that of only two particles, which requires higher-dimensional (d > 2) quantum systems. We realize this using photon qutrit (d = 3) pairs, produced by spontaneous parametric downconversion, that are entangled in the orbital angular momentum degree of freedom, which is scalable to high dimensions. Entanglement of the photons is confirmed via a ‘maximum complementarity protocol’. This conceptually simple protocol requires only maximal complementarity of the measurement bases; we show that it can also detect bound entanglement. We explore the bipartite qutrit space and find that, experimentally as well, a significant portion of the entangled states are actually bound entangled.
Complementarity of dark matter searches in the phenomenological MSSM
Cahill-Rowley, Matthew; Cotta, Randy; Drlica-Wagner, Alex; Funk, Stefan; Hewett, JoAnne; Ismail, Ahmed; Rizzo, Tom; Wood, Matthew
2015-03-11
As is well known, the search for and eventual identification of dark matter in supersymmetry requires a simultaneous, multipronged approach with important roles played by the LHC as well as both direct and indirect dark matter detection experiments. We examine the capabilities of these approaches in the 19-parameter phenomenological MSSM which provides a general framework for complementarity studies of neutralino dark matter. We summarize the sensitivity of dark matter searches at the 7 and 8 (and eventually 14) TeV LHC, combined with those by Fermi, CTA, IceCube/DeepCore, COUPP, LZ and XENON. The strengths and weaknesses of each of these techniques are examined and contrasted and their interdependent roles in covering the model parameter space are discussed in detail. We find that these approaches explore orthogonal territory and that advances in each are necessary to cover the supersymmetric weakly interacting massive particle parameter space. We also find that different experiments have widely varying sensitivities to the various dark matter annihilation mechanisms, some of which would be completely excluded by null results from these experiments.
NASA Astrophysics Data System (ADS)
Giovannacci, D.; Detalle, V.; Martos-Levif, D.; Ogien, J.; Bernikola, E.; Tornari, V.; Hatzigiannakis, K.; Mouhoubi, K.; Bodnar, J.-L.; Walker, G.-C.; Brissaud, D.; Trichereau, B.; Jackson, B.; Bowen, J.
2015-06-01
The abbey's church of Chaalis, in the North of Paris, was founded by Louis VI as a Cistercian monastery on 10th January 1137. In 2013, in the framework of the European Commission's 7th Framework Program project CHARISMA [grant agreement no. 228330], the chapel was used as a practical case-study for application of the work done in a task devoted to best practices in historical buildings and monuments. In the chapel, three areas were identified as relevant. The first area was used to make an exercise on diagnosis of the different deterioration patterns. The second area was used to analyze a restored area. The third one was selected to test some hypotheses on the possibility of using portable instruments to answer some questions related to the deterioration problems. To inspect this area, different tools were used: visible fluorescence under UV, a THz system, Stimulated Infra-Red Thermography (SIRT), Digital Holographic Speckle Pattern Interferometry (DHSPI), and a condition report by a conservator-restorer. The complementarity and synergy offered by the profitable use of the different integrated tools are clearly shown in this practical exercise.
Variationally consistent discretization schemes and numerical algorithms for contact problems
NASA Astrophysics Data System (ADS)
Wohlmuth, Barbara
We consider variationally consistent discretization schemes for mechanical contact problems. Most of the results can also be applied to other variational inequalities, such as those for phase transition problems in porous media, for plasticity or for option pricing applications from finance. The starting point is to weakly incorporate the constraint into the setting and to reformulate the inequality in the displacement in terms of a saddle-point problem. Here, the Lagrange multiplier represents the surface forces, and the constraints are restricted to the boundary of the simulation domain. Having a uniform inf-sup bound, one can then establish optimal low-order a priori convergence rates for the discretization error in the primal and dual variables. In addition to the abstract framework of linear saddle-point theory, complementarity terms have to be taken into account. The resulting inequality system is solved by rewriting it equivalently by means of the non-linear complementarity function as a system of equations. Although it is not differentiable in the classical sense, semi-smooth Newton methods, yielding super-linear convergence rates, can be applied and easily implemented in terms of a primal-dual active set strategy. Quite often the solution of contact problems has a low regularity, and the efficiency of the approach can be improved by using adaptive refinement techniques. Different standard types, such as residual- and equilibrated-based a posteriori error estimators, can be designed based on the interpretation of the dual variable as Neumann boundary condition. For the fully dynamic setting it is of interest to apply energy-preserving time-integration schemes. However, the differential algebraic character of the system can result in high oscillations if standard methods are applied. A possible remedy is to modify the fully discretized system by a local redistribution of the mass. Numerical results in two and three dimensions illustrate the wide range of
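The reformulation described above — rewriting the complementarity conditions as a non-smooth equation and applying semi-smooth Newton as a primal-dual active set strategy — can be sketched on a 1D obstacle problem (an illustrative model problem, not the paper's contact setting):

```python
import numpy as np

# Illustrative 1D obstacle problem: -u'' = f on (0,1), u(0)=u(1)=0, with the
# constraint u >= g, multiplier lam >= 0, and complementarity lam*(u - g) = 0.
# The primal-dual active set strategy is semi-smooth Newton applied to the
# NCP reformulation C(u, lam) = lam - max(0, lam - c*(u - g)) = 0.
n = 99
h = 1.0 / (n + 1)
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2   # -u'' stencil
f = np.full(n, -30.0)          # load pushing u down onto the obstacle
g = np.full(n, -0.05)          # flat obstacle below zero

u, lam = np.zeros(n), np.zeros(n)
c = 1.0
for it in range(30):
    active = (lam - c * (u - g)) > 0           # nodes predicted in contact
    inactive = ~active
    u_new = np.empty(n)
    u_new[active] = g[active]                  # contact: u pinned to the obstacle
    if inactive.any():                         # free nodes: (A u)_I = f_I,
        AII = A[np.ix_(inactive, inactive)]    # contact values moved to the rhs
        rhs = f[inactive] - A[np.ix_(inactive, active)] @ g[active]
        u_new[inactive] = np.linalg.solve(AII, rhs)
    lam_new = np.zeros(n)
    lam_new[active] = (A @ u_new - f)[active]  # residual becomes the contact force
    if np.array_equal(active, (lam_new - c * (u_new - g)) > 0):
        break                                  # active set settled: converged
    u, lam = u_new, lam_new

print(f"converged in {it+1} iterations, contact nodes: {active.sum()}")
```

Each iteration is one semi-smooth Newton step: the active set plays the role of the generalized derivative, and the method terminates once the set stops changing, typically in a handful of iterations.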
Accounting for complementarity to maximize monitoring power for species management.
Tulloch, Ayesha I T; Chadès, Iadine; Possingham, Hugh P
2013-10-01
To choose among conservation actions that may benefit many species, managers need to monitor the consequences of those actions. Decisions about which species to monitor from a suite of different species being managed are hindered by natural variability in populations and uncertainty in several factors: the ability of the monitoring to detect a change, the likelihood of the management action being successful for a species, and how representative species are of one another. However, the literature provides little guidance about how to account for these uncertainties when deciding which species to monitor to determine whether the management actions are delivering outcomes. We devised an approach that applies decision science and selects the best complementary suite of species to monitor to meet specific conservation objectives. We created an index for indicator selection that accounts for the likelihood of successfully detecting a real trend due to a management action and whether that signal provides information about other species. We illustrated the benefit of our approach by analyzing a monitoring program for invasive predator management aimed at recovering 14 native Australian mammals of conservation concern. Our method selected the species that provided more monitoring power at lower cost relative to the current strategy and traditional approaches that consider only a subset of the important considerations. Our benefit function accounted for natural variability in species growth rates, uncertainty in the responses of species to the prescribed action, and how well species represent others. Monitoring programs that ignore uncertainty, likelihood of detecting change, and complementarity between species will be more costly and less efficient and may waste funding that could otherwise be used for management. PMID:24073812
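The structure of such a complementarity-aware benefit function can be sketched with toy numbers (the probabilities and representation matrix below are illustrative; the paper's exact index is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 14                                        # managed species
p_detect = rng.uniform(0.3, 0.9, n)           # chance monitoring detects a real trend
p_success = rng.uniform(0.4, 0.95, n)         # chance the action works for the species
rep = rng.uniform(0, 1, (n, n))               # rep[i, j]: how well i speaks for j
np.fill_diagonal(rep, 1.0)

def benefit(monitored):
    """Expected number of species whose outcome we learn about: each species j
    is credited via its best monitored surrogate i."""
    if not monitored:
        return 0.0
    m = list(monitored)
    signal = (p_detect[m] * p_success[m])[:, None] * rep[m, :]
    return signal.max(axis=0).sum()

budget = 4
chosen = []
for _ in range(budget):                       # greedy complementary suite
    gains = [(benefit(chosen + [i]) - benefit(chosen), i)
             for i in range(n) if i not in chosen]
    chosen.append(max(gains)[1])
print(f"monitor species {sorted(chosen)}, expected benefit {benefit(chosen):.2f}")
```

Because the benefit credits each species through its best monitored surrogate, adding a second species that duplicates the first yields little gain, so the greedy suite automatically favors complementary indicators over individually "best" ones.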
Self-Complementarity within Proteins: Bridging the Gap between Binding and Folding
Basu, Sankar; Bhattacharyya, Dhananjay; Banerjee, Rahul
2012-01-01
Complementarity, in terms of both shape and electrostatic potential, has been quantitatively estimated at protein-protein interfaces and used extensively to predict the specific geometry of association between interacting proteins. In this work, we attempted to place both binding and folding on a common conceptual platform based on complementarity. To that end, we estimated (for the first time to our knowledge) electrostatic complementarity (Em) for residues buried within proteins. Em measures the correlation of surface electrostatic potential at protein interiors. The results show fairly uniform and significant values for all amino acids. Interestingly, hydrophobic side chains also attain appreciable complementarity primarily due to the trajectory of the main chain. Previous work from our laboratory characterized the surface (or shape) complementarity (Sm) of interior residues, and both of these measures have now been combined to derive two scoring functions to identify the native fold amid a set of decoys. These scoring functions are somewhat similar to functions that discriminate among multiple solutions in a protein-protein docking exercise. The performance of both of these functions on state-of-the-art databases was comparable to, if not better than, that of most currently available scoring functions. Thus, analogously to interfacial residues of protein chains associated (docked) with specific geometry, amino acids found in the native interior have to satisfy fairly stringent constraints in terms of both Sm and Em. The functions were also found to be useful for correctly identifying the same fold for two sequences with low sequence identity. Finally, inspired by the Ramachandran plot, we developed a plot of Sm versus Em (referred to as the complementarity plot) that identifies residues with suboptimal packing and electrostatics which appear to be correlated to coordinate errors. PMID:22713576
A methodology to quantify and optimize time complementarity between hydropower and solar PV systems
NASA Astrophysics Data System (ADS)
Kougias, Ioannis; Szabó, Sándor; Monforti-Ferrario, Fabio; Huld, Thomas; Bódis, Katalin
2016-04-01
Hydropower and solar energy are expected to play a major role in achieving renewable energy sources' (RES) penetration targets. However, the integration of RES in the energy mix needs to overcome the technical challenges that are related to grid's operation. Therefore, there is an increasing need to explore approaches where different RES will operate under a synergetic approach. Ideally, hydropower and solar PV systems can be jointly developed in such systems where their electricity output profiles complement each other as much as possible and minimize the need for reserve capacities and storage costs. A straightforward way to achieve that is by optimizing the complementarity among RES systems both over time and spatially. The present research developed a methodology that quantifies the degree of time complementarity between small-scale hydropower stations and solar PV systems and examines ways to increase it. The methodology analyses high-resolution spatial and temporal data for solar radiation obtained from the existing PVGIS model (available online at: http://re.jrc.ec.europa.eu/pvgis/) and associates it with hydrological information of water inflows to a hydropower station. It builds on an exhaustive optimization algorithm that tests possible alterations of the PV system installation (azimuth, tilt) aiming to increase the complementarity, with minor compromises in the total solar energy output. The methodology has been tested in several case studies and the results indicated variations among regions and different hydraulic regimes. In some cases a small compromise in the solar energy output showed significant increases of the complementarity, while in other cases the effect is not that strong. Our contribution aims to present these findings in detail and initiate a discussion on the role and gains of increased complementarity between solar and hydropower energies. Reference: Kougias I, Szabó S, Monforti-Ferrario F, Huld T, Bódis K (2016). A methodology for
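The core of the methodology — score the anti-correlation of the two monthly production profiles, then search PV installation parameters under a cap on sacrificed energy — can be sketched with a deliberately crude seasonal model (the profiles and the tilt dependence below are toy assumptions, not PVGIS data or the paper's model):

```python
import numpy as np

months = np.arange(12)
# Toy monthly profiles: snowmelt-fed hydro peaking in spring (month 4)
hydro = 1.0 + 0.8 * np.cos(2 * np.pi * (months - 4) / 12)

def pv_profile(tilt_deg):
    """Very crude seasonal PV model: low tilt is summer-peaked, high tilt
    shifts yield toward winter, and total yield peaks near 35 degrees."""
    w = tilt_deg / 90.0
    summer = np.cos(2 * np.pi * (months - 6) / 12)
    winter = np.cos(2 * np.pi * months / 12)
    shape = 1.0 + 0.7 * (1 - w) * summer + 0.4 * w * winter
    yield_factor = 1.0 - 0.5 * ((tilt_deg - 35) / 90.0) ** 2
    return yield_factor * shape

def complementarity(a, b):
    """Map Pearson correlation r in [-1, 1] to a score in [0, 1];
    1 means perfectly anti-correlated (fully complementary) profiles."""
    r = np.corrcoef(a, b)[0, 1]
    return (1.0 - r) / 2.0

# Exhaustive tilt search, capped at a 5% energy sacrifice vs the best tilt
tilts = np.arange(0, 91, 5)
energies = np.array([pv_profile(t).sum() for t in tilts])
feasible = energies >= 0.95 * energies.max()
scores = np.array([complementarity(hydro, pv_profile(t)) for t in tilts])
best = tilts[feasible][np.argmax(scores[feasible])]
print(f"chosen tilt: {best} deg, complementarity {scores[tilts == best][0]:.2f}")
```

Even in this toy setting the search reproduces the abstract's finding: a small (here 5%) energy compromise can buy a markedly better match between the hydro and PV production profiles.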
ERIC Educational Resources Information Center
Roorda, Debora L.; Koomen, Helma M. Y.; Spilt, Jantine L.; Thijs, Jochem T.; Oort, Frans J.
2013-01-01
The present study investigated whether the complementarity principle (mutual interactive behaviors are opposite on control and similar on affiliation) applies to teacher-child interactions within the kindergarten classroom. Furthermore, it was examined whether interactive behaviors and complementarity depended on children's externalizing and…
NASA Astrophysics Data System (ADS)
Mohanty, R. K.; Talwar, Jyoti; Khosla, Noopur
2012-05-01
We report the application of two-parameter alternating group explicit (TAGE) iteration and Newton-TAGE iteration methods for the solution of the nonlinear differential equation u″=F(x, u, u′) subject to linear mixed boundary conditions on a non-uniform mesh. In both cases, we use only three non-uniform grid points. The convergence theory for the TAGE iteration method is analyzed. Numerical examples are considered to demonstrate computationally the utility of the TAGE iteration methods.
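The underlying discretization — three-point finite differences for u″ = F(x, u, u′) on a non-uniform mesh — can be sketched with a plain Newton solve (this uses a direct Newton iteration for illustration, not the TAGE splitting, and a standard test problem with a known solution):

```python
import numpy as np

# Nonlinear BVP test case: u'' = 1.5*u**2, u(0)=4, u(1)=1, exact u = 4/(1+x)^2.
def F(x, u, up):
    return 1.5 * u**2      # F may depend on u' as well; this example does not

n = 40
x = np.linspace(0, 1, n + 1) ** 1.3          # graded (non-uniform) mesh
ua, ub = 4.0, 1.0

def residual(u_int):
    """Three-point non-uniform finite-difference residual at interior nodes."""
    u = np.concatenate(([ua], u_int, [ub]))
    hm = x[1:-1] - x[:-2]                    # h_i   (left spacing)
    hp = x[2:] - x[1:-1]                     # h_{i+1} (right spacing)
    denom = hm * hp * (hm + hp)
    upp = 2 * (hp * u[:-2] - (hm + hp) * u[1:-1] + hm * u[2:]) / denom
    up = (hm**2 * u[2:] - hp**2 * u[:-2] + (hp**2 - hm**2) * u[1:-1]) / denom
    return upp - F(x[1:-1], u[1:-1], up)

u = np.linspace(ua, ub, n + 1)[1:-1]         # guess interpolating the boundary values
for _ in range(20):
    r = residual(u)
    if np.linalg.norm(r) < 1e-10:
        break
    J = np.empty((n - 1, n - 1))
    eps = 1e-7
    for j in range(n - 1):                   # finite-difference Jacobian
        du = np.zeros(n - 1)
        du[j] = eps
        J[:, j] = (residual(u + du) - r) / eps
    u -= np.linalg.solve(J, r)

exact = 4.0 / (1.0 + x[1:-1]) ** 2
print(f"max error: {np.abs(u - exact).max():.2e}")
```

The TAGE methods in the abstract replace the direct linear solve in each Newton step with an alternating group explicit splitting, which is attractive for parallel implementation; the discrete system being solved is the same.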
Is the firewall consistent? Gedanken experiments on black hole complementarity and firewall proposal
Hwang, Dong-il; Lee, Bum-Hoon; Yeom, Dong-han E-mail: bhl@sogang.ac.kr
2013-01-01
In this paper, we discuss the black hole complementarity and the firewall proposal at length. Black hole complementarity is inevitable if we assume the following five things: unitarity, entropy-area formula, existence of an information observer, semi-classical quantum field theory for an asymptotic observer, and the general relativity for an in-falling observer. However, large N rescaling and the AMPS argument show that black hole complementarity is inconsistent. To salvage the basic philosophy of the black hole complementarity, AMPS introduced a firewall around the horizon. According to large N rescaling, the firewall should be located close to the apparent horizon. We investigate the consistency of the firewall with the two critical conditions: the firewall should be near the time-like apparent horizon and it should not affect the future infinity. Concerning this, we have introduced a gravitational collapse with a false vacuum lump which can generate a spacetime structure with disconnected apparent horizons. This reveals a situation in which there is a firewall outside of the event horizon, while the apparent horizon is absent. Therefore, the firewall, if it exists, not only modifies the general relativity for an in-falling observer, but also modifies the semi-classical quantum field theory for an asymptotic observer.
The Development of Working Memory: Exploring the Complementarity of Two Models.
ERIC Educational Resources Information Center
Kemps, Eva; De Rammelaere, Stijn; Desmet, Timothy
2000-01-01
Assessed 5-, 6-, 8- and 9-year-olds on two working memory tasks to explore the complementarity of working memory models postulated by Pascual-Leone and Baddeley. Pascual-Leone's theory offered a clear explanation of the results concerning central aspects of working memory. Baddeley's model provided a convincing account of findings regarding the…
ERIC Educational Resources Information Center
O'Toole, John; Dunn, Julie
2008-01-01
This article reports the findings of a research project that saw researchers from interaction design and drama education come together with a group of eleven and twelve year olds to investigate the current and future complementarity of computers and live classroom drama. The project was part of a pilot feasibility study commissioned by the…
Linkage Rules for Plant–Pollinator Networks: Trait Complementarity or Exploitation Barriers?
Santamaría, Luis; Rodríguez-Gironés, Miguel A
2007-01-01
Recent attempts to examine the biological processes responsible for the general characteristics of mutualistic networks focus on two types of explanations: nonmatching biological attributes of species that prevent the occurrence of certain interactions (“forbidden links”), arising from trait complementarity in mutualist networks (as compared to barriers to exploitation in antagonistic ones), and random interactions among individuals that are proportional to their abundances in the observed community (“neutrality hypothesis”). We explored the consequences that simple linkage rules based on the first two hypotheses (complementarity of traits versus barriers to exploitation) had on the topology of plant–pollination networks. Independent of the linkage rules used, the inclusion of a small set of traits (two to four) sufficed to account for the complex topological patterns observed in real-world networks. Optimal performance was achieved by a “mixed model” that combined rules that link plants and pollinators whose trait ranges overlap (“complementarity models”) and rules that link pollinators to flowers whose traits are below a pollinator-specific barrier value (“barrier models”). Deterrence of floral parasites (barrier model) is therefore at least as important as increasing pollination efficiency (complementarity model) in the evolutionary shaping of plant–pollinator networks. PMID:17253905
Complementarity of Galilean and Lorentz groups in the electrodynamics of inertially moving media
NASA Astrophysics Data System (ADS)
Barykin, V. N.
1989-09-01
A physical interpretation is given for the previously discovered ambiguity in the material equations of the electrodynamics of isotropic, inertially moving media. This ambiguity manifests itself in the complementarity of the equations which are invariant under the Galilean group, in some cases, and the Lorentz group, in other cases, as can be detected experimentally in the aberration phenomenon and the Doppler effect.
NASA Astrophysics Data System (ADS)
Sidorin, Anatoly
2010-01-01
In linear accelerators the particles are accelerated by either electrostatic fields or oscillating radio frequency (RF) fields. Accordingly, linear accelerators are divided into three large groups: electrostatic, induction and RF accelerators. An overview of the different types of accelerators is given. The stability of longitudinal and transverse motion in RF linear accelerators is briefly discussed. The methods of beam focusing in linacs are described.
Hernandez, Pauline; Picon-Cochard, Catherine
2016-01-01
Legume species promote productivity and increase the digestibility of herbage in grasslands. Considerable experimental data also indicate that communities with legumes produce more above-ground biomass than is expected from monocultures. While this has been attributed to N facilitation, evidence to identify the mechanisms involved is still lacking, and the role of complementarity in soil water acquisition by vertical root differentiation remains unclear. We used a 20-month mesocosm experiment to investigate the effects of species richness (single species, two- and five-species mixtures) and functional diversity (presence of the legume Trifolium repens) on a set of traits related to light, N and water use, measured at the community level. We found a positive effect of Trifolium presence and abundance on biomass production and complementarity effects in the two-species mixtures from the second year. In addition, the community traits related to water and N acquisition and use (leaf area, N, water-use efficiency, and deep root growth) were higher in the presence of Trifolium. With a multiple regression approach, we showed that the traits related to water acquisition and use were, with N, the main determinants of biomass production and complementarity effects in diverse mixtures. At shallow soil layers, the lower root mass of Trifolium and higher soil moisture should increase soil water availability for the associated grass species. Conversely, at the deep soil layer, higher root growth and lower soil moisture mirror the increased soil resource use of mixtures. Altogether, these results highlight N facilitation but also soil vertical differentiation, and thus complementarity for water acquisition and use, in mixtures with Trifolium. Contrary to grass-Trifolium mixtures, no significant over-yielding was measured for grass mixtures, even those having complementary traits (short and shallow vs. tall and deep). Thus, vertical complementarity for soil resource uptake in mixtures was not only
NASA Astrophysics Data System (ADS)
Sandhu, Amit
A sequential quadratic programming method is proposed for solving nonlinear optimal control problems subject to general path constraints, including mixed state-control and state-only constraints. The proposed algorithm further develops the approach proposed in [1], with the objective of eliminating the need for a large number of time intervals to arrive at an optimal solution. This is done by introducing an adaptive time discretization that allows a desirable control profile to form without using many intervals. The use of fewer time intervals reduces the computation time considerably. This algorithm is further used in this thesis to solve a trajectory planning problem for higher-elevation Mars landing.
NASA Astrophysics Data System (ADS)
Joglekar, D. M.; Mitra, M.
2015-12-01
The present investigation outlines a method based on the wavelet transform to analyze the vibration response of discrete piecewise linear oscillators, representative of beams with breathing cracks. The displacement and force variables in the governing differential equation are approximated using Daubechies compactly supported wavelets. An iterative scheme is developed to arrive at the optimum transform coefficients, which are back-transformed to obtain the time-domain response. A time-integration scheme, solving a linear complementarity problem at every time step, is devised to validate the proposed wavelet-based method. Applicability of the proposed solution technique is demonstrated by considering several test cases involving a cracked cantilever beam modeled as a bilinear SDOF system subjected to a harmonic excitation. In particular, the presence of higher-order harmonics, originating from the piecewise linear behavior, is confirmed in all the test cases. Parametric study involving the variations in the crack depth, and crack location is performed to bring out their effect on the relative strengths of higher-order harmonics. Versatility of the method is demonstrated by considering the cases such as mixed-frequency excitation and an MDOF oscillator with multiple bilinear springs. In addition to purporting the wavelet-based method as a viable alternative to analyze the response of piecewise linear oscillators, the proposed method can be easily extended to solve inverse problems unlike the other direct time integration schemes.
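The time-integration scheme in the abstract above solves a linear complementarity problem (LCP) at every step: find z >= 0 such that w = Mz + q >= 0 and zᵀw = 0 (the same conditions that describe unilateral contact, w >= 0, p >= 0, wᵀp = 0). For a small symmetric positive-definite M, a projected Gauss-Seidel sweep is one standard way to solve it. The sketch below is a generic illustration under assumed data: the matrix M, vector q, and the name lcp_pgs are inventions for this example, not taken from the paper.

```python
def lcp_pgs(M, q, iters=200):
    """Projected Gauss-Seidel for the LCP:
    find z >= 0 with w = M z + q >= 0 and z^T w = 0.
    M is a list of rows; converges for symmetric positive-definite M."""
    n = len(q)
    z = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            # residual of row i excluding the diagonal term
            r = q[i] + sum(M[i][j] * z[j] for j in range(n) if j != i)
            # complementarity projection: z_i = max(0, -r / M_ii)
            z[i] = max(0.0, -r / M[i][i])
    return z

# Illustrative 2x2 system (values are assumptions, not from the paper)
M = [[2.0, 1.0], [1.0, 2.0]]
q = [-1.0, 1.0]
z = lcp_pgs(M, q)
w = [q[i] + sum(M[i][j] * z[j] for j in range(2)) for i in range(2)]
```

For this toy system the iteration converges to z = [0.5, 0.0] and w = [0.0, 1.5], so in each component either z_i or w_i vanishes, which is exactly the complementarity condition enforced at every time step.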
Hydro-elastic complementarity in black branes at large D
NASA Astrophysics Data System (ADS)
Emparan, Roberto; Izumi, Keisuke; Luna, Raimon; Suzuki, Ryotaku; Tanabe, Kentaro
2016-06-01
We obtain the effective theory for the non-linear dynamics of black branes — both neutral and charged, in asymptotically flat or anti-de Sitter spacetimes — to leading order in the inverse-dimensional expansion. We find that black branes evolve as viscous fluids, but when they settle down they are more naturally viewed as solutions of an elastic soap-bubble theory. The two views are complementary: the same variable is regarded in one case as the energy density of the fluid, in the other as the deformation of the elastic membrane. The large-D theory captures finite-wavelength phenomena beyond the conventional reach of hydrodynamics. For asymptotically flat charged black branes (either Reissner–Nordström or p-brane-charged black branes) it yields the non-linear evolution of the Gregory-Laflamme instability at large D and its endpoint at stable non-uniform black branes. For Reissner–Nordström AdS black branes we find that sound perturbations do not propagate (have purely imaginary frequency) when their wavelength is below a certain charge-dependent value. We also study the polarization of black branes induced by an external electric field.
A Structural Connection between Linear and 0-1 Integer Linear Formulations
ERIC Educational Resources Information Center
Adlakha, V.; Kowalski, K.
2007-01-01
The connection between linear and 0-1 integer linear formulations has attracted the attention of many researchers. The main reason triggering this interest has been an availability of efficient computer programs for solving pure linear problems including the transportation problem. Also the optimality of linear problems is easily verifiable…
ERIC Educational Resources Information Center
Walkiewicz, T. A.; Newby, N. D., Jr.
1972-01-01
A discussion of linear collisions between two or three objects is related to a junior-level course in analytical mechanics. The theoretical discussion uses a geometrical approach that treats elastic and inelastic collisions from a unified point of view. Experiments with a linear air track are described. (Author/TS)
NASA Astrophysics Data System (ADS)
Lu, Shen; Kim, Harrison M.
2014-12-01
This article presents a multi-scenario decomposition with complementarity constraints approach to wind farm layout design to maximize wind energy production under region boundary and inter-turbine distance constraints. A complementarity formulation technique is introduced such that the wind farm layout design can be described with a continuously differentiable optimization model, and a multi-scenario decomposition approach is proposed to ensure efficient solution with local optimality. To combine global exploration and local optimization, a hybrid solution algorithm is presented, which combines the multi-scenario approach with a bi-objective genetic algorithm that maximizes energy production and minimizes constraint violations simultaneously. A numerical case study demonstrates the effectiveness of the proposed approach.
Plant diversity increases spatio-temporal niche complementarity in plant-pollinator interactions.
Venjakob, Christine; Klein, Alexandra-Maria; Ebeling, Anne; Tscharntke, Teja; Scherber, Christoph
2016-04-01
Ongoing biodiversity decline impairs ecosystem processes, including pollination. Flower visitation, an important indicator of pollination services, is influenced by plant species richness. However, the spatio-temporal responses of different pollinator groups to plant species richness have not yet been analyzed experimentally. Here, we used an experimental plant species richness gradient to analyze plant-pollinator interactions with an unprecedented spatio-temporal resolution. We observed four pollinator functional groups (honeybees, bumblebees, solitary bees, and hoverflies) in experimental plots at three different vegetation strata between sunrise and sunset. Visits were modified by plant species richness interacting with time and space. Furthermore, the complementarity of pollinator functional groups in space and time was stronger in species-rich mixtures. We conclude that high plant diversity should ensure stable pollination services, mediated via spatio-temporal niche complementarity in flower visitation. PMID:27069585
Illustration of quantum complementarity using single photons interfering on a grating
NASA Astrophysics Data System (ADS)
Jacques, V.; Lai, N. D.; Dréau, A.; Zheng, D.; Chauvat, D.; Treussart, F.; Grangier, P.; Roch, J.-F.
2008-12-01
A recent experiment performed by Afshar et al (2007 Found. Phys. 37 295-305) has been interpreted as a violation of Bohr's complementarity principle between interference visibility and which-path information (WPI) in a two-path interferometer. We have reproduced this experiment, using true single-photon pulses propagating in a two-path wavefront-splitting interferometer realized with a Fresnel's biprism, and followed by a grating with adjustable transmitting slits. The measured values of interference visibility V and WPI, characterized by the distinguishability parameter D, are found to obey the complementarity relation V² + D² ≤ 1. This result demonstrates that the experiment can be perfectly explained by the standard interpretation of quantum mechanics.
Sobolev, Vladimir; Eyal, Eran; Gerzon, Sergey; Potapov, Vladimir; Babor, Mariana; Prilusky, Jaime; Edelman, Marvin
2005-07-01
We describe a suite of SPACE tools for analysis and prediction of structures of biomolecules and their complexes. LPC/CSU software provides a common definition of inter-atomic contacts and complementarity of contacting surfaces to analyze protein structure and complexes. In the current version of LPC/CSU, analyses of water molecules and nucleic acids have been added, together with improved and expanded visualization options using Chime or Java based Jmol. The SPACE suite includes servers and programs for: structural analysis of point mutations (MutaProt); side chain modeling based on surface complementarity (SCCOMP); building a crystal environment and analysis of crystal contacts (CryCo); construction and analysis of protein contact maps (CMA) and molecular docking software (LIGIN). The SPACE suite is accessed at http://ligin.weizmann.ac.il/space. PMID:15980496
Linearly Adjustable International Portfolios
Fonseca, R. J.; Kuhn, D.; Rustem, B.
2010-09-30
We present an approach to multi-stage international portfolio optimization based on the imposition of a linear structure on the recourse decisions. Multiperiod decision problems are traditionally formulated as stochastic programs. Scenario tree based solutions however can become intractable as the number of stages increases. By restricting the space of decision policies to linear rules, we obtain a conservative tractable approximation to the original problem. Local asset prices and foreign exchange rates are modelled separately, which allows for a direct measure of their impact on the final portfolio value.
Caliman, Adriano; Carneiro, Luciana S; Leal, João J F; Farjalla, Vinicius F; Bozelli, Reinaldo L; Esteves, Francisco A
2012-01-01
Tests of the biodiversity and ecosystem functioning (BEF) relationship have focused little attention on the importance of interactions between species diversity and other attributes of ecological communities such as community biomass. Moreover, BEF research has been mainly derived from studies measuring a single ecosystem process that often represents resource consumption within a given habitat. Focus on single processes has prevented us from exploring the characteristics of ecosystem processes that can be critical in helping us to identify how novel pathways throughout BEF mechanisms may operate. Here, we investigated whether and how the effects of biodiversity mediated by non-trophic interactions among benthic bioturbator species vary according to community biomass and ecosystem processes. We hypothesized that (1) bioturbator biomass and species richness interact to affect the rates of benthic nutrient regeneration [dissolved inorganic nitrogen (DIN) and total dissolved phosphorus (TDP)] and consequently bacterioplankton production (BP) and that (2) the complementarity effects of diversity will be stronger on BP than on nutrient regeneration because the former represents a more integrative process that can be mediated by multivariate nutrient complementarity. We show that the effects of bioturbator diversity on nutrient regeneration increased BP via multivariate nutrient complementarity. Consistent with our prediction, the complementarity effects were significantly stronger on BP than on DIN and TDP. The effects of the biomass-species richness interaction on complementarity varied among the individual processes, but the aggregated measures of complementarity over all ecosystem processes were significantly higher at the highest community biomass level. Our results suggest that the complementarity effects of biodiversity can be stronger on more integrative ecosystem processes, which integrate subsidiary "simpler" processes, via multivariate complementarity. In
Todres, L; Wheeler, S
2001-02-01
The focus of this paper draws on the thinking of Husserl, Dilthey and Heidegger to identify elements of the phenomenological movement that can provide focus and direction for qualitative research in nursing. The authors interpret this tradition in two ways: emphasizing the possible complementarity of phenomenology, hermeneutics and existentialism, and demonstrating how these emphases ask for grounding, reflexivity and humanization in qualitative research. The paper shows that the themes of grounding, reflexivity and humanization are particularly important for nursing research. PMID:11137717
Complementarity among four highly productive grassland species depends on resource availability.
Roscher, Christiane; Schmid, Bernhard; Kolle, Olaf; Schulze, Ernst-Detlef
2016-06-01
Positive species richness-productivity relationships are common in biodiversity experiments, but how resource availability modifies biodiversity effects in grass-legume mixtures composed of highly productive species is yet to be explicitly tested. We addressed this question by choosing two grasses (Arrhenatherum elatius and Dactylis glomerata) and two legumes (Medicago × varia and Onobrychis viciifolia) which are highly productive in monocultures and dominant in mixtures (the Jena Experiment). We established monocultures, all possible two- and three-species mixtures, and the four-species mixture under three different resource supply conditions (control, fertilization, and shading). Compared to the control, community biomass production decreased under shading (-56 %) and increased under fertilization (+12 %). Net diversity effects (i.e., mixture minus mean monoculture biomass) were positive in the control and under shading (on average +15 and +72 %, respectively) and negative under fertilization (-10 %). Positive complementarity effects in the control suggested resource partitioning and facilitation of growth through symbiotic N2 fixation by legumes. Positive complementarity effects under shading indicated that resource partitioning is also possible when growth is carbon-limited. Negative complementarity effects under fertilization suggested that external nutrient supply depressed facilitative grass-legume interactions due to increased competition for light. Selection effects, which quantify the dominance of species with particularly high monoculture biomasses in the mixture, were generally small compared to complementarity effects, and indicated that these species had comparable competitive strengths in the mixture. Our study shows that resource availability has a strong impact on the occurrence of positive diversity effects among tall and highly productive grass and legume species. PMID:26932467
Brown, Marion B; Schlacher, Thomas A; Schoeman, David S; Weston, Michael A; Huijbers, Chantal M; Olds, Andrew D; Connolly, Rod M
2015-10-01
Species composition is expected to alter ecological function in assemblages if species traits differ strongly. Such effects are often large and persistent for nonnative carnivores invading islands. Alternatively, high similarity in traits within assemblages creates a degree of functional redundancy in ecosystems. Here we tested whether species turnover results in functional ecological equivalence or complementarity, and whether invasive carnivores on islands significantly alter such ecological function. The model system consisted of vertebrate scavengers (dominated by raptors) foraging on animal carcasses on ocean beaches on two Australian islands, one with and one without invasive red foxes (Vulpes vulpes). Partitioning of scavenging events among species, carcass removal rates, and detection speeds were quantified using camera traps baited with fish carcasses at the dune-beach interface. Complete segregation of temporal foraging niches between mammals (nocturnal) and birds (diurnal) reflects complementarity in carrion utilization. Conversely, functional redundancy exists within the bird guild where several species of raptors dominate carrion removal in a broadly similar way. As predicted, effects of red foxes were large. They substantially changed the nature and rate of the scavenging process in the system: (1) foxes consumed over half (55%) of all carrion available at night, compared with negligible mammalian foraging at night on the fox-free island, and (2) significant shifts in the composition of the scavenger assemblages consuming beach-cast carrion are the consequence of fox invasion at one island. Arguably, in the absence of other mammalian apex predators, the addition of red foxes creates a new dimension of functional complementarity in beach food webs. However, this functional complementarity added by foxes is neither benign nor neutral, as marine carrion subsidies to coastal red fox populations are likely to facilitate their persistence as exotic
Kraut, Daniel A; Sigala, Paul A; Pybus, Brandon; Liu, Corey W; Ringe, Dagmar; Petsko, Gregory A
2006-01-01
A longstanding proposal in enzymology is that enzymes are electrostatically and geometrically complementary to the transition states of the reactions they catalyze and that this complementarity contributes to catalysis. Experimental evaluation of this contribution, however, has been difficult. We have systematically dissected the potential contribution to catalysis from electrostatic complementarity in ketosteroid isomerase. Phenolates, analogs of the transition state and reaction intermediate, bind and accept two hydrogen bonds in an active site oxyanion hole. The binding of substituted phenolates of constant molecular shape but increasing pKa models the charge accumulation in the oxyanion hole during the enzymatic reaction. As charge localization increases, the NMR chemical shifts of protons involved in oxyanion hole hydrogen bonds increase by 0.50–0.76 ppm/pKa unit, suggesting a bond shortening of ~0.02 Å/pKa unit. Nevertheless, there is little change in binding affinity across a series of substituted phenolates (ΔΔG = −0.2 kcal/mol/pKa unit). The small effect of increased charge localization on affinity occurs despite the shortening of the hydrogen bonds and a large favorable change in binding enthalpy (ΔΔH = −2.0 kcal/mol/pKa unit). This shallow dependence of binding affinity suggests that electrostatic complementarity in the oxyanion hole makes at most a modest contribution to catalysis of ~300-fold. We propose that geometrical complementarity between the oxyanion hole hydrogen-bond donors and the transition state oxyanion provides a significant catalytic contribution, and suggest that KSI, like other enzymes, achieves its catalytic prowess through a combination of modest contributions from several mechanisms rather than from a single dominant contribution. PMID:16602823
Climate Change Mitigation and Adaptation in the Land Use Sector: From Complementarity to Synergy
NASA Astrophysics Data System (ADS)
Duguma, Lalisa A.; Minang, Peter A.; van Noordwijk, Meine
2014-09-01
Currently, mitigation and adaptation measures are handled separately, due to differences in priorities for the measures and segregated planning and implementation policies at international and national levels. There is a growing argument that synergistic approaches to adaptation and mitigation could bring substantial benefits at multiple scales in the land use sector. Nonetheless, efforts to implement synergies between adaptation and mitigation measures are rare, due to the weak conceptual framing of the approach and constraining policy issues. In this paper, we explore the attributes of synergy and the necessary enabling conditions and discuss, as an example, experience with the Ngitili system in Tanzania that serves both adaptation and mitigation functions. An in-depth look into current practices suggests that more emphasis is laid on complementarity (i.e., mitigation projects providing adaptation co-benefits and vice versa) than on synergy. Unlike complementarity, synergy should emphasize functionally sustainable landscape systems in which adaptation and mitigation are optimized as part of multiple functions. We argue that the current practice of seeking co-benefits (complementarity) is a necessary but insufficient step toward addressing synergy. Moving forward from complementarity will require a paradigm shift from the current compartmentalization between mitigation and adaptation to systems thinking at the landscape scale. However, enabling policy, institutional, and investment conditions need to be developed at global, national, and local levels to achieve synergistic goals.
Christofilos, N.C.; Polk, I.J.
1959-02-17
Improvements in linear particle accelerators are described. A drift tube system for a linear ion accelerator reduces gap capacity between adjacent drift tube ends. This is accomplished by reducing the ratio of the diameter of the drift tube to the diameter of the resonant cavity. Concentration of magnetic field intensity at the longitudinal midpoint of the external surface of each drift tube is reduced by increasing the external drift tube diameter at the longitudinal center region.
Evidence of embodied cognition via speech and gesture complementarity
NASA Astrophysics Data System (ADS)
Chase, Evan A.; Wittmann, Michael C.
2013-01-01
We are studying how students talk and gesture about physics problems involving directionality. Students discussing physics use more than words and equations; gestures are also a meaningful element of their thinking. Data come from one-on-one interviews in which students were asked to gesture about the sign and direction of velocity, acceleration, and other quantities. Specific contexts are a ball toss in the presence and absence of air resistance, including situations where the ball starts at greater than terminal velocity. Students show an aptitude for representing up to 6 characteristics of the ball with 2 hands. They switch quickly while talking about velocity, acceleration, and the different forces, frequently representing more than one quantity using a single hand. We believe that much of their thinking resides in their hands, and that their gestures complement their speech, as indicated by moments when speech and gesture represent different quantities.
de Albuquerque, Fábio Suzart; Beier, Paul
2016-06-01
Given species inventories of all sites in a planning area, integer programming or heuristic algorithms can prioritize sites in terms of the site's complementary value, that is, the ability of the site to complement (add unrepresented species to) other sites prioritized for conservation. The utility of these procedures is limited because distributions of species are typically available only as coarse atlases or range maps, whereas conservation planners need to prioritize relatively small sites. If such coarse-resolution information can be used to identify small sites that efficiently represent species (i.e., downscaled), then such data can be useful for conservation planning. We develop and test a new type of surrogate for biodiversity, which we call downscaled complementarity. In this approach, complementarity values from large cells are downscaled to small cells, using statistical methods or simple map overlays. We illustrate our approach for birds in Spain by building models at coarse scale (50 × 50 km atlas of European birds, and global range maps of birds interpreted at the same 50 × 50 km grid size), using this model to predict complementary value for 10 × 10 km cells in Spain, and testing how well-prioritized cells represented bird distributions in an independent bird atlas of those 10 × 10 km cells. Downscaled complementarity was about 63-77% as effective as having full knowledge of the 10-km atlas data in its ability to improve on random selection of sites. Downscaled complementarity has relatively low data acquisition cost and meets representation goals well compared with other surrogates currently in use. Our study justifies additional tests to determine whether downscaled complementarity is an effective surrogate for other regions and taxa, and at spatial resolution finer than 10 × 10 km cells. Until such tests have been completed, we caution against assuming that any surrogate can reliably prioritize sites for species representation
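Complementarity-based prioritization of the kind tested above is commonly implemented as a greedy heuristic: repeatedly select the site that adds the most species not yet represented by previously chosen sites. The sketch below uses invented toy inventories; the site names, species, and the function greedy_complementarity are illustrative assumptions, not the authors' data or code.

```python
def greedy_complementarity(site_species, target):
    """Greedily pick sites, each adding the most unrepresented species,
    until every species in `target` is covered (or no site adds anything)."""
    covered, chosen = set(), []
    while len(covered & target) < len(target):
        # site with the largest complementary value (new species added)
        best = max(site_species, key=lambda s: len(site_species[s] - covered))
        gain = site_species[best] - covered
        if not gain:
            break  # remaining target species occur in no site
        chosen.append(best)
        covered |= site_species[best]
    return chosen, covered

# Toy inventories (illustrative, not real atlas data)
sites = {
    "A": {"sparrow", "wren", "kite"},
    "B": {"kite", "stork"},
    "C": {"stork", "ibis", "wren"},
}
order, covered = greedy_complementarity(
    sites, {"sparrow", "wren", "kite", "stork", "ibis"})
```

On these toy data the heuristic picks site A first (three new species) and then site C (two more), covering all five target species with two sites; site B is never needed because its species are already represented.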
Information complementarity: A new paradigm for decoding quantum incompatibility
Zhu, Huangjun
2015-01-01
The existence of observables that are incompatible or not jointly measurable is a characteristic feature of quantum mechanics, which lies at the root of a number of nonclassical phenomena, such as uncertainty relations, wave-particle dual behavior, Bell-inequality violation, and contextuality. However, no intuitive criterion is available for determining the compatibility of even two (generalized) observables, despite the overarching importance of this problem and intensive efforts of many researchers. Here we introduce an information theoretic paradigm together with an intuitive geometric picture for decoding incompatible observables, starting from two simple ideas: every observable can only provide limited information, and information is monotonic under data processing. By virtue of quantum estimation theory, we introduce a family of universal criteria for detecting incompatible observables and a natural measure of incompatibility, which are applicable to an arbitrary number of arbitrary observables. Based on this framework, we derive a family of universal measurement uncertainty relations, provide a simple information theoretic explanation of quantitative wave-particle duality, and offer new perspectives for understanding Bell nonlocality, contextuality, and the quantum precision limit. PMID:26392075
Information complementarity: A new paradigm for decoding quantum incompatibility
NASA Astrophysics Data System (ADS)
Zhu, Huangjun
2015-09-01
The existence of observables that are incompatible or not jointly measurable is a characteristic feature of quantum mechanics, which lies at the root of a number of nonclassical phenomena, such as uncertainty relations, wave-particle dual behavior, Bell-inequality violation, and contextuality. However, no intuitive criterion is available for determining the compatibility of even two (generalized) observables, despite the overarching importance of this problem and intensive efforts of many researchers. Here we introduce an information theoretic paradigm together with an intuitive geometric picture for decoding incompatible observables, starting from two simple ideas: every observable can only provide limited information, and information is monotonic under data processing. By virtue of quantum estimation theory, we introduce a family of universal criteria for detecting incompatible observables and a natural measure of incompatibility, which are applicable to an arbitrary number of arbitrary observables. Based on this framework, we derive a family of universal measurement uncertainty relations, provide a simple information theoretic explanation of quantitative wave-particle duality, and offer new perspectives for understanding Bell nonlocality, contextuality, and the quantum precision limit.
Complementarity of Historic Building Information Modelling and Geographic Information Systems
NASA Astrophysics Data System (ADS)
Yang, X.; Koehl, M.; Grussenmeyer, P.; Macher, H.
2016-06-01
In this paper, we discuss the potential of integrating semantically rich models from both Building Information Modelling (BIM) and Geographical Information Systems (GIS) to build detailed 3D historic models. BIM contributes to the creation of a digital representation having all physical and functional building characteristics in several dimensions, e.g. XYZ (3D), time, and the non-architectural information that is necessary for construction and management of buildings. GIS has potential in handling and managing spatial data, especially in exploring spatial relationships, and is widely used in urban modelling. However, when considering heritage modelling, the specificity of irregular historical components makes it problematic to create the enriched model according to its complex architectural elements obtained from point clouds. Therefore, some open issues limiting historic building 3D modelling are discussed in this paper: how to deal with the complex elements composing historic buildings in BIM and GIS environments, how to build the enriched historic model, and why construct different levels of detail? By solving these problems, conceptualization, documentation and analysis of enriched Historic Building Information Modelling are developed and compared to traditional 3D models aimed primarily at visualization.
Sparse linear programming subprogram
Hanson, R.J.; Hiebert, K.L.
1981-12-01
This report describes a subprogram, SPLP(), for solving linear programming problems. The package of subprogram units comprising SPLP() is written in Fortran 77. The subprogram SPLP() is intended for problems involving at most a few thousand constraints and variables. The subprograms are written to take advantage of sparsity in the constraint matrix. A very general problem statement is accepted by SPLP(). It allows upper, lower, or no bounds on the variables. Both the primal and dual solutions are returned as output parameters. The package has many optional features. Among them is the ability to save partial results and then use them to continue the computation at a later time.
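SPLP() itself is a Fortran 77 package, and its calling sequence is not reproduced here. As a rough modern analogue, the same kind of problem statement the report describes (a sparse constraint matrix, bounds on the variables, the primal solution returned) can be posed with SciPy; the toy problem below is our own illustration, not from the report.

```python
# Solve a small sparse LP: minimize c^T x subject to A_eq x = b_eq, x >= 0.
# The HiGHS backend of scipy.optimize.linprog accepts sparse constraint
# matrices, exploiting sparsity much as SPLP() does.
import numpy as np
from scipy.optimize import linprog
from scipy.sparse import csr_matrix

c = np.array([1.0, 2.0, 0.0])
A_eq = csr_matrix(np.array([[1.0, 1.0, 1.0],
                            [0.0, 1.0, 2.0]]))
b_eq = np.array([4.0, 3.0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
print(res.x, res.fun)   # optimum at x = (2.5, 0, 1.5), objective 2.5
```

Like SPLP(), this interface also returns dual information (in `res` attributes) alongside the primal solution.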
Non-linear aeroelastic prediction for aircraft applications
NASA Astrophysics Data System (ADS)
de C. Henshaw, M. J.; Badcock, K. J.; Vio, G. A.; Allen, C. B.; Chamberlain, J.; Kaynes, I.; Dimitriadis, G.; Cooper, J. E.; Woodgate, M. A.; Rampurawala, A. M.; Jones, D.; Fenwick, C.; Gaitonde, A. L.; Taylor, N. V.; Amor, D. S.; Eccles, T. A.; Denley, C. J.
2007-05-01
Current industrial practice for the prediction and analysis of flutter relies heavily on linear methods and this has led to overly conservative design and envelope restrictions for aircraft. Although the methods have served the industry well, it is clear that for a number of reasons the inclusion of non-linearity in the mathematical and computational aeroelastic prediction tools is highly desirable. The increase in available and affordable computational resources, together with major advances in algorithms, mean that non-linear aeroelastic tools are now viable within the aircraft design and qualification environment. The Partnership for Unsteady Methods in Aerodynamics (PUMA) Defence and Aerospace Research Partnership (DARP) was sponsored in 2002 to conduct research into non-linear aeroelastic prediction methods and an academic, industry, and government consortium collaborated to address the following objectives: To develop useable methodologies to model and predict non-linear aeroelastic behaviour of complete aircraft. To evaluate the methodologies on real aircraft problems. To investigate the effect of non-linearities on aeroelastic behaviour and to determine which have the greatest effect on the flutter qualification process. These aims have been very effectively met during the course of the programme and the research outputs include: New methods available to industry for use in the flutter prediction process, together with the appropriate coaching of industry engineers. Interesting results in both linear and non-linear aeroelastics, with comprehensive comparison of methods and approaches for challenging problems. Additional embryonic techniques that, with further research, will further improve aeroelastics capability. This paper describes the methods that have been developed and how they are deployable within the industrial environment. We present a thorough review of the PUMA aeroelastics programme together with a comprehensive review of the relevant research
Superconducting linear actuator
NASA Technical Reports Server (NTRS)
Johnson, Bruce; Hockney, Richard
1993-01-01
Special actuators are needed to control the orientation of large structures in space-based precision pointing systems. Electromagnetic actuators that presently exist are too large in size and their bandwidth is too low. Hydraulic fluid actuation also presents problems for many space-based applications. Hydraulic oil can escape in space and contaminate the environment around the spacecraft. A research study was performed that selected an electrically-powered linear actuator that can be used to control the orientation of a large pointed structure. This research surveyed available products, analyzed the capabilities of conventional linear actuators, and designed a first-cut candidate superconducting linear actuator. The study first examined theoretical capabilities of electrical actuators and determined their problems with respect to the application and then determined if any presently available actuators or any modifications to available actuator designs would meet the required performance. The best actuator was then selected based on available design, modified design, or new design for this application. The last task was to proceed with a conceptual design. No commercially-available linear actuator or modification capable of meeting the specifications was found. A conventional moving-coil dc linear actuator would meet the specification, but the back-iron for this actuator would weigh approximately 12,000 lbs. A superconducting field coil, however, eliminates the need for back iron, resulting in an actuator weight of approximately 1000 lbs.
Linear Algebraic Method for Non-Linear Map Analysis
Yu,L.; Nash, B.
2009-05-04
We present a newly developed method to analyze some non-linear dynamics problems such as the Henon map using a matrix analysis method from linear algebra. Choosing the Henon map as an example, we analyze the spectral structure, the tune-amplitude dependence, the variation of tune and amplitude during the particle motion, etc., using the method of Jordan decomposition which is widely used in conventional linear algebra.
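The linear-algebra view described in the abstract can be illustrated on the area-preserving (accelerator-style) Henon map, whose linearization at the origin is a pure rotation: the eigenvalues of that matrix encode the tune. A minimal numpy sketch; the tune value 0.205 is an assumed, illustrative parameter, not taken from the paper.

```python
# Henon map in accelerator form: rotate phase space by mu after a quadratic
# kick. The linear part is the rotation matrix R, whose complex eigenvalues
# e^{+/- i mu} give the linear tune nu = mu / (2 pi).
import numpy as np

nu0 = 0.205                       # assumed linear tune
mu = 2 * np.pi * nu0
R = np.array([[np.cos(mu),  np.sin(mu)],
              [-np.sin(mu), np.cos(mu)]])

def henon(z):
    """One turn of the Henon map: quadratic kick, then rotation."""
    x, p = z
    return R @ np.array([x, p + x * x])

# Track a small-amplitude particle for a few turns (motion stays bounded).
z = np.array([1e-3, 0.0])
for _ in range(3):
    z = henon(z)

# Eigen (spectral) analysis of the linear part recovers the tune.
lam = np.linalg.eigvals(R)
nu = abs(np.angle(lam[0])) / (2 * np.pi)
print(round(nu, 6))   # 0.205
```

At larger amplitudes the quadratic kick shifts the effective tune, which is the tune-amplitude dependence the paper analyzes via Jordan decomposition.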
Kolakofsky, D
1982-01-01
I isolated at least 30 different vesicular stomatitis virus defective interfering (DI) genomes, distinguished by chain length, by five independent undiluted passages of a repeatedly cloned virus plaque. Labeling of the 3' hydroxyl ends of these DI genomes and RNase digestion studies demonstrated that the ends of these DI genomes were terminally complementary to different extents (approximately 46 to 200 nucleotides). Mapping studies showed that the complementary ends of all of the DI genomes were derived from the 5' ends of the nondefective minus-strand genome. Regardless of the extent of terminal complementarity, all of the DI genomes synthesized the same 46-nucleotide minus-strand leader RNA. PMID:6281468
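Terminal complementarity of the kind measured here (a "panhandle" formed between the genome's two ends) can be quantified by asking over how many nucleotides the 3' end matches the reverse complement of the 5' end. A toy sketch with a made-up sequence; real DI genomes are thousands of nucleotides long, with roughly 46-200 nt complementary ends.

```python
# Measure how far a genome's 3' end is the reverse complement of its 5' end.

COMP = str.maketrans("ACGU", "UGCA")

def revcomp(seq):
    """Reverse complement of an RNA sequence."""
    return seq.translate(COMP)[::-1]

def terminal_complementarity(genome):
    """Length of the longest terminal stretch over which the 3' end
    base-pairs with the 5' end (panhandle length)."""
    n = 0
    while n < len(genome) // 2 and genome.endswith(revcomp(genome[:n + 1])):
        n += 1
    return n

# 5' end UUGAC...; a complementary 3' end is revcomp("UUGAC") = "GUCAA"
g = "UUGAC" + "AAAAAAAAAA" + "GUCAA"
print(terminal_complementarity(g))   # 5
```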
Linear Programming Problems for Generalized Uncertainty
ERIC Educational Resources Information Center
Thipwiwatpotjana, Phantipa
2010-01-01
Uncertainty occurs when there is more than one realization that can represent a piece of information. This dissertation concerns only discrete realizations of an uncertainty. Different interpretations of an uncertainty and their relationships are addressed when the uncertainty is not a probability of each realization. A well known model that can handle…
Linear Programming across the Curriculum
ERIC Educational Resources Information Center
Yoder, S. Elizabeth; Kurz, M. Elizabeth
2015-01-01
Linear programming (LP) is taught in different departments across college campuses with engineering and management curricula. Modeling an LP problem is taught in every linear programming class. As faculty teaching in Engineering and Management departments, the depth to which teachers should expect students to master this particular type of…
NASA Technical Reports Server (NTRS)
Ferencz, Donald C.; Viterna, Larry A.
1991-01-01
ALPS is a computer program which can be used to solve general linear program (optimization) problems. ALPS was designed for those who have minimal linear programming (LP) knowledge and features a menu-driven scheme to guide the user through the process of creating and solving LP formulations. Once created, the problems can be edited and stored in standard DOS ASCII files to provide portability to various word processors or even other linear programming packages. Unlike many math-oriented LP solvers, ALPS contains an LP parser that reads through the LP formulation and reports several types of errors to the user. ALPS provides a large amount of solution data which is often useful in problem solving. In addition to pure linear programs, ALPS can solve integer, mixed-integer, and binary problems. Pure linear programs are solved with the revised simplex method. Integer or mixed-integer programs are solved initially with the revised simplex method and then completed using the branch-and-bound technique. Binary programs are solved with the method of implicit enumeration. This manual describes how to use ALPS to create, edit, and solve linear programming problems. Instructions for installing ALPS on a PC compatible computer are included in the appendices along with a general introduction to linear programming. A programmer's guide is also included for assistance in modifying and maintaining the program.
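Of the three solution techniques the manual lists, implicit enumeration for binary programs is the simplest to sketch: a depth-first search over 0/1 assignments that prunes any branch whose optimistic bound cannot beat the incumbent solution. This is an illustrative sketch, not ALPS code, and the problem data are made up.

```python
# Implicit enumeration for: maximize c^T x subject to A x <= b, x binary.

def solve_binary(c, A, b):
    n = len(c)
    best = [float("-inf"), None]   # [best value, best assignment]

    def feasible(x):
        return all(sum(Ai[j] * x[j] for j in range(n)) <= bi
                   for Ai, bi in zip(A, b))

    def search(x, i, value):
        # Optimistic bound: pretend every remaining positive-profit
        # variable can be set to 1 without violating any constraint.
        bound = value + sum(cj for cj in c[i:] if cj > 0)
        if bound <= best[0]:
            return                 # prune: cannot beat the incumbent
        if i == n:
            if feasible(x) and value > best[0]:
                best[0], best[1] = value, x[:]
            return
        for bit in (1, 0):
            x.append(bit)
            search(x, i + 1, value + c[i] * bit)
            x.pop()

    search([], 0, 0)
    return best[0], best[1]

# maximize 5x1 + 4x2 + 3x3  subject to  2x1 + 3x2 + x3 <= 4
val, x = solve_binary([5, 4, 3], [[2, 3, 1]], [4])
print(val, x)   # 8 [1, 0, 1]
```

A production code would also prune partially infeasible branches; checking feasibility only at the leaves keeps the sketch short.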
Designing linear systolic arrays
Kumar, V.K.P.; Tsai, Y.C. (Dept. of Electrical Engineering)
1989-12-01
The authors develop a simple mapping technique to design linear systolic arrays. The basic idea of the technique is to map the computations of a certain class of two-dimensional systolic arrays onto one-dimensional arrays. Using this technique, systolic algorithms are derived for problems such as matrix multiplication and transitive closure on linearly connected arrays of PEs with constant I/O bandwidth. Compared to known designs in the literature, the technique leads to modular systolic arrays with constant hardware in each PE, few control lines, lexicographic data input/output, and improved delay time. The unidirectional flow of control and data in this design assures implementation of the linear array in the known fault models of wafer scale integration.
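The flavor of the technique (executing a two-dimensional computation on a one-dimensional array of PEs, with data streaming between neighbors at each beat) can be shown with a software simulation of a linear systolic array computing a matrix-vector product. This is an illustrative sketch, not the paper's exact mapping.

```python
# Simulate a linearly connected array of n PEs computing y = A x.
# The x values stream left to right, one PE per row of A; each PE multiplies
# the x value it currently holds by its next row entry and accumulates.

def systolic_matvec(A, x):
    n, m = len(A), len(x)
    acc = [0] * n                  # one accumulator per PE
    pipe = [None] * n              # x value held by each PE this beat
    step = [0] * n                 # row entries consumed by each PE
    for t in range(m + n - 1):     # beats until the pipeline drains
        # shift phase: pass x to the right neighbor; PE 0 reads the input
        for i in range(n - 1, 0, -1):
            pipe[i] = pipe[i - 1]
        pipe[0] = x[t] if t < m else None
        # compute phase: every PE holding data does one multiply-accumulate
        for i in range(n):
            if pipe[i] is not None:
                acc[i] += A[i][step[i]] * pipe[i]
                step[i] += 1
    return acc

A = [[1, 2], [3, 4], [5, 6]]
x = [7, 8]
print(systolic_matvec(A, x))   # [23, 53, 83]
```

Note the constant I/O bandwidth: only PE 0 reads external input, and each beat moves one value between neighbors, mirroring the constraints the paper's designs satisfy.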
Colgate, S.A.
1958-05-27
An improvement is presented in linear accelerators for charged particles with respect to the stable focusing of the particle beam. The improvement consists of providing a radial electric field transverse to the accelerating electric fields and introducing the beam of particles into the field at an angle. The result of the foregoing is a beam which spirals about the axis of the acceleration path. The combination of the electric fields and the angular motion of the particles cooperates to provide a stable and focused particle beam.
Emergence of complementarity and the Baconian roots of Niels Bohr's method
NASA Astrophysics Data System (ADS)
Perovic, Slobodan
2013-08-01
I argue that instead of a rather narrow focus on N. Bohr's account of complementarity as a particular and perhaps obscure metaphysical or epistemological concept (or as being motivated by such a concept), we should consider it to result from pursuing a particular method of studying physical phenomena. More precisely, I identify a strong undercurrent of the Baconian method of induction in Bohr's work that likely emerged during his experimental training and practice. When its development is analyzed in light of Baconian induction, complementarity emerges as a levelheaded rather than a controversial account, carefully elicited from a comprehensive grasp of the available experimental basis, shunning hasty metaphysically motivated generalizations based on partial experimental evidence. In fact, Bohr's insistence on the "classical" nature of observations in experiments, as well as the counterintuitive synthesis of wave and particle concepts that has puzzled scholars, seems a natural outcome (an updated instance) of the inductive method. Such analysis clarifies the intricacies of Schrödinger's early critique of the account as well as Bohr's response, which have been misinterpreted in the literature. If adequate, the analysis may lend considerable support to the view that Bacon explicated the general terms of an experimentally minded strand of the scientific method, developed and refined by scientists over the following three centuries.
Malleable nature of mRNA-protein compositional complementarity and its functional significance
Hlevnjak, Mario; Zagrovic, Bojan
2015-01-01
It has recently been demonstrated that nucleobase-density profiles of typical mRNA coding sequences exhibit a complementary relationship with nucleobase-interaction propensity profiles of their cognate protein sequences. This finding supports the idea that the genetic code developed in response to direct binding interactions between amino acids and appropriate nucleobases, but also suggests that present-day mRNAs and their cognate proteins may be physicochemically complementary to each other and bind. Here, we computationally recode complete Methanocaldococcus jannaschii, Escherichia coli and Homo sapiens mRNA transcriptomes and analyze how much complementary matching of synonymous mRNAs can vary, while keeping protein sequences fixed. We show that for most proteins there exist cognate mRNAs that improve, but also significantly worsen, the level of native matching (e.g. by 1.8 and 7.6 standard deviations on average for H. sapiens, respectively), with the least malleable proteins in this sense being strongly enriched in nuclear localization and DNA-binding functions. Even so, we show that the majority of recodings for most proteins result in pronounced complementarity. Our results suggest that the genetic code was designed for favorable, yet tunable, compositional complementarity between mRNAs and their cognate proteins, supporting the hypothesis that the interactions between the two were an important defining element behind the code's origin. PMID:25753660
NASA Astrophysics Data System (ADS)
Ramirez Camargo, L.; Zink, R.; Dorner, W.
2015-07-01
Spatial assessments of the potential of renewable energy sources (RES) have become a valuable information basis for policy and decision-making. These studies, however, do not explicitly consider the variability in time of RES such as solar energy or wind. Until now, the focus has usually been on economic profitability based on yearly balances, which does not allow a comprehensive examination of the complementarity of RES technologies. Increasing the temporal resolution of energy output estimates makes it possible to plan the aggregation of a diverse pool of RES plants, i.e., to conceive of the system as a virtual power plant (VPP). This paper presents a spatiotemporal analysis methodology to estimate the RES potential of municipalities. The methodology relies on a combination of open source geographic information systems (GIS) processing tools and the in-memory array processing environment of Python and NumPy. Beyond the typical identification of suitable locations to build power plants, it is possible to define which of them are best for a balanced local energy supply. A case study of a municipality, using spatial data with one square meter resolution and one hour temporal resolution, shows strong complementarity of photovoltaic and wind power. Furthermore, it is shown that a detailed deployment strategy for potential suitable locations for RES, calculated with modest computational requirements, can support municipalities in developing VPPs and improving security of supply.
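The complementarity of photovoltaic and wind output reported here is often screened with a simple correlation measure on hourly time series: a strongly negative correlation means one source tends to produce when the other does not. A toy sketch with synthetic profiles; both the data and the use of Pearson correlation are our illustrative assumptions, not the paper's metric.

```python
# Synthetic normalized hourly outputs over two days: PV peaks at midday,
# wind peaks at night, so the two series should anti-correlate.
import math

hours = range(48)
pv   = [max(0.0, math.sin(math.pi * (h % 24 - 6) / 12)) for h in hours]
wind = [0.5 + 0.4 * math.cos(math.pi * (h % 24) / 12) for h in hours]

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

r = pearson(pv, wind)
print(round(r, 3))   # strongly negative: the profiles complement each other
```

On a real assessment the series would come from the GIS-derived hourly output estimates for candidate plant locations rather than from closed-form profiles.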
What is complementarity?: Niels Bohr and the architecture of quantum theory
NASA Astrophysics Data System (ADS)
Plotnitsky, Arkady
2014-12-01
This article explores Bohr’s argument, advanced under the heading of ‘complementarity,’ concerning quantum phenomena and quantum mechanics, and its physical and philosophical implications. In Bohr, the term complementarity designates both a particular concept and an overall interpretation of quantum phenomena and quantum mechanics, in part grounded in this concept. While the argument of this article is primarily philosophical, it will also address, historically, the development and transformations of Bohr’s thinking, under the impact of the development of quantum theory and Bohr’s confrontation with Einstein, especially their exchange concerning the EPR experiment, proposed by Einstein, Podolsky and Rosen in 1935. Bohr’s interpretation was progressively characterized by a more radical epistemology, in its ultimate form, which was developed in the 1930s and with which I shall be especially concerned here, defined by his new concepts of phenomenon and atomicity. According to this epistemology, quantum objects are seen as indescribable and possibly even as inconceivable, and as manifesting their existence only in the effects of their interactions with measuring instruments upon those instruments, effects that define phenomena in Bohr’s sense. The absence of causality is an automatic consequence of this epistemology. I shall also consider how probability and statistics work under these epistemological conditions.
Wave-particle dualism and complementarity unraveled by a different mode
Menzel, Ralf; Puhlmann, Dirk; Heuer, Axel; Schleich, Wolfgang P.
2012-01-01
The precise knowledge of one of two complementary experimental outcomes prevents us from obtaining complete information about the other one. This formulation of Niels Bohr’s principle of complementarity when applied to the paradigm of wave-particle dualism—that is, to Young’s double-slit experiment—implies that the information about the slit through which a quantum particle has passed erases interference. In the present paper we report a double-slit experiment using two photons created by spontaneous parametric down-conversion where we observe interference in the signal photon despite the fact that we have located it in one of the slits due to its entanglement with the idler photon. This surprising aspect of complementarity comes to light by our special choice of the TEM01 pump mode. According to quantum field theory the signal photon is then in a coherent superposition of two distinct wave vectors giving rise to interference fringes analogous to two mechanical slits. PMID:22628561
Varghese, Sunil; Scott, Richard E
2004-01-01
Developing countries are exploring the role of telehealth to overcome the challenges of providing adequate health care services. However, this process faces disparities and a lack of complementarity in telehealth policy development. Telehealth has the potential to transcend geopolitical boundaries, yet telehealth policy developed in one jurisdiction may hamper applications in another. Understanding such policy complexities is essential for telehealth to realize its full global potential. This study investigated 12 East Asian countries that may represent a microcosm of the world, to determine if the telehealth policy response of countries could be categorized, and whether any implications could be identified for the development of complementary telehealth policy. The countries were Cambodia, China, Hong Kong, Indonesia, Japan, Malaysia, Myanmar, Singapore, South Korea, Taiwan, Thailand, and Vietnam. Three categories of country response were identified in regard to national policy support and development. The first category was "None" (Cambodia, Myanmar, and Vietnam) where international partners, driven by humanitarian concerns, lead telehealth activity. The second category was "Proactive" (China, Indonesia, Malaysia, Singapore, South Korea, Taiwan, and Thailand) where national policies were designed with the view that telehealth initiatives are a component of larger development objectives. The third was "Reactive" (Hong Kong and Japan), where policies were only proffered after telehealth activities were sustainable. It is concluded that although complementarity of telehealth policy development is not occurring, increased interjurisdictional telehealth activity, regional clusters, and concerted and coordinated effort amongst researchers, practitioners, and policy makers may alter this trend. PMID:15104917
Lombaert, Eric; Guillemaud, Thomas; Lundgren, Jonathan; Koch, Robert; Facon, Benoît; Grez, Audrey; Loomans, Antoon; Malausa, Thibaut; Nedved, Oldrich; Rhule, Emma; Staverlokk, Arnstein; Steenberg, Tove; Estoup, Arnaud
2014-12-01
Inferences about introduction histories of invasive species remain challenging because of the stochastic demographic processes involved. Approximate Bayesian computation (ABC) can help to overcome these problems, but such method requires a prior understanding of population structure over the study area, necessitating the use of alternative methods and an intense sampling design. In this study, we made inferences about the worldwide invasion history of the ladybird Harmonia axyridis by various population genetics statistical methods, using a large set of sampling sites distributed over most of the species' native and invaded areas. We evaluated the complementarity of the statistical methods and the consequences of using different sets of site samples for ABC inferences. We found that the H. axyridis invasion has involved two bridgehead invasive populations in North America, which have served as the source populations for at least six independent introductions into other continents. We also identified several situations of genetic admixture between differentiated sources. Our results highlight the importance of coupling ABC methods with more traditional statistical approaches. We found that the choice of site samples could affect the conclusions of ABC analyses comparing possible scenarios. Approaches involving independent ABC analyses on several sample sets constitute a sensible solution, complementary to standard quality controls based on the analysis of pseudo-observed data sets, to minimize erroneous conclusions. This study provides biologists without expertise in this area with detailed methodological and conceptual guidelines for making inferences about invasion routes when dealing with a large number of sampling sites and complex population genetic structures. PMID:25369988
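Approximate Bayesian computation in its simplest rejection form draws parameters from the prior, simulates data under them, and keeps the draws whose summary statistic lands near the observed one. A toy single-parameter sketch, far simpler than the multi-population, multi-scenario analyses used for H. axyridis; the model and numbers are made up.

```python
# ABC rejection sampling for one parameter lam of a toy stochastic model.
import random

random.seed(1)
observed_mean = 4.0                     # observed summary statistic

def simulate(lam, n=50):
    """Toy model: mean of n exponential draws with mean lam."""
    return sum(random.expovariate(1.0 / lam) for _ in range(n)) / n

accepted = []
for _ in range(5000):
    lam = random.uniform(0.1, 10.0)     # draw from a flat prior
    # rejection step: keep the draw if the simulated summary is close
    if abs(simulate(lam) - observed_mean) < 0.3:
        accepted.append(lam)

posterior_mean = sum(accepted) / len(accepted)
print(round(posterior_mean, 2))   # concentrates near the true value 4
```

Scenario comparison, as in the paper, repeats this with several competing demographic models and compares how often each model's simulations are accepted.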
NASA Technical Reports Server (NTRS)
2006-01-01
Context image for PIA03667: Linear Clouds [figure removed for brevity, see original site]
These clouds are located near the edge of the south polar region. The cloud tops are the puffy white features in the bottom half of the image.
Image information: VIS instrument. Latitude -80.1, Longitude 52.1 East. 17 meter/pixel resolution.
Note: this THEMIS visual image has not been radiometrically or geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time.
NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
Technology Transfer Automated Retrieval System (TEKTRAN)
Complementary resource use and redundancy of species that fulfil the same ecological role are two mechanisms that can increase and stabilize process rates in ecosystems. For example, predator complementarity and redundancy can determine prey consumption rates, in some cases providing invaluable cont...
ERIC Educational Resources Information Center
Stroup, Walter M.; Wilensky, Uri
2014-01-01
Placed in the larger context of broadening the engagement with systems dynamics and complexity theory in school-aged learning and teaching, this paper is intended to introduce, situate, and illustrate--with results from the use of network supported participatory simulations in classrooms--a stance we call "embedded complementarity" as an…
ERIC Educational Resources Information Center
Scupola, Ada
1999-01-01
Discussion of the publishing industry and its use of information and communication technologies focuses on the way in which electronic-commerce technologies are changing and could change the publishing processes, and develops a business complementarity model of electronic publishing to maximize profitability and improve the competitive position.…
26-10 Fab-digoxin complex: affinity and specificity due to surface complementarity.
Jeffrey, P D; Strong, R K; Sieker, L C; Chang, C Y; Campbell, R L; Petsko, G A; Haber, E; Margolies, M N; Sheriff, S
1993-01-01
We have determined the three-dimensional structures of the antigen-binding fragment of the anti-digoxin monoclonal antibody 26-10 in the uncomplexed state at 2.7 A resolution and as a complex with digoxin at 2.5 A resolution. Neither the antibody nor digoxin undergoes any significant conformational changes upon forming the complex. Digoxin interacts primarily with the antibody heavy chain and is oriented such that the carbohydrate groups are exposed to solvent and the lactone ring is buried in a deep pocket at the bottom of the combining site. Despite extensive interactions between antibody and antigen, no hydrogen bonds or salt links are formed between 26-10 and digoxin. Thus the 26-10-digoxin complex is unique among the known three-dimensional structures of antibody-antigen complexes in that specificity and high affinity arise primarily from shape complementarity. PMID:8234291
Morin, Xavier; Fahse, Lorenz; Scherer-Lorenzen, Michael; Bugmann, Harald
2011-12-01
Understanding the link between biodiversity and ecosystem functioning (BEF) is pivotal in the context of global biodiversity loss. Yet, long-term effects have been explored only weakly, especially for forests, and no clear evidence has been found regarding the underlying mechanisms. We explore the long-term relationship between diversity and productivity using a forest succession model. Extensive simulations show that tree species richness promotes productivity in European temperate forests across a large climatic gradient, mostly through strong complementarity between species. We show that this biodiversity effect emerges because increasing species richness promotes higher diversity in shade tolerance and growth ability, which results in forests responding faster to small-scale mortality events. Our study generalises results from short-term experiments in grasslands to forest ecosystems and demonstrates that competition for light alone induces a positive effect of biodiversity on productivity, thus providing a new angle for explaining BEF relationships. PMID:21955682
Complementarity of weak lensing and peculiar velocity measurements in testing general relativity
Song, Yong-Seon; Zhao, Gongbo; Bacon, David; Koyama, Kazuya; Nichol, Robert C.; Pogosian, Levon
2011-10-15
We explore the complementarity of weak lensing and galaxy peculiar velocity measurements to better constrain modifications to General Relativity. We find no evidence for deviations from General Relativity on cosmological scales from a combination of peculiar velocity measurements (for Luminous Red Galaxies in the Sloan Digital Sky Survey) with weak lensing measurements (from the Canada-France-Hawaii Telescope Legacy Survey). We provide a Fisher error forecast for a Euclid-like space-based survey including both lensing and peculiar velocity measurements and show that the expected constraints on modified gravity will be at least an order of magnitude better than with present data, i.e. we will obtain ≈5% errors on the modified gravity parametrization described here. We also present a model-independent method for constraining modified gravity parameters using tomographic peculiar velocity information, and apply this methodology to the present data set.
Klohnen, Eva C; Luo, Shanhong
2003-10-01
Little is known about whether personality characteristics influence initial attraction. Because adult attachment differences influence a broad range of relationship processes, the authors examined their role in 3 experimental attraction studies. The authors tested four major attraction hypotheses--self similarity, ideal-self similarity, complementarity, and attachment security--and examined both actual and perceptual factors. Replicated analyses across samples, designs, and manipulations showed that actual security and self similarity predicted attraction. With regard to perceptual factors, ideal similarity, self similarity, and security all were significant predictors. Whereas perceptual ideal and self similarity had incremental predictive power, perceptual security's effects were subsumed by perceptual ideal similarity. Perceptual self similarity fully mediated actual attachment similarity effects, whereas ideal similarity was only a partial mediator. PMID:14561124
Sensors for linear referencing
NASA Astrophysics Data System (ADS)
Goodwin, Cecil W. H.; Lau, John W.
1998-01-01
Two solutions to the vehicle location problem are commonly discussed for Intelligent Transportation Systems (ITS): active roadside beacons and global positioning system (GPS) satellites. This paper presents requirements for new linear referencing sensors, defined as sensors that will identify a vehicle's location along a roadway in terms of distance along the roadway from known points or by the automatic identification of known points. Requirements for linear referencing sensors come from new national location referencing standards being developed by initiatives of the US Department of Transportation, and from international location referencing standardization activities. Linear referencing sensors can extract information from the visual scene presented by the roadside environment, or from the environment illuminated by laser or microwave radiation. They can also be based on new, low cost techniques for labeling roads or by modulating lane reflectors or other regular road infrastructure components. Such sensors, singly and in combination, avoid the map matching problem common to vehicle navigation systems that rely on GPS, and can be deployed at much lower cost than roadside beacons, particularly when designed as one function of multi-purpose in-vehicle sensors and computers.
Cherfi, Y; Hemine, J; Douali, R; Beldjoudi, N; Ismaili, M; Leblond, J M; Legrand, C; Daoudi, A
2010-12-01
Linear and non-linear dielectric measurements were carried out on a ferroelectric liquid crystal stabilized by an anisotropic polymer network. The polymerization process was achieved at room temperature. It was performed from an achiral monomer in the ferroelectric chiral smectic C phase, exhibiting a very short helical pitch and a large polarization. The linear and non-linear dielectric spectroscopy was complemented by textural morphology observations as well as structural and ferroelectric characterizations. All these measurements were carried out on a pure ferroelectric liquid crystal material and on composite films containing two polymer concentrations. The increase of the polymer network density leads to a decrease of the dielectric strength determined in the linear and non-linear dielectric spectroscopy. The complementarity between the linear and non-linear dielectric measurements and their comparison with a theoretical model allowed the simultaneous determination of some physical parameters such as macroscopic polarization, rotational viscosity and twist elastic energy. We also discuss the effect of the polymer network density on the obtained physical parameters. PMID:21107879
NASA Astrophysics Data System (ADS)
Yamasaki, Tadashi; Houseman, Gregory; Hamling, Ian; Postek, Elek
2010-05-01
We have developed a new parallelized 3-D numerical code, OREGANO_VE, for the solution of the general visco-elastic problem in a rectangular block domain. The mechanical equilibrium equation is solved using the finite element method for a (non-)linear Maxwell visco-elastic rheology. Time-dependent displacement and/or traction boundary conditions can be applied. Matrix assembly is based on a tetrahedral element defined by 4 vertex nodes and 6 nodes located at the midpoints of the edges, and within which displacement is described by a quadratic interpolation function. For evaluating viscoelastic relaxation, an explicit time-stepping algorithm (Zienkiewicz and Cormeau, Int. J. Num. Meth. Eng., 8, 821-845, 1974) is employed. We test the accuracy of the OREGANO_VE implementation by comparing numerical and analytic (or semi-analytic half-space) solutions to different problems in a range of applications: (1) equilibration of stress in a constant density layer after gravity is switched on at t = 0 tests the implementation of spatially variable viscosity and non-Newtonian viscosity; (2) displacement of the welded interface between two blocks of differing viscosity tests the implementation of viscosity discontinuities; (3) displacement of the upper surface of a layer under applied normal load tests the implementation of time-dependent surface tractions; (4) visco-elastic response to dyke intrusion (compared with the solution in a half-space) tests the implementation of all aspects. In each case, the accuracy of the code is validated subject to use of a sufficiently small time step, providing assurance that the OREGANO_VE code can be applied to a range of visco-elastic relaxation processes in three dimensions, including post-seismic deformation and post-glacial uplift. The OREGANO_VE code includes a capability for representation of prescribed fault slip on an internal fault. The surface displacement associated with large earthquakes can be detected by some geodetic observations
Hlaing, Lwin Mar; Fahmida, Umi; Htet, Min Kyaw; Utomo, Budi; Firmansyah, Agus; Ferguson, Elaine L
2016-07-01
Poor feeding practices result in inadequate nutrient intakes in young children in developing countries. To improve practices, local food-based complementary feeding recommendations (CFR) are needed. This cross-sectional survey aimed to describe current food consumption patterns of 12-23-month-old Myanmar children (n 106) from Ayeyarwady region in order to identify nutrient requirements that are difficult to achieve using local foods and to formulate affordable and realistic CFR to improve dietary adequacy. Weekly food consumption patterns were assessed using a 12-h weighed dietary record, single 24-h recall and a 5-d food record. Food costs were estimated by market surveys. CFR were formulated by linear programming analysis using WHO Optifood software and evaluated among mothers (n 20) using trial of improved practices (TIP). Findings showed that Ca, Zn, niacin, folate and Fe were 'problem nutrients': nutrients that did not achieve 100 % recommended nutrient intake even when the diet was optimised. Chicken liver, anchovy and roselle leaves were locally available nutrient-dense foods that would fill these nutrient gaps. The final set of six CFR would ensure dietary adequacy for five of twelve nutrients at a minimal cost of 271 kyats/d (based on the exchange rate of 900 kyats/USD at the time of data collection: 3rd quarter of 2012), but inadequacies remained for niacin, folate, thiamin, Fe, Zn, Ca and vitamin B6. TIP showed that mothers believed liver and vegetables would cause worms and diarrhoea, but these beliefs could be overcome to successfully promote liver consumption. Therefore, an acceptable set of CFR were developed to improve the dietary practices of 12-23-month-old Myanmar children using locally available foods. Alternative interventions such as fortification, however, are still needed to ensure dietary adequacy of all nutrients. PMID:26696232
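The Optifood step described above rests on cost-minimizing optimization: find the cheapest mix of local foods that meets nutrient requirements. A minimal brute-force stand-in can illustrate the idea (the foods, costs, and requirements below are made up for illustration; this is not Optifood's model or the study's data, which use a full linear program over many nutrients):

```python
from itertools import product

# Hypothetical foods: (cost per serving, iron mg, calcium mg) -- illustrative only
foods = {
    "chicken_liver": (50, 9, 5),
    "anchovy":       (30, 3, 60),
    "roselle_leaf":  (10, 2, 40),
}
REQ_IRON, REQ_CALCIUM = 11, 100  # toy daily requirements

def cheapest_diet(max_servings=3):
    """Enumerate serving combinations and keep the cheapest feasible one."""
    best = None
    names = list(foods)
    for combo in product(range(max_servings + 1), repeat=len(names)):
        cost = sum(n * foods[f][0] for n, f in zip(combo, names))
        iron = sum(n * foods[f][1] for n, f in zip(combo, names))
        ca   = sum(n * foods[f][2] for n, f in zip(combo, names))
        if iron >= REQ_IRON and ca >= REQ_CALCIUM:
            if best is None or cost < best[0]:
                best = (cost, dict(zip(names, combo)))
    return best

print(cheapest_diet())
```

Real CFR tools solve this as a linear program with continuous serving ranges, food-pattern constraints, and a dozen or more nutrients; the exhaustive search here only conveys the objective-and-constraints structure.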
Linear System of Equations, Matrix Inversion, and Linear Programming Using MS Excel
ERIC Educational Resources Information Center
El-Gebeily, M.; Yushau, B.
2008-01-01
In this note, we demonstrate with illustrations two different ways that MS Excel can be used to solve Linear Systems of Equations, Linear Programming Problems, and Matrix Inversion Problems. The advantage of using MS Excel is its availability and transparency (the user is responsible for most of the details of how a problem is solved). Further, we…
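The transparency argument carries over to code: the linear-system task that Excel handles with MINVERSE/MMULT can be written out step by step. A short Gaussian-elimination sketch with partial pivoting (an illustration of the underlying method, not the article's worksheets):

```python
def solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    # Augmented matrix, copied so the inputs are left untouched
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        # Pivot: swap in the row with the largest entry in this column
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):  # back substitution
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

print(solve([[2, 1, -1], [-3, -1, 2], [-2, 1, 2]], [8, -11, -3]))
```

Every arithmetic step is visible, which is exactly the pedagogical point the note makes for spreadsheets.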
Stability of Linear Equations--Algebraic Approach
ERIC Educational Resources Information Center
Cherif, Chokri; Goldstein, Avraham; Prado, Lucio M. G.
2012-01-01
This article could be of interest to teachers of applied mathematics as well as to people who are interested in applications of linear algebra. We give a comprehensive study of linear systems from an application point of view. Specifically, we give an overview of linear systems and problems that can occur with the computed solution when the…
Numerical Linear Algebra On The CEDAR Multiprocessor
NASA Astrophysics Data System (ADS)
Meier, Ulrike; Sameh, Ahmed
1988-01-01
In this paper we describe in some detail the architectural features of the CEDAR multiprocessor. We also discuss strategies for implementation of dense matrix computations, and present performance results on one cluster for a variety of linear system solvers, eigenvalue problem solvers, as well as algorithms for solving linear least squares problems.
An Improved Linear Tetrahedral Element for Plasticity
Puso, M
2005-04-25
A stabilized, nodally integrated linear tetrahedral is formulated and analyzed. It is well known that linear tetrahedral elements perform poorly in problems with plasticity, nearly incompressible materials, and acute bending. For a variety of reasons, linear tetrahedral elements are preferable to quadratic tetrahedral elements in most nonlinear problems. Whereas mixed methods work well for linear hexahedral elements, they do not for linear tetrahedra. On the other hand, automatic mesh generation is typically not feasible for building many 3D hexahedral meshes. A stabilized, nodally integrated linear tetrahedral is developed and shown to perform very well in problems with plasticity, nearly incompressible materials and acute bending. Furthermore, the formulation is analytically and numerically shown to be stable and optimally convergent. The element is demonstrated to perform well in several standard linear and nonlinear benchmarks.
Linearization algorithms for line transfer
Scott, H.A.
1990-11-06
Complete linearization is a very powerful technique for solving multi-line transfer problems that can be used efficiently with a variety of transfer formalisms. The linearization algorithm we describe is computationally very similar to ETLA, but allows an effective treatment of strongly-interacting lines. This algorithm has been implemented (in several codes) with two different transfer formalisms in all three one-dimensional geometries. We also describe a variation of the algorithm that handles saturable laser transport. Finally, we present a combination of linearization with a local approximate operator formalism, which has been implemented in two dimensions and is being developed in three dimensions. 11 refs.
Linear-time transitive orientation
McConnell, R.M.; Spinrad, J.P.
1997-06-01
The transitive orientation problem is the problem of assigning a direction to each edge of a graph so that the resulting digraph is transitive. A graph is a comparability graph if such an assignment is possible. We describe an O(n + m) algorithm for the transitive orientation problem, where n and m are the number of vertices and edges of the graph; full details are given in. This gives linear time bounds for maximum clique and minimum vertex coloring on comparability graphs, recognition of two-dimensional partial orders, permutation graphs, cointerval graphs, and triangulated comparability graphs, and other combinatorial problems on comparability graphs and their complements.
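The defining condition is simple to state even though the O(n + m) algorithm is intricate: an orientation is transitive exactly when every directed chain a→b→c is shortcut by an edge a→c. A naive quadratic checker makes the condition concrete (for illustration only; this is not the paper's linear-time method):

```python
def is_transitive(edges):
    """Return True iff the directed edge set is transitively closed.
    The a != d guard skips 2-cycles, which a valid orientation never contains."""
    edges = set(edges)
    return all((a, d) in edges
               for a, b in edges
               for c, d in edges
               if b == c and a != d)

# Path graph 1-2-3: orienting both edges toward 2 gives no chain of length 2...
print(is_transitive({(1, 2), (3, 2)}))
# ...orienting them head-to-tail fails without the shortcut edge (1, 3)
print(is_transitive({(1, 2), (2, 3)}))
```

The achievement of the paper is doing the orientation itself, for all comparability graphs, in time linear in the graph size rather than by checking or repairing chains pairwise.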
Preconditioned quantum linear system algorithm.
Clader, B D; Jacobs, B C; Sprouse, C R
2013-06-21
We describe a quantum algorithm that generalizes the quantum linear system algorithm [Harrow et al., Phys. Rev. Lett. 103, 150502 (2009)] to arbitrary problem specifications. We develop a state preparation routine that can initialize generic states, show how simple ancilla measurements can be used to calculate many quantities of interest, and integrate a quantum-compatible preconditioner that greatly expands the number of problems that can achieve exponential speedup over classical linear systems solvers. To demonstrate the algorithm's applicability, we show how it can be used to compute the electromagnetic scattering cross section of an arbitrary target exponentially faster than the best classical algorithm. PMID:23829722
Linearized Kernel Dictionary Learning
NASA Astrophysics Data System (ADS)
Golts, Alona; Elad, Michael
2016-06-01
In this paper we present a new approach of incorporating kernels into dictionary learning. The kernel K-SVD algorithm (KKSVD), which has been introduced recently, shows an improvement in classification performance relative to its linear counterpart K-SVD. However, this algorithm requires the storage and handling of a very large kernel matrix, which leads to high computational cost, while also limiting its use to setups with a small number of training examples. We address these problems by combining two ideas: first, we approximate the kernel matrix using a cleverly sampled subset of its columns using the Nyström method; second, as we wish to avoid using this matrix altogether, we decompose it by SVD to form new "virtual samples," on which any linear dictionary learning can be employed. Our method, termed "Linearized Kernel Dictionary Learning" (LKDL), can be seamlessly applied as a pre-processing stage on top of any efficient off-the-shelf dictionary learning scheme, effectively "kernelizing" it. We demonstrate the effectiveness of our method on several tasks of both supervised and unsupervised classification and show the efficiency of the proposed scheme, its easy integration and performance-boosting properties.
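The column-sampling step can be sketched in a few lines. The Nyström approximation is K ≈ C W⁻¹ Cᵀ, where C holds the sampled columns of K and W is the submatrix at the sampled rows and columns. With a toy rank-2 linear kernel and two independent sampled columns the reconstruction is exact (the data here are made up; the paper's further SVD "virtual samples" step is omitted):

```python
def matmul(A, B):
    """Plain list-of-lists matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def nystrom(K, idx):
    """Approximate K from two sampled columns idx: K ~ C W^-1 C^T."""
    C = [[K[i][j] for j in idx] for i in range(len(K))]
    (a, b), (c, d) = [[K[i][j] for j in idx] for i in idx]  # W is 2x2 here
    det = a * d - b * c
    W_inv = [[d / det, -b / det], [-c / det, a / det]]
    C_T = [list(col) for col in zip(*C)]
    return matmul(matmul(C, W_inv), C_T)

X = [(1, 0), (0, 1), (1, 1), (2, 1)]   # 2-D features -> Gram matrix of rank 2
K = [[sum(u * v for u, v in zip(x, y)) for y in X] for x in X]
K_hat = nystrom(K, [0, 1])
```

For a full-rank kernel (e.g. an RBF kernel) the sampled columns only approximate K, and the quality depends on how the columns are chosen, which is where the "clever sampling" of the paper enters.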
NASA Astrophysics Data System (ADS)
Tanona, Scott Daniel
I develop a new analysis of Niels Bohr's Copenhagen interpretation of quantum mechanics by examining the development of his views from his earlier use of the correspondence principle in the so-called 'old quantum theory' to his articulation of the idea of complementarity in the context of the novel mathematical formalism of quantum mechanics. I argue that Bohr was motivated not by controversial and perhaps dispensable epistemological ideas---positivism or neo-Kantianism, for example---but by his own unique perspective on the difficulties of creating a new working physics of the internal structure of the atom. Bohr's use of the correspondence principle in the old quantum theory was associated with an empirical methodology that used this principle as an epistemological bridge to connect empirical phenomena with quantum models. The application of the correspondence principle required that one determine the validity of the idealizations and approximations necessary for the judicious use of classical physics within quantum theory. Bohr's interpretation of the new quantum mechanics then focused on the largely unexamined ways in which the developing abstract mathematical formalism is given empirical content by precisely this process of approximation. Significant consistency between his later interpretive framework and his forms of argument with the correspondence principle indicate that complementarity is best understood as a relationship among the various approximations and idealizations that must be made when one connects otherwise meaningless quantum mechanical symbols to empirical situations or 'experimental arrangements' described using concepts from classical physics. We discover that this relationship is unavoidable not through any sort of a priori analysis of the priority of classical concepts, but because quantum mechanics incorporates the correspondence approach in the way in which it represents quantum properties with matrices of transition probabilities, the
NASA Astrophysics Data System (ADS)
Liang, Yeong-Cherng; Spekkens, Robert W.; Wiseman, Howard M.
2011-09-01
In 1960, the mathematician Ernst Specker described a simple example of nonclassical correlations, the counter-intuitive features of which he dramatized using a parable about a seer, who sets an impossible prediction task to his daughter’s suitors. We revisit this example here, using it as an entrée to three central concepts in quantum foundations: contextuality, Bell-nonlocality, and complementarity. Specifically, we show that Specker’s parable offers a narrative thread that weaves together a large number of results, including the following: the impossibility of measurement-noncontextual and outcome-deterministic ontological models of quantum theory (the 1967 Kochen-Specker theorem), in particular, the recent state-specific pentagram proof of Klyachko; the impossibility of Bell-local models of quantum theory (Bell’s theorem), especially the proofs by Mermin and Hardy and extensions thereof; the impossibility of a preparation-noncontextual ontological model of quantum theory; the existence of triples of positive operator valued measures (POVMs) that can be measured jointly pairwise but not triplewise. Along the way, several novel results are presented: a generalization of a theorem by Fine connecting the existence of a joint distribution over outcomes of counterfactual measurements to the existence of a measurement-noncontextual and outcome-deterministic ontological model; a generalization of Klyachko’s proof of the Kochen-Specker theorem from pentagrams to a family of star polygons; a proof of the Kochen-Specker theorem in the style of Hardy’s proof of Bell’s theorem (i.e., one that makes use of the failure of the transitivity of implication for counterfactual statements); a categorization of contextual and Bell-nonlocal correlations in terms of frustrated networks; a derivation of a new inequality testing preparation noncontextuality; some novel results on the joint measurability of POVMs and the question of whether these can be modeled
Rong, Zi-Qiang; Wang, Min; Chow, Chi Hao Eugene; Zhao, Yu
2016-07-01
Highly efficient and diastereodivergent aza-Diels-Alder reactions have been developed to access either diastereomeric series of benzofuran-fused δ-lactams and dihydropyridines in nearly perfect stereoselectivity (d.r. >20:1, >99 % ee for all examples). The complementarity of N-heterocyclic carbene and chiral amine as the catalyst was demonstrated for the first time, together with an excellent level of catalytic efficiency (1 mol % loading). PMID:27219298
Design of Linear Quadratic Regulators and Kalman Filters
NASA Technical Reports Server (NTRS)
Lehtinen, B.; Geyser, L.
1986-01-01
AESOP solves problems associated with design of controls and state estimators for linear time-invariant systems. Systems considered are modeled in state-variable form by set of linear differential and algebraic equations with constant coefficients. Two key problems solved by AESOP are linear quadratic regulator (LQR) design problem and steady-state Kalman filter design problem. AESOP is interactive. User solves design problems and analyzes solutions in single interactive session. Both numerical and graphical information available to user during the session.
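In the scalar case the LQR problem AESOP addresses has a closed form: the continuous-time algebraic Riccati equation 2aP - P^2 b^2/r + q = 0 yields P, and the optimal gain is K = bP/r. A hedged one-dimensional sketch (illustrative only; AESOP itself is an interactive package, and the variable names here are mine):

```python
import math

def scalar_lqr(a, b, q, r):
    """Solve the scalar continuous-time Riccati equation; return (P, K).
    System: x' = a x + b u, cost integral of (q x^2 + r u^2) dt."""
    # Positive root of P^2 b^2/r - 2 a P - q = 0
    P = (r / b**2) * (a + math.sqrt(a**2 + b**2 * q / r))
    K = b * P / r  # optimal state feedback u = -K x
    return P, K

P, K = scalar_lqr(a=1.0, b=1.0, q=1.0, r=1.0)
# Riccati residual should vanish and the closed-loop pole a - bK must be stable
assert abs(2 * 1.0 * P - P**2 + 1.0) < 1e-9
print(P, 1.0 - K)
```

In the matrix case solved by tools like AESOP, the same equation becomes AᵀP + PA - PBR⁻¹BᵀP + Q = 0 and requires a numerical Riccati solver rather than a quadratic formula.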
Pánek, Josef; Kolář, Michal; Herrmannová, Anna; Valášek, Leoš Shivaya
2016-07-01
Nucleic acid sequence complementarity underlies many fundamental biological processes. Although first noticed a long time ago, sequence complementarity between mRNAs and ribosomal RNAs still lacks a meaningful biological interpretation. Here we used statistical analysis of large-scale sequence data sets and high-throughput computing to explore complementarity between 18S and 28S rRNAs and mRNA 3' UTR sequences. By the analysis of 27,646 full-length 3' UTR sequences from 14 species covering both protozoans and metazoans, we show that the computed 18S rRNA complementarity creates an evolutionarily conserved localization pattern centered around the ribosomal mRNA entry channel, suggesting its biological relevance and functionality. Based on this specific pattern and earlier data showing that post-termination 80S ribosomes are not stably anchored at the stop codon and can migrate in both directions to codons that are cognate to the P-site deacylated tRNA, we propose that the 18S rRNA-mRNA complementarity selectively stabilizes post-termination ribosomal complexes to facilitate ribosome recycling. We thus demonstrate that the complementarity between 18S rRNA and 3' UTRs has a non-random nature and very likely carries information with a regulatory potential for translational control. PMID:27190231
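The elementary computation behind such scans is a sliding-window count of Watson-Crick pairs between an rRNA fragment and an antiparallel mRNA window. A toy scorer shows the mechanics (the sequences are hypothetical; the study's statistical pipeline over 27,646 3' UTRs is far more involved):

```python
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def best_duplex_score(query, target):
    """Best count of Watson-Crick pairs between query (5'->3') and any
    antiparallel window of target of the same length."""
    n = len(query)
    windows = (target[i:i + n] for i in range(len(target) - n + 1))
    return max(
        sum(COMPLEMENT[q] == t for q, t in zip(query, reversed(window)))
        for window in windows
    )

# "GAUC" is its own reverse complement, so it pairs perfectly with
# the "GAUC" window of the target read 3'->5'
print(best_duplex_score("GAUC", "AAGAUCAA"))
```

Assessing whether such scores are non-random, as the authors do, requires comparing them against a background distribution (e.g. shuffled sequences), not just computing the maxima.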
NASA Astrophysics Data System (ADS)
Borga, Marco; Baptiste, François; Zoccatelli, Davide
2016-04-01
High penetration of climate related energy sources (such as solar and small hydropower) might be facilitated by using their complementarity in order to increase the balance between energy load and generation. In this study we examine and map the complementarity between solar PV and run-of-the-river energy along the river network of catchments in the Eastern Italian Alps which are significantly affected by glaciers. We analyze energy sources complementarity across different temporal scales using two indicators: the standard deviation of the energy balance and the theoretical storage required for balancing generation and load (François et al., 2016). Temporal scales ranging from hours to years are assessed. By using a glacio-hydrological model able to simulate both the glacier and hydrology dynamics, we analyse the sensitivity of the obtained results with respect to different scenarios of glacier retreat. Reference: François, B., Hingray, B., Raynaud, D., Borga, M., Creutin, J.D., 2016: Increasing climate-related-energy penetration by integrating run-of-the river hydropower to wind/solar mix. Renewable Energy, 87, 686-696.
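Both indicators named above reduce to simple operations on the generation-minus-load time series: its spread, and the range of its running sum (a common simplification of a required-storage indicator; the published definition may differ in detail, and the hourly series below is made up):

```python
from itertools import accumulate
from statistics import pstdev

def balance_indicators(generation, load):
    """Return (theoretical storage, std of the energy balance).
    Storage = range of the cumulative generation-minus-load balance,
    anchored at an empty-store level of 0."""
    net = [g - l for g, l in zip(generation, load)]
    cum = list(accumulate(net))
    storage = max(cum + [0.0]) - min(cum + [0.0])
    return storage, pstdev(net)

# Toy day: solar-like generation peaking at midday against a flat 2-unit load
storage, balance_std = balance_indicators([0, 0, 3, 5, 3, 0], [2] * 6)
print(storage)
```

Repeating this over windows of different lengths (hours to years, as in the study) shows at which time scales two sources actually complement each other.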
Quantum subsystems: Exploring the complementarity of quantum privacy and error correction
NASA Astrophysics Data System (ADS)
Jochym-O'Connor, Tomas; Kribs, David W.; Laflamme, Raymond; Plosker, Sarah
2014-09-01
This paper addresses and expands on the contents of the recent Letter [Phys. Rev. Lett. 111, 030502 (2013), 10.1103/PhysRevLett.111.030502] discussing private quantum subsystems. Here we prove several previously presented results, including a condition for a given random unitary channel to not have a private subspace (although this does not mean that private communication cannot occur, as was previously demonstrated via private subsystems) and algebraic conditions that characterize when a general quantum subsystem or subspace code is private for a quantum channel. These conditions can be regarded as the private analog of the Knill-Laflamme conditions for quantum error correction, and we explore how the conditions simplify in some special cases. The bridge between quantum cryptography and quantum error correction provided by complementary quantum channels motivates the study of a new, more general definition of quantum error-correcting code, and we initiate this study here. We also consider the concept of complementarity for the general notion of a private quantum subsystem.
Low and high energy phenomenology of quark-lepton complementarity scenarios
Hochmuth, Kathrin A.; Rodejohann, Werner
2007-04-01
We conduct a detailed analysis of the phenomenology of two predictive seesaw scenarios leading to quark-lepton complementarity. In both cases we discuss the neutrino mixing observables and their correlations, neutrinoless double beta decay and lepton flavor violating decays such as μ→eγ. We also comment on leptogenesis. The first scenario is disfavored on the level of one to two standard deviations, in particular, due to its prediction for |U_e3|. There can be resonant leptogenesis with quasidegenerate heavy and light neutrinos, which would imply sizable cancellations in neutrinoless double beta decay. The decays μ→eγ and τ→μγ are typically observable unless the SUSY masses approach the TeV scale. In the second scenario leptogenesis is impossible. It is, however, in perfect agreement with all oscillation data. The prediction for μ→eγ is in general too large, unless the SUSY masses are in the range of several TeV. In this case τ→eγ and τ→μγ are unobservable.
López-Madrigal, Sergio; Beltrà, Aleixandre; Resurrección, Serena; Soto, Antonia; Latorre, Amparo; Moya, Andrés; Gil, Rosario
2014-01-01
Intracellular bacterial supply of essential amino acids is common among sap-feeding insects, thus complementing the scarcity of nitrogenous compounds in plant phloem. This is also the role of the two mealybug endosymbiotic systems whose genomes have been sequenced. In the nested endosymbiotic system from Planococcus citri (Pseudococcinae), “Candidatus Tremblaya princeps” and “Candidatus Moranella endobia” cooperate to synthesize essential amino acids, while in Phenacoccus avenae (Phenacoccinae) this function is performed by its single endosymbiont “Candidatus Tremblaya phenacola.” However, little is known regarding the evolution of essential amino acid supplementation strategies in other mealybug systems. To address this knowledge gap, we screened for the presence of six selected loci involved in essential amino acid biosynthesis in five additional mealybug species. We found evidence of ongoing complementarity among endosymbionts from insects of subfamily Pseudococcinae, as well as horizontal gene transfer affecting endosymbionts from insects of family Phenacoccinae, providing a more comprehensive picture of the evolutionary history of these endosymbiotic systems. Additionally, we report two diagnostic motifs to help identify invasive mealybug species. PMID:25206351
PHYSICS OF PREDETERMINED EVENTS: Complementarity States of Choice-Chance Mechanics
NASA Astrophysics Data System (ADS)
Morales, Manuel
2011-04-01
We find that the deterministic application of choice-chance mechanics, as applied in the Tempt Destiny experiment, is also reflected in the construct of the double-slit experiment and that the complementary results obtained by this treatment mirror that of Niels Bohr's principle of complementarity as well as reveal Einstein's hidden variables. Whereas the double-slit experiment serves to reveal the deterministic and indeterministic behavioral characteristics of our physical world, the Tempt Destiny experiment serves to reveal the deterministic and indeterministic behavioral characteristics of our actions. The unifying factor shared by both experiments is that they are of the same construct yielding similar results from the same energy. Given that, we seek to establish if the fundamental states of energy, i.e., certainty and probability, are indeed predetermined. Over the span of ten years, the Tempt Destiny experimental model of pairing choice and chance events has statistically obtained consistent results of absolute value. The evidence clearly infers that the fundamental mechanics of energy is a complement of two mutually exclusive mechanisms that bring into being - as opposed to revealing - the predetermined state of an event as either certain or probable, although not both simultaneously.
Volatile fractionation in the early solar system and chondrule/matrix complementarity.
Bland, Philip A; Alard, Olivier; Benedix, Gretchen K; Kearsley, Anton T; Menzies, Olwyn N; Watt, Lauren E; Rogers, Nick W
2005-09-27
Bulk chondritic meteorites and terrestrial planets show a monotonic depletion in moderately volatile and volatile elements relative to the Sun's photosphere and CI carbonaceous chondrites. Although volatile depletion was the most fundamental chemical process affecting the inner solar nebula, debate continues as to its cause. Carbonaceous chondrites are the most primitive rocks available to us, and fine-grained, volatile-rich matrix is the most primitive component in these rocks. Several volatile depletion models posit a pristine matrix, with uniform CI-like chemistry across the different chondrite groups. To understand the nature of volatile fractionation, we studied minor and trace element abundances in fine-grained matrices of a variety of carbonaceous chondrites. We find that matrix trace element abundances are characteristic for a given chondrite group; they are depleted relative to CI chondrites, but are enriched relative to bulk compositions of their parent meteorites, particularly in volatile siderophile and chalcophile elements. This enrichment produces a highly nonmonotonic trace element pattern that requires a complementary depletion in chondrule compositions to achieve a monotonic bulk. We infer that carbonaceous chondrite matrices are not pristine: they formed from a material reservoir that was already depleted in volatile and moderately volatile elements. Additional thermal processing occurred during chondrule formation, with exchange of volatile siderophile and chalcophile elements between chondrules and matrix. This chemical complementarity shows that these chondritic components formed in the same nebula region. PMID:16174733
The Space Infrared Interferometric Telescope (SPIRIT) and its Complementarity to ALMA
NASA Technical Reports Server (NTRS)
Leisawitz, Dave
2007-01-01
We report results of a pre-Formulation Phase study of SPIRIT, a candidate NASA Origins Probe mission. SPIRIT is a spatial and spectral interferometer with an operating wavelength range 25 - 400 microns. SPIRIT will provide sub-arcsecond resolution images and spectra with resolution R = 3000 in a 1 arcmin field of view to accomplish three primary scientific objectives: (1) Learn how planetary systems form from protostellar disks, and how they acquire their chemical organization; (2) Characterize the family of extrasolar planetary systems by imaging the structure in debris disks to understand how and where planets of different types form; and (3) Learn how high-redshift galaxies formed and merged to form the present-day population of galaxies. In each of these science domains, SPIRIT will yield information complementary to that obtainable with the James Webb Space Telescope (JWST) and the Atacama Large Millimeter Array (ALMA), and all three observatories could operate contemporaneously. Here we shall emphasize the SPIRIT science goals (1) and (2) and the mission's complementarity with ALMA.
NASA Astrophysics Data System (ADS)
Bruns, D.; Sperling, J.; Scheel, S.
2016-03-01
Modern applications in quantum computation and quantum communication require the precise characterization of quantum states and quantum channels. In practice, this means that one has to determine the quantum capacity of a physical system in terms of measurable quantities. Witnesses, if properly constructed, succeed in performing this task. We derive a method that is capable of computing witnesses for identifying deterministic evolutions and measurement-induced collapse processes. At the same time, applying the Choi-Jamiołkowski isomorphism, it uncovers the entanglement characteristics of bipartite quantum states. Remarkably, a statistical mixture of unitary evolutions is mapped onto mixtures of maximally entangled states, and classical separable states originate from genuine quantum-state reduction maps. Based on our treatment, we are able to witness these opposing attributes at once and, furthermore, obtain an insight into their different geometric structures. The complementarity is further underpinned by formulating a complementary Schmidt decomposition of a state in terms of maximally entangled states and discrete Fourier-transformed Schmidt coefficients.
Quark-lepton complementarity predictions for θ23^PMNS and CP violation
NASA Astrophysics Data System (ADS)
Sharma, Gazal; Chauhan, B. C.
2016-07-01
In the light of recent experimental results on θ13^PMNS, we re-investigate the complementarity between the quark and lepton mixing matrices and obtain predictions for the most unsettled neutrino mixing parameters like θ23^PMNS and the CP-violating phase invariants J, S1 and S2. This paper is motivated by our previous work where in a QLC model we predicted the value θ13^PMNS = (9 +1/−2)°, which was found to be in strong agreement with the experimental results. In the QLC model the non-trivial correlation between the CKM and PMNS mixing matrices is given by a correlation matrix (Vc). We do numerical simulation and estimate the texture of Vc, and in our findings we get a small deviation from the Tri-Bi-Maximal (TBM) texture and a large one from the Bi-Maximal texture, which is consistent with the work already reported in the literature. In the further investigation we obtain the quite constrained limit sin²θ23^PMNS = 0.4235 +0.0032/−0.0043, narrower than the existing ones. We also obtain constrained limits for the three CP-violating phase invariants J, S1 and S2: J < 0.0315, S1 < 0.12 and S2 < 0.08, respectively.
Zhu, Dan H.; Wang, Ping; Zhang, Wei Z.; Yuan, Yue; Li, Bin; Wang, Jiang
2015-01-01
Background Although plant diversity is postulated to resist invasion, studies have not provided consistent results, most of which were ascribed to the influences of other covariate environmental factors. Methodology/Principal Findings To explore the mechanisms by which plant diversity influences community invasibility, an experiment was conducted involving grassland sites varying in their species richness (one, two, four, eight, and sixteen species). Light interception efficiency and soil resources (total N, total P, and water content) were measured. The number of species, biomass, and the number of seedlings of the invading species decreased significantly with species richness. The presence of Patrinia scabiosaefolia Fisch. ex Trev. and Mosla dianthera (Buch.-Ham. ex Roxburgh) Maxim. significantly increased the resistance of the communities to invasion. A structural equation model showed that the richness of planted species had no direct and significant effect on invasion. Light interception efficiency had a negative effect on the invasion whereas soil water content had a positive effect. In monocultures, Antenoron filiforme (Thunb.) Rob. et Vaut. showed the highest light interception efficiency and P. scabiosaefolia recorded the lowest soil water content. With increased planted-species richness, a greater percentage of pots showed light use efficiency higher than that of A. filiforme and a lower soil water content than that in P. scabiosaefolia. Conclusions/Significance The results of this study suggest that plant diversity confers resistance to invasion, which is mainly ascribed to the sampling effect of particular species and the complementarity effect among species on resources use. PMID:26556713
Carroll, Linda J.; Rothe, J. Peter
2010-01-01
Like other areas of health research, there has been increasing use of qualitative methods to study public health problems such as injuries and injury prevention. Likewise, the integration of qualitative and quantitative research (mixed methods) is beginning to assume a more prominent role in public health studies. Using mixed methods has great potential for gaining a broad and comprehensive understanding of injuries and their prevention. However, qualitative and quantitative research methods are based on two inherently different paradigms, and their integration requires a conceptual framework that permits the unity of these two methods. We present a theory-driven framework for viewing qualitative and quantitative research, which enables us to integrate them in a conceptually sound and useful manner. This framework has its foundation within the philosophical concept of complementarity, as espoused in the physical and social sciences, and draws on Bergson’s metaphysical work on the ‘ways of knowing’. Through understanding how data are constructed and reconstructed, and the different levels of meaning that can be ascribed to qualitative and quantitative findings, we can use a mixed-methods approach to gain a conceptually sound, holistic knowledge about injury phenomena that will enhance our development of relevant and successful interventions. PMID:20948937
Complementarity of ResourceSat-1 AWiFS and Landsat TM/ETM+ sensors
Goward, S.N.; Chander, G.; Pagnutti, M.; Marx, A.; Ryan, R.; Thomas, N.; Tetrault, R.
2012-01-01
Considerable interest has been given to forming an international collaboration to develop a virtual moderate spatial resolution land observation constellation through aggregation of data sets from comparable national observatories such as the US Landsat, the Indian ResourceSat and related systems. This study explores the complementarity of India's ResourceSat-1 Advanced Wide Field Sensor (AWiFS) with the Landsat 5 Thematic Mapper (TM) and Landsat 7 Enhanced Thematic Mapper Plus (ETM+). The analysis focuses on the comparative radiometry, geometry, and spectral properties of the two sensors. Two applied assessments of these data are also explored to examine the strengths and limitations of these alternate sources of moderate resolution land imagery with specific application domains. There are significant technical differences in these imaging systems including spectral band response, pixel dimensions, swath width, and radiometric resolution which produce differences in observation data sets. None of these differences was found to strongly limit comparable analyses in agricultural and forestry applications. Overall, we found that the AWiFS and Landsat TM/ETM+ imagery are comparable and in some ways complementary, particularly with respect to temporal repeat frequency. We have found that there are limits to our understanding of the AWiFS performance, for example, multi-camera design and stability of radiometric calibration over time, that leave some uncertainty that has been better addressed for Landsat through the Image Assessment System and related cross-sensor calibration studies. Such work still needs to be undertaken for AWiFS and similar observatories that may play roles in the Global Earth Observation System of Systems Land Surface Imaging Constellation.
Probing the Complementarity of FAIMS and Strong Cation Exchange Chromatography in Shotgun Proteomics
NASA Astrophysics Data System (ADS)
Creese, Andrew J.; Shimwell, Neil J.; Larkins, Katherine P. B.; Heath, John K.; Cooper, Helen J.
2013-03-01
High field asymmetric waveform ion mobility spectrometry (FAIMS), also known as differential ion mobility spectrometry, coupled with liquid chromatography tandem mass spectrometry (LC-MS/MS) offers benefits for the analysis of complex proteomics samples. Advantages include increased dynamic range, increased signal-to-noise ratio, and reduced interference from ions of similar m/z. FAIMS also separates isomers and positional variants. An alternative, and more established, method of reducing sample complexity is prefractionation by use of strong cation exchange chromatography (SCX). Here, we have compared SCX-LC-MS/MS with LC-FAIMS-MS/MS for the identification of peptides and proteins from whole cell lysates from the breast carcinoma SUM52 cell line. Two FAIMS approaches are considered: (1) multiple compensation voltages within a single LC-MS/MS analysis (internal stepping) and (2) repeat LC-MS/MS analyses at different and fixed compensation voltages (external stepping). We also consider the consequence of the fragmentation method (electron transfer dissociation, ETD, or collision-induced dissociation, CID) on the workflow performance. The external stepping approach resulted in a greater number of protein and peptide identifications than the internal stepping approach for both ETD and CID MS/MS, suggesting that this should be the method of choice for FAIMS proteomics experiments. The overlap in protein identifications from the SCX method and the external FAIMS method was ~25% for both ETD and CID, and for peptides was less than 20%. The lack of overlap between FAIMS and SCX highlights the complementarity of the two techniques. Charge state analysis of the peptide assignments showed that the FAIMS approach identified a much greater proportion of triply charged ions.
Shining Light on Benthic Macroalgae: Mechanisms of Complementarity in Layered Macroalgal Assemblages
Tait, Leigh W.; Hawes, Ian; Schiel, David R.
2014-01-01
Phototrophs underpin most ecosystem processes, but to do this they need sufficient light. This critical resource, however, is compromised along many marine shores by increased loads of sediments and nutrients from degraded inland habitats. Increased attenuation of total irradiance within coastal water columns due to turbidity is known to reduce species' depth limits and affect the taxonomic structure and architecture of algal-dominated assemblages, but virtually no attention has been paid to the potential for changes in the spectral quality of light energy to impact production dynamics. Pioneering studies over 70 years ago showed how the different pigmentation of red, green and brown algae affected absorption spectra, action spectra, and photosynthetic efficiency across the PAR (photosynthetically active radiation) spectrum. Little of this, however, has found its way into ecological syntheses of the impacts of optically active contaminants on coastal macroalgal communities. Here we test the ability of macroalgal assemblages composed of multiple functional groups (including representatives from the chlorophyta, rhodophyta and phaeophyta) to use the total light resource, including different light wavelengths, and examine the effects of suspended sediments on the penetration and spectral quality of light in coastal waters. We show that assemblages composed of multiple functional groups are better able to use light throughout the PAR spectrum. Macroalgal assemblages with four sub-canopy species were 50–75% more productive than assemblages with only one or two sub-canopy species. Furthermore, attenuation of the PAR spectrum showed both a loss of quanta and a shift in spectral distribution with depth across coastal waters of different clarity, with consequences for the productivity dynamics of diverse layered assemblages. The processes of light complementarity may help provide a mechanistic understanding of how altered turbidity affects macroalgal assemblages in coastal waters.
Greene, Stephanie L.; Kisha, Theodore J.; Yu, Long-Xi; Parra-Quijano, Mauricio
2014-01-01
A standard conservation strategy for plant genetic resources integrates in situ (on-farm or wild) and ex situ (gene or field bank) approaches. Gene bank managers collect ex situ accessions that represent a comprehensive snapshot of the genetic diversity of in situ populations at a given time and place. Although simple in theory, achieving complementary in situ and ex situ holdings is challenging. Using Trifolium thompsonii as a model insect-pollinated herbaceous perennial species, we used AFLP markers to compare genetic diversity and structure of ex situ accessions collected at two time periods (1995, 2004) from four locations, with their corresponding in situ populations sampled in 2009. Our goal was to assess the complementarity of the two approaches. We examined how gene flow, selection and genetic drift contributed to population change. Across locations, we found no difference in diversity between ex situ and in situ samples. One population showed a decline in genetic diversity over the 15 years studied. Population genetic differentiation among the four locations was significant, but weak. Association tests suggested infrequent, long distance gene flow. Selection and drift occurred, but differences due to spatial effects were three times as strong as differences attributed to temporal effects, and suggested recollection efforts could occur at intervals greater than fifteen years. An effective collecting strategy for insect pollinated herbaceous perennial species was to sample >150 plants, equalize maternal contribution, and sample along random transects with sufficient space between plants to minimize intrafamilial sampling. Quantifying genetic change between ex situ and in situ accessions allows genetic resource managers to validate ex situ collecting and maintenance protocols, develop appropriate recollection intervals, and provide an early detection mechanism for identifying problematic conditions that can be addressed to prevent further decline in
The generalized pole assignment problem. [dynamic output feedback problems
NASA Technical Reports Server (NTRS)
Djaferis, T. E.; Mitter, S. K.
1979-01-01
Two dynamic output feedback problems for a linear, strictly proper system are considered, along with their interrelationships. The problems are formulated in the frequency domain and investigated in terms of linear equations over rings of polynomials. Necessary and sufficient conditions are expressed using genericity.
ERIC Educational Resources Information Center
Demana, Franklin; Waits, Bert K.
1993-01-01
Discusses solutions to real-world linear particle-motion problems using graphing calculators to simulate the motion and traditional analytic methods of calculus. Applications include (1) changing circular or curvilinear motion into linear motion and (2) linear particle accelerators in physics. (MDH)
Boyce, Mark; McCrae, Malcom A; Boyce, Paul; Kim, Jan T
2016-05-01
The process by which eukaryotic viruses with segmented genomes select a complete set of genome segments for packaging into progeny virus particles is not understood. In this study a model based on the association of genome segments through specific RNA-RNA interactions driven by base pairing was formalized and tested in the Orbivirus genus of the Reoviridae family. A strategy combining screening of the genomic sequences for inter-segment complementarity with direct functional testing of inter-segment RNA-RNA interactions using reverse genetics is described in the type species of the Orbivirus genus, Bluetongue virus (BTV). Two examples, involving four of the ten BTV genomic segments, of specific inter-segment interaction motifs whose maintenance is essential for the generation of infectious virus, were identified. Equivalent inter-segment complementarities were found between the identified regions of the orthologous genome segments of all orbiviruses, including phylogenetically distant species. Specific interaction of the participating RNA segments was confirmed in vitro using electrophoretic mobility shift assays, with the interactions inhibited using oligonucleotides complementary to the interaction motif of one of the interacting partners, and also through mutagenesis of the motifs. In each example, the base pairing rather than the absolute sequence was critical to the formation of a functional inter-segment interaction, with mutations only being tolerated in rescued virus if compensating changes were made in the interacting partner to restore uninterrupted base pairing. The absolute sequence of the complementarity motifs varied between species, indicating that this newly identified phenomenon may contribute to the observed lack of reassortment between Orbivirus species. PMID:26763979
Linear collisionless Landau damping in Hilbert space
NASA Astrophysics Data System (ADS)
Zocco, Alessandro
2015-08-01
The equivalence between the Laplace transform (Landau, J. Phys. USSR 10 (1946), 25) and Hermite transform (Zocco and Schekochihin, Phys. Plasmas 18, 102309 (2011)) solutions of the linear collisionless Landau damping problem is proven.
NASA Astrophysics Data System (ADS)
Rizal, Syamsul
2000-11-01
Numerical experiments were done with the tides in the Malacca Strait using a three-dimensional model based on finite differences and a semi-implicit numerical scheme. The numerical experiments were carried out as follows: first, the discretized shallow-water equations were solved without the non-linear terms; second, the case with non-linear terms was run. Comparing the results, we show that the non-linear terms play a dominant role in the Malacca Strait. The results also suggest that one must be careful when ignoring these terms in order to maintain the stability of the model. Kelvin wave propagation using the analytical model is also discussed. It is found that the pattern of the M2 amplitude lines is greatly influenced by the small value of the Coriolis parameter in the Malacca Strait (~3° latitude), while the pattern of the M2 co-tidal lines is controlled by the bottom friction parameter. It is also proposed to calculate the energy balance directly using the ratio of the reflected and incident Kelvin waves at the open boundary of the analytical model. From this direct calculation, the displacement of the amphidrome can be determined exactly. It can then be concluded that the Malacca Strait actually has a virtual amphidromic point, located roughly 2097 km northeast of the middle of the Strait. The total loss of energy due to bottom friction calculated by the analytical model coincides well with that calculated by the numerical model in the Malacca Strait.
LRGS: Linear Regression by Gibbs Sampling
NASA Astrophysics Data System (ADS)
Mantz, Adam B.
2016-02-01
LRGS (Linear Regression by Gibbs Sampling) implements a Gibbs sampler to solve the problem of multivariate linear regression with uncertainties in all measured quantities and intrinsic scatter. LRGS extends an algorithm by Kelly (2007) that used Gibbs sampling for performing linear regression in fairly general cases in two ways: generalizing the procedure for multiple response variables, and modeling the prior distribution of covariates using a Dirichlet process.
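As an illustration of the kind of Gibbs-sampling regression LRGS performs, here is a minimal sketch for the simplest case (one covariate, flat priors, no measurement errors or intrinsic-scatter modeling, and synthetic data of our own; the actual LRGS algorithm is considerably more general):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = 2 + 3x + noise (hypothetical example, not LRGS test data)
n = 200
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = X @ np.array([2.0, 3.0]) + rng.normal(scale=0.5, size=n)

# Gibbs sampler for Bayesian linear regression with flat priors:
#   beta  | sigma2, y ~ Normal(beta_hat, sigma2 * (X'X)^-1)
#   sigma2 | beta,  y ~ Inverse-Gamma(n/2, RSS/2)
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y

sigma2 = 1.0
samples = []
for _ in range(2000):
    beta = rng.multivariate_normal(beta_hat, sigma2 * XtX_inv)
    rss = np.sum((y - X @ beta) ** 2)
    sigma2 = 1.0 / rng.gamma(n / 2.0, 2.0 / rss)   # inverse-gamma draw
    samples.append(beta)

samples = np.array(samples)[500:]   # discard burn-in
print(samples.mean(axis=0))         # posterior means, near the true (2, 3)
```

The sampler alternates draws from the two conditional distributions; Kelly's and LRGS's extensions replace these simple conditionals with ones that account for measurement errors and scatter.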
NASA Astrophysics Data System (ADS)
Geib, Tanja; King, Stephen F.; Merle, Alexander; No, Jose Miguel; Panizzi, Luca
2016-04-01
We discuss how the intensity and the energy frontiers provide complementary constraints within a minimal model of neutrino mass involving just one new field beyond the Standard Model at accessible energy, namely a doubly charged scalar S++ and its antiparticle S-- . In particular, we focus on the complementarity between high-energy LHC searches and low-energy probes such as lepton flavor violation. Our setting is a prime example of how high- and low-energy physics can cross-fertilize each other.
Arrenberg, Sebastian; et al.,
2013-10-31
In this Report we discuss the four complementary searches for the identity of dark matter: direct detection experiments that look for dark matter interacting in the lab, indirect detection experiments that connect lab signals to dark matter in our own and other galaxies, collider experiments that elucidate the particle properties of dark matter, and astrophysical probes sensitive to non-gravitational interactions of dark matter. The complementarity among the different dark matter searches is discussed qualitatively and illustrated quantitatively in several theoretical scenarios. Our primary conclusion is that the diversity of possible dark matter candidates requires a balanced program based on all four of those approaches.
Systems of Linear Equations on a Spreadsheet.
ERIC Educational Resources Information Center
Bosch, William W.; Strickland, Jeff
1998-01-01
The Optimizer in Quattro Pro and the Solver in Excel software programs make solving linear and nonlinear optimization problems feasible for business mathematics students. Proposes ways in which the Optimizer or Solver can be coaxed into solving systems of linear equations. (ASK)
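The "coaxing" described above amounts to posing Ax = b as an optimization with a trivial objective; a sketch of the same trick outside the spreadsheet, using SciPy's LP solver on a hypothetical 2x2 system (not one from the article):

```python
import numpy as np
from scipy.optimize import linprog

# Solver/Optimizer trick: solve A x = b by minimizing a dummy (zero)
# objective subject to the equations as equality constraints.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

res = linprog(c=[0, 0], A_eq=A, b_eq=b, bounds=[(None, None)] * 2)
print(res.x)  # the unique solution of the system, x = 1, y = 3
```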
Equating Scores from Adaptive to Linear Tests
ERIC Educational Resources Information Center
van der Linden, Wim J.
2006-01-01
Two local methods for observed-score equating are applied to the problem of equating an adaptive test to a linear test. In an empirical study, the methods were evaluated against a method based on the test characteristic function (TCF) of the linear test and traditional equipercentile equating applied to the ability estimates on the adaptive test…
Long term fuel scheduling linear programming
Asgarpoor, S. (Dept. of Electrical Engineering); Gul, N.
1992-01-01
This paper presents an application of linear programming (LP) revised simplex method in order to solve the fuel scheduling problem. A regression method is applied to determine the polynomial cost curves, and a separable programming technique is used to linearize the objective function and the constraints for LP application. Results based on sample data obtained from Omaha Public Power District (OPPD) are presented to demonstrate the LP application to this problem.
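A toy version of such a fuel-scheduling LP (invented numbers, not OPPD data, and with the cost curves already linearized to a single segment per fuel):

```python
import numpy as np
from scipy.optimize import linprog

# Two fuels, linear cost per MWh after piecewise linearization,
# one demand constraint and per-fuel capacity limits (hypothetical data).
cost = [3.0, 5.0]              # $/MWh for fuel 1 and fuel 2
demand = 100.0                 # MWh that must be supplied
caps = [(0, 70), (0, 80)]      # capacity bounds for each fuel

res = linprog(c=cost, A_eq=[[1.0, 1.0]], b_eq=[demand], bounds=caps)
print(res.x, res.fun)  # cheap fuel at its cap (70), 30 from fuel 2, cost 360
```

A realistic schedule adds one variable per fuel, segment, and period, exactly as separable programming requires, but the LP structure is the same.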
Zabetakis, Dan; Anderson, George P.; Bayya, Nikhil; Goldman, Ellen R.
2013-01-01
Single domain antibodies (sdAbs) are the recombinantly-expressed variable domain from camelid (or shark) heavy chain only antibodies and provide rugged recognition elements. Many sdAbs possess excellent affinity and specificity; most refold and are able to bind antigen after thermal denaturation. The sdAb A3, specific for the toxin Staphylococcal enterotoxin B (SEB), shows both sub-nanomolar affinity for its cognate antigen (0.14 nM) and an unusually high melting point of 85°C. Understanding the source of sdAb A3’s high melting temperature could provide a route for engineering improved melting temperatures into other sdAbs. The goal of this work was to determine how much of sdAb A3’s stability is derived from its complementarity determining regions (CDRs) versus its framework. Towards answering this question we constructed a series of CDR swap mutants in which the CDRs from unrelated sdAbs were integrated into A3’s framework and where A3’s CDRs were integrated into the framework of the other sdAbs. All three CDRs from A3 were moved to the frameworks of sdAb D1 (a ricin binder that melts at 50°C) and the anti-ricin sdAb C8 (melting point of 60°C). Similarly, the CDRs from sdAb D1 and sdAb C8 were moved to the sdAb A3 framework. In addition individual CDRs of sdAb A3 and sdAb D1 were swapped. Melting temperature and binding ability were assessed for each of the CDR-exchange mutants. This work showed that CDR2 plays a critical role in sdAb A3’s binding and stability. Overall, results from the CDR swaps indicate CDR interactions play a major role in the protein stability. PMID:24143255
The principles and construction of linear colliders
Rees, J.
1986-09-01
The problems posed to the designers and builders of high-energy linear colliders are discussed. Scaling laws of linear colliders are considered. The problem of attaining small interaction areas is addressed. The physics of damping rings, which are designed to condense beam bunches in phase space, is discussed. The effects of wake fields on a particle bunch in a linac, particularly in conventional disk-loaded microwave linac structures, are discussed, as well as ways of dealing with those effects. Finally, the SLAC Linear Collider is described. 18 refs., 17 figs. (LEW)
NASA Astrophysics Data System (ADS)
Weber, Arthur L.
1989-03-01
Glyceraldehyde-3-phosphate acts as the substrate in a model of early self-replication of a phosphodiester copolymer of glycerate-3-phosphate and glycerol-3-phosphate. This model of self-replication is based on covalent complementarity in which information transfer is mediated by a single covalent bond, in contrast to multiple weak interactions that establish complementarity in nucleic acid replication. This replication model is connected to contemporary biochemistry through its use of glyceraldehyde-3-phosphate, a central metabolite of glycolysis and photosynthesis.
NASA Technical Reports Server (NTRS)
Weber, Arthur L.
1989-01-01
Glyceraldehyde-3-phosphate acts as the substrate in a model of early self-replication of a phosphodiester copolymer of glycerate-3-phosphate and glycerol-3-phosphate. This model of self-replication is based on covalent complementarity in which information transfer is mediated by a single covalent bond, in contrast to multiple weak interactions that establish complementarity in nucleic acid replication. This replication model is connected to contemporary biochemistry through its use of glyceraldehyde-3-phosphate, a central metabolite of glycolysis and photosynthesis.
NASA Astrophysics Data System (ADS)
Young, T.
This book is intended to be used as a textbook in a one-semester course at a variety of levels. Because of self-study features incorporated, it may also be used by practicing electronic engineers as a formal and thorough introduction to the subject. The distinction between linear and digital integrated circuits is discussed, taking into account digital and linear signal characteristics, linear and digital integrated circuit characteristics, the definitions for linear and digital circuits, applications of digital and linear integrated circuits, aspects of fabrication, packaging, and classification and numbering. Operational amplifiers are considered along with linear integrated circuit (LIC) power requirements and power supplies, voltage and current regulators, linear amplifiers, linear integrated circuit oscillators, wave-shaping circuits, active filters, D/A and A/D converters, demodulators, comparators, instrument amplifiers, current difference amplifiers, analog circuits and devices, and aspects of troubleshooting.
Technology, Linear Equations, and Buying a Car.
ERIC Educational Resources Information Center
Sandefur, James T.
1992-01-01
Discusses the use of technology in solving compound interest-rate problems that can be modeled by linear relationships. Uses a graphing calculator to solve the specific problem of determining the amount of money that can be borrowed to buy a car for a given monthly payment and interest rate. (MDH)
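The underlying calculation can be sketched as follows; the loan-balance recurrence and the closed-form annuity formula are standard, but the specific figures are hypothetical, not taken from the article:

```python
# Loan-affordability calculation behind the graphing-calculator activity:
# the balance after each monthly payment follows the linear recurrence
#   b_{k+1} = b_k * (1 + r) - p,
# and setting b_months = 0 gives the present value of an annuity.
def max_loan(payment, annual_rate, months):
    """Largest principal that a fixed monthly payment retires."""
    r = annual_rate / 12.0
    return payment * (1 - (1 + r) ** (-months)) / r

# Hypothetical example: $300/month, 6% annual rate, 48 months.
print(round(max_loan(300.0, 0.06, 48), 2))
```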
Generalised Assignment Matrix Methodology in Linear Programming
ERIC Educational Resources Information Center
Jerome, Lawrence
2012-01-01
Discrete Mathematics instructors and students have long been struggling with various labelling and scanning algorithms for solving many important problems. This paper shows how to solve a wide variety of Discrete Mathematics and OR problems using assignment matrices and linear programming, specifically using Excel Solver, although the same…
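As a sketch of the assignment problems the article treats with assignment matrices and LP, here is a small instance solved with SciPy's Hungarian-algorithm routine (invented cost matrix; the article itself works in Excel Solver):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Assignment matrix: cost[i, j] is the cost of giving task j to worker i
# (hypothetical numbers for illustration).
cost = np.array([[4, 1, 3],
                 [2, 0, 5],
                 [3, 2, 2]])

rows, cols = linear_sum_assignment(cost)
print(cols, cost[rows, cols].sum())  # optimal pairing and its total cost (5)
```

The same instance can equally be written as an LP over a 0/1 assignment matrix with row and column sum constraints, which is the formulation a spreadsheet solver uses.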
Linear stochastic optimal control and estimation
NASA Technical Reports Server (NTRS)
Geyser, L. C.; Lehtinen, F. K. B.
1976-01-01
A digital program has been written to solve the LSOCE problem using a time-domain formulation. The LSOCE problem is defined as that of designing controls for a linear time-invariant system which is disturbed by white noise in such a way as to minimize a quadratic performance index.
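The deterministic core of such a problem is the linear-quadratic regulator; a sketch with toy matrices (a double integrator, not the system from the program) using SciPy's Riccati solver:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# LQ piece of the LSOCE problem: for x' = A x + B u and cost
# integral of (x'Qx + u'Ru) dt, the optimal gain is K = R^{-1} B' P,
# with P solving the algebraic Riccati equation (toy example).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])       # double integrator
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)
print(K)  # optimal state-feedback gain, [1, sqrt(3)] for this system
```

Adding white process noise and an estimator on top of this regulator gives the stochastic problem the abstract describes, by the separation principle.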
A Linear Algebraic Approach to Teaching Interpolation
ERIC Educational Resources Information Center
Tassa, Tamir
2007-01-01
A novel approach for teaching interpolation in the introductory course in numerical analysis is presented. The interpolation problem is viewed as a problem in linear algebra, whence the various forms of interpolating polynomial are seen as different choices of a basis to the subspace of polynomials of the corresponding degree. This approach…
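In the monomial basis this viewpoint reduces interpolation to a Vandermonde linear system; a minimal sketch with made-up data points:

```python
import numpy as np

# Interpolation as linear algebra: choosing the monomial basis {1, x, x^2}
# turns "find p with p(x_i) = y_i" into the Vandermonde system V c = y.
xs = np.array([0.0, 1.0, 2.0])
ys = np.array([1.0, 3.0, 7.0])   # hypothetical data, here p(x) = 1 + x + x^2

V = np.vander(xs, increasing=True)   # columns: 1, x, x^2
c = np.linalg.solve(V, ys)
print(c)  # coefficients [1, 1, 1] of the interpolating polynomial
```

Other interpolation forms (Lagrange, Newton) correspond to different basis choices for the same subspace, which is the point of the approach.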
NASA Technical Reports Server (NTRS)
Lawson, C. L.; Krogh, F. T.; Gold, S. S.; Kincaid, D. R.; Sullivan, J.; Williams, E.; Hanson, R. J.; Haskell, K.; Dongarra, J.; Moler, C. B.
1982-01-01
The Basic Linear Algebra Subprograms (BLAS) library is a collection of 38 FORTRAN-callable routines for performing basic operations of numerical linear algebra. The BLAS library is a portable and efficient source of basic operations for designers of programs involving linear algebraic computations. It is supplied in portable FORTRAN and Assembler code versions for IBM 370, UNIVAC 1100 and CDC 6000 series computers.
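The BLAS routines are still reachable from modern environments; a sketch of a matrix multiply through SciPy's BLAS bindings (the dgemm routine, with illustrative matrices only):

```python
import numpy as np
from scipy.linalg import blas

# BLAS-style general matrix multiply: C = alpha * A @ B via dgemm.
# Fortran (column-major) order matches the BLAS calling convention.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]], order='F')
B = np.array([[5.0, 6.0],
              [7.0, 8.0]], order='F')

C = blas.dgemm(alpha=1.0, a=A, b=B)
print(C)  # same result as A @ B
```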
Generalized Linear Covariance Analysis
NASA Technical Reports Server (NTRS)
Carpenter, James R.; Markley, F. Landis
2014-01-01
This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.
Generalized Linear Covariance Analysis
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis
2008-01-01
We review and extend in two directions the results of prior work on generalized covariance analysis methods. This prior work allowed for partitioning of the state space into "solve-for" and "consider" parameters, allowed for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and a priori solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's anchor time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.
NASA Astrophysics Data System (ADS)
Theofilis, Vassilios
2011-01-01
This article reviews linear instability analysis of flows over or through complex two-dimensional (2D) and 3D geometries. In the three decades since it first appeared in the literature, global instability analysis, based on the solution of the multidimensional eigenvalue and/or initial value problem, is continuously broadening both in scope and in depth. To date it has dealt successfully with a wide range of applications arising in aerospace engineering, physiological flows, food processing, and nuclear-reactor safety. In recent years, nonmodal analysis has complemented the more traditional modal approach and increased knowledge of flow instability physics. Recent highlights delivered by the application of either modal or nonmodal global analysis are briefly discussed. A conscious effort is made to demystify both the tools currently utilized and the jargon employed to describe them, demonstrating the simplicity of the analysis. Hopefully this will provide new impulses for the creation of next-generation algorithms capable of coping with the main open research areas in which step-change progress can be expected by the application of the theory: instability analysis of fully inhomogeneous, 3D flows and control thereof.
JUICE: complementarity of the payload in addressing the mission science objectives
NASA Astrophysics Data System (ADS)
Titov, Dmitri; Barabash, Stas; Bruzzone, Lorenzo; Dougherty, Michele; Erd, Christian; Fletcher, Leigh; Gare, Philippe; Gladstone, Randall; Grasset, Olivier; Gurvits, Leonid; Hartogh, Paul; Hussmann, Hauke; Iess, Luciano; Jaumann, Ralf; Langevin, Yves; Palumbo, Pasquale; Piccioni, Giuseppe; Wahlund, Jan-Erik
2014-05-01
radar sounder (RIME) for exploring the surface and subsurface of the moons, and a radio science experiment (3GM) to probe the atmospheres of Jupiter and its satellites and to perform measurements of the gravity fields. An in situ package comprises a powerful particle environment package (PEP), a magnetometer (J-MAG) and a radio and plasma wave instrument (RPWI), including electric field sensors and a Langmuir probe. An experiment (PRIDE) using ground-based Very-Long-Baseline Interferometry (VLBI) will provide precise determination of the moons' ephemerides. The instruments will work together to achieve mission science objectives that cannot be achieved by any single experiment. For instance, joint J-MAG, 3GM, GALA and JANUS observations would constrain the thickness of the ice shell, ocean depth and conductivity. SWI, 3GM and UVS would complement each other in the temperature sounding of the Jupiter atmosphere. The complex coupling between the magnetosphere and atmosphere of Jupiter will be studied jointly by a combination of aurora imaging (UVS, MAJIS, JANUS) and plasma and fields measurements (J-MAG, RPWI, PEP). The talk will give an overview of the JUICE payload, focusing on complementarity and synergy between the experiments.
Linear elastic fracture mechanics primer
NASA Astrophysics Data System (ADS)
Wilson, Christopher D.
1992-07-01
This primer is intended to remove the black-box perception of fracture mechanics computer software by structural engineers. The fundamental concepts of linear elastic fracture mechanics are presented with emphasis on the practical application of fracture mechanics to real problems. Numerous rules of thumb are provided. Recommended texts for additional reading and a discussion of the significance of fracture mechanics in structural design are given. Griffith's criterion for crack extension, Irwin's elastic stress field near the crack tip, and the influence of small-scale plasticity are discussed. Common stress intensity factor solutions and methods for determining them are included. Fracture toughness and subcritical crack growth are discussed. The application of fracture mechanics to damage tolerance and fracture control is discussed. Several example problems and a practice set of problems are given.
Linear elastic fracture mechanics primer
NASA Technical Reports Server (NTRS)
Wilson, Christopher D.
1992-01-01
This primer is intended to remove the black-box perception of fracture mechanics computer software by structural engineers. The fundamental concepts of linear elastic fracture mechanics are presented with emphasis on the practical application of fracture mechanics to real problems. Numerous rules of thumb are provided. Recommended texts for additional reading and a discussion of the significance of fracture mechanics in structural design are given. Griffith's criterion for crack extension, Irwin's elastic stress field near the crack tip, and the influence of small-scale plasticity are discussed. Common stress intensity factor solutions and methods for determining them are included. Fracture toughness and subcritical crack growth are discussed. The application of fracture mechanics to damage tolerance and fracture control is discussed. Several example problems and a practice set of problems are given.
Linear models of risk assessment may be appropriate for chemicals that are initiators of carcinogenesis, while threshold models of risk assessment have been proposed for promoters. The proper risk assessment model for the regulation of promoters of carcinogenesis remains an active ...
ERIC Educational Resources Information Center
Ker, H. W.
2014-01-01
Multilevel data are very common in educational research. Hierarchical linear models/linear mixed-effects models (HLMs/LMEs) are often utilized to analyze multilevel data nowadays. This paper discusses the problems of utilizing ordinary regressions for modeling multilevel educational data and compares the data analytic results from three regression…
Winiger, Christian B; Langenegger, Simon M; Khorev, Oleg
2014-01-01
Aromatic π–π stacking interactions are ubiquitous in nature, medicinal chemistry and materials sciences. They play a crucial role in the stacking of nucleobases, thus stabilising the DNA double helix. The following paper describes a series of chimeric DNA–polycyclic aromatic hydrocarbon (PAH) hybrids. The PAH building blocks are electron-rich pyrene and electron-poor perylenediimide (PDI), and were incorporated into complementary DNA strands. The hybrids contain different numbers of pyrene–PDI interactions that were found to directly influence duplex stability. As the pyrene–PDI ratio approaches 1:1, the stability of the duplexes increases with an average value of 7.5 °C per pyrene–PDI supramolecular interaction, indicating the importance of electrostatic complementarity for aromatic π–π stacking interactions. PMID:25161715
Kabat, E A; Wu, T T; Bilofsky, H
1976-02-01
From collected data on variable region sequences of heavy chains of immunoglobulins, the probability of random associations of any two amino-acid residues in the complementarity-determining segments was computed, and pairs of residues occurring significantly more frequently than expected were selected by computer. Significant associations between Phe 32 and Tyr 33, Phe 32 and Glu 35, and Tyr 33 and Glu 35 were found in six proteins, all of which were mouse myeloma proteins which bound phosphorylcholine (= phosphocholine). From the x-ray structure of McPC603, Tyr 33 and Glu 35 are contacting residues; a seventh phosphorylcholine-binding mouse myeloma protein also contained Phe 32 and Tyr 33, but position 35 had only been determined as Glx and thus this position had not been selected. Met 34 occurred in all seven phosphorylcholine-binding myeloma proteins but was also present at this position in 29 other proteins and thus was not selected; it is seen in the x-ray structure not to be a contacting residue. The role of Phe 32 is not obvious, but it could have some conformational influence. A human phosphorylcholine-binding myeloma protein also had Phe, Tyr, and Met at positions 32, 33, and 34, but had Asp instead of Glu at position 35 and showed a lower binding constant. The ability to use sequence data to locate residues in complementarity-determining segments making contact with antigenic determinants and those playing essentially a structural role would contribute substantially to the understanding of antibody specificity. PMID:1061162
Linear regression in astronomy. II
NASA Technical Reports Server (NTRS)
Feigelson, Eric D.; Babu, Gutti J.
1992-01-01
A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. They can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations.
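A minimal sketch of class (1) above, an unweighted least-squares line with bootstrap resampling of the slope uncertainty; the data are synthetic and the helper names are illustrative, not taken from the paper:

```python
# Sketch: unweighted regression line with bootstrap error estimation.
# Synthetic data; the paper's astronomical datasets are not reproduced.
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.uniform(0.0, 10.0, n)
y = 2.0 * x + 1.0 + rng.normal(0.0, 1.0, n)    # true slope 2, intercept 1

def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept

slope, intercept = fit_line(x, y)

# Bootstrap: refit on resampled (x, y) pairs; the spread of the refit
# slopes estimates the standard error of the fitted slope.
B = 2000
boot_slopes = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, n)
    boot_slopes[b], _ = fit_line(x[idx], y[idx])
slope_err = boot_slopes.std(ddof=1)
```

A jackknife variant refits with one point deleted at a time instead of resampling with replacement.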
NASA Astrophysics Data System (ADS)
Cacuci, Dan G.
2015-03-01
This work presents an illustrative application of the second-order adjoint sensitivity analysis methodology (2nd-ASAM) to a paradigm neutron diffusion problem, which is sufficiently simple to admit an exact solution, thereby making transparent the underlying mathematical derivations. The general theory underlying 2nd-ASAM indicates that, for a physical system comprising Nα parameters, the computation of all of the first- and second-order response sensitivities requires (per response) at most (2Nα + 1) "large-scale" computations using the first-level and, respectively, second-level adjoint sensitivity systems (1st-LASS and 2nd-LASS). Very importantly, however, the illustrative application presented in this work shows that the actual number of adjoint computations needed for computing all of the first- and second-order response sensitivities may be significantly less than (2Nα + 1) per response. For this illustrative problem, four "large-scale" adjoint computations sufficed for the complete and exact computations of all 4 first- and 10 distinct second-order derivatives. Furthermore, the construction and solution of the 2nd-LASS requires very little additional effort beyond the construction of the adjoint sensitivity system needed for computing the first-order sensitivities. Very significantly, only the sources on the right-sides of the diffusion (differential) operator needed to be modified; the left-side of the differential equations (and hence the "solver" in large-scale practical applications) remained unchanged. All of the first-order relative response sensitivities to the model parameters have significantly large values, of order unity. Also importantly, most of the second-order relative sensitivities are just as large, and some even up to twice as large as the first-order sensitivities. In the illustrative example presented in this work, the second-order sensitivities contribute little to the response variances and covariances. However, they have the
Linear phase compressive filter
McEwan, T.E.
1995-06-06
A phase linear filter for soliton suppression is in the form of a laddered series of stages of non-commensurate low pass filters with each low pass filter having a series coupled inductance (L) and a reverse biased, voltage dependent varactor diode, to ground which acts as a variable capacitance (C). L and C values are set to levels which correspond to a linear or conventional phase linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line and capacitance is mapped from the linear case using a large signal equivalent of a nonlinear transmission line. 2 figs.
Linear phase compressive filter
McEwan, Thomas E.
1995-01-01
A phase linear filter for soliton suppression is in the form of a laddered series of stages of non-commensurate low pass filters with each low pass filter having a series coupled inductance (L) and a reverse biased, voltage dependent varactor diode, to ground which acts as a variable capacitance (C). L and C values are set to levels which correspond to a linear or conventional phase linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line and capacitance is mapped from the linear case using a large signal equivalent of a nonlinear transmission line.
Fault tolerant linear actuator
Tesar, Delbert
2004-09-14
In varying embodiments, the fault tolerant linear actuator of the present invention is a new and improved linear actuator with fault tolerance and positional control that may incorporate velocity summing, force summing, or a combination of the two. In one embodiment, the invention offers a velocity summing arrangement with a differential gear between two prime movers driving a cage, which then drives a linear spindle screw transmission. Other embodiments feature two prime movers driving separate linear spindle screw transmissions, one internal and one external, in a totally concentric and compact integrated module.
... labor starts before 37 completed weeks of pregnancy Problems with the umbilical cord Problems with the position of the baby, such as ... feet first Birth injuries For some of these problems, the baby may need to be delivered surgically ...
About Balance Problems: Have you ever felt dizzy, lightheaded, or ... dizziness problem during the past year. Why Good Balance is Important: Having good balance means being able ...
Solving a signalized traffic intersection problem with an hyperbolic penalty function
NASA Astrophysics Data System (ADS)
Melo, Teófilo; Monteiro, M. Teresa T.; Matias, João
2012-09-01
Mathematical Program with Complementarity Constraints (MPCC) finds many applications in fields such as engineering design, economic equilibrium and mathematical programming theory itself. A queueing system model resulting from a single signalized intersection regulated by pre-timed control in a traffic network is considered. The model is formulated as an MPCC problem. A MATLAB implementation based on a hyperbolic penalty function is used to solve this practical problem, computing the total average waiting time of the vehicles in all queues and the green split allocation. The problem was codified in AMPL.
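To make the penalty idea concrete, here is a minimal numpy sketch of a hyperbolic penalty applied to a toy complementarity problem; the objective, the parameter values and the plain gradient descent are illustrative assumptions, not the paper's traffic model or solver:

```python
# Toy MPCC: minimize (x-1)^2 + (y-1)^2 subject to x >= 0, y >= 0 and the
# complementarity condition x*y = 0. Bounds are handled by a smooth
# hyperbolic penalty; complementarity by a quadratic penalty term.
import numpy as np

LAM, TAU, MU = 10.0, 0.1, 100.0   # illustrative penalty parameters

def hyp_pen(v):
    """Hyperbolic penalty for v >= 0: ~0 for v >> 0, ~2*LAM*|v| for v < 0."""
    return -LAM * v + np.sqrt(LAM**2 * v**2 + TAU**2)

def objective(z):
    x, y = z
    return ((x - 1.0)**2 + (y - 1.0)**2    # smooth objective
            + MU * (x * y)**2              # penalised complementarity
            + hyp_pen(x) + hyp_pen(y))     # penalised bound constraints

def num_grad(f, z, h=1e-6):
    """Central-difference gradient, adequate for this smooth toy problem."""
    g = np.zeros_like(z)
    for i in range(len(z)):
        zp, zm = z.copy(), z.copy()
        zp[i] += h
        zm[i] -= h
        g[i] = (f(zp) - f(zm)) / (2.0 * h)
    return g

z = np.array([0.8, 0.2])          # start slightly inside x's basin
for _ in range(8000):             # plain gradient descent
    z -= 5e-4 * num_grad(objective, z)
# z approaches one of the two complementary solutions, here near (1, 0).
```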
Finite Element Interface to Linear Solvers
Williams, Alan
2005-03-18
Sparse systems of linear equations arise in many engineering applications, including finite elements, finite volumes, and others. The solution of linear systems is often the most computationally intensive portion of the application. Depending on the complexity of problems addressed by the application, there may be no single solver capable of solving all of the linear systems that arise. This motivates the desire to switch an application from one solver library to another, depending on the problem being solved. The interfaces provided by solver libraries differ greatly, making it difficult to switch an application code from one library to another. The amount of library-specific code in an application can be greatly reduced by having an abstraction layer between solver libraries and the application, putting a common "face" on various solver libraries. One such abstraction layer is the Finite Element Interface to Linear Solvers (FEI), which has seen significant use by finite element applications at Sandia National Laboratories and Lawrence Livermore National Laboratory.
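The abstraction-layer idea can be sketched as follows; the class and method names are invented for illustration and are not the actual FEI API:

```python
# Sketch: one interface the application codes against, with thin adapters
# hiding each solver "library" behind it. Names here are hypothetical.
import numpy as np

class LinearSolver:
    """Common 'face' presented to the application."""
    def solve(self, A, b):
        raise NotImplementedError

class DenseLUSolver(LinearSolver):
    """Adapter wrapping a direct dense solve (stand-in for one library)."""
    def solve(self, A, b):
        return np.linalg.solve(A, b)

class JacobiSolver(LinearSolver):
    """Adapter wrapping a simple iterative method (stand-in for another)."""
    def __init__(self, iters=500):
        self.iters = iters
    def solve(self, A, b):
        x = np.zeros_like(b, dtype=float)
        D = np.diag(A)
        R = A - np.diagflat(D)
        for _ in range(self.iters):       # converges for diagonally
            x = (b - R @ x) / D           # dominant A
        return x

# The application can switch solvers without changing its own code:
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
for solver in (DenseLUSolver(), JacobiSolver()):
    x = solver.solve(A, b)
```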
Dynamic collimation for linear colliders
Merminga, N.; Ruth, R.D.
1990-06-01
Experience with the SLC has indicated that backgrounds caused by the tails of the transverse beam distribution will be a serious problem for a next-generation linear collider. Mechanical scrapers may not provide the best solution, because they may be damaged by the tiny, intense beams, and also because they may induce wakefield kicks large enough to cause emittance dilution. In this paper, we present a possible solution, which uses several nonlinear lenses to drive the tails of the beam to large amplitudes where they can be more easily scraped mechanically. Simulations of several different schemes are presented and evaluated with respect to effectiveness, tolerances and wakefield effects. 4 refs., 6 figs.
Linear-time algorithms for scheduling on parallel processors
Monma, C.L.
1982-01-01
Linear-time algorithms are presented for several problems of scheduling n equal-length tasks on m identical parallel processors subject to precedence constraints. This improves upon previous time bounds for the maximum lateness problem with treelike precedence constraints, the number-of-late-tasks problem without precedence constraints, and the one machine maximum lateness problem with general precedence constraints. 5 references.
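For orientation, the problem setting can be illustrated with a short sketch; earliest-due-date order is used here for clarity and runs in O(n log n), whereas the paper's algorithms achieve linear time (the due dates and machine count below are invented):

```python
# Sketch: n unit-length tasks on m identical machines, no precedence
# constraints, minimizing maximum lateness via earliest-due-date order.
def max_lateness_schedule(due, m):
    """Assign unit tasks with the given due dates to m machines.
    Returns (schedule, max lateness); schedule[i] = (machine, start)."""
    order = sorted(range(len(due)), key=lambda i: due[i])  # EDD order
    schedule = [None] * len(due)
    worst = float("-inf")
    for pos, i in enumerate(order):
        machine, start = pos % m, pos // m   # fill time slots round-robin
        schedule[i] = (machine, start)
        worst = max(worst, (start + 1) - due[i])  # unit task ends at start+1
    return schedule, worst

sched, L = max_lateness_schedule([2, 1, 4, 3, 2], m=2)
```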
Powerful Electromechanical Linear Actuator
NASA Technical Reports Server (NTRS)
Cowan, John R.; Myers, William N.
1994-01-01
Powerful electromechanical linear actuator designed to replace hydraulic actuator that provides incremental linear movements to large object and holds its position against heavy loads. Electromechanical actuator cleaner and simpler, and needs less maintenance. Two principal innovative features that distinguish new actuator are use of shaft-angle resolver as source of position feedback to electronic control subsystem and antibacklash gearing arrangement.
Richter, B.
1985-12-01
A report is given on the goals and progress of the SLAC Linear Collider. The status of the machine and the detectors are discussed and an overview is given of the physics which can be done at this new facility. Some ideas on how (and why) large linear colliders of the future should be built are given.
Linear Equations: Equivalence = Success
ERIC Educational Resources Information Center
Baratta, Wendy
2011-01-01
The ability to solve linear equations sets students up for success in many areas of mathematics and other disciplines requiring formula manipulations. There are many reasons why solving linear equations is a challenging skill for students to master. One major barrier for students is the inability to interpret the equals sign as anything other than…
Linearly polarized fiber amplifier
Kliner, Dahv A.; Koplow, Jeffery P.
2004-11-30
Optically pumped rare-earth-doped polarizing fibers exhibit significantly higher gain for one linear polarization state than for the orthogonal state. Such a fiber can be used to construct a single-polarization fiber laser, amplifier, or amplified-spontaneous-emission (ASE) source without the need for additional optical components to obtain stable, linearly polarized operation.
THE SUCCESSIVE LINEAR ESTIMATOR: A REVISIT. (R827114)
This paper examines the theoretical basis of the successive linear estimator (SLE) that has been developed for the inverse problem in subsurface hydrology. We show that the SLE algorithm is a non-linear iterative estimator to the inverse problem. The weights used in the SLE al...
Linearization of Schwarzschild's line element - Application to the clock paradox.
NASA Technical Reports Server (NTRS)
Broucke, R.
1971-01-01
This article studies the relativistic theory of the motion of a particle in the presence of a uniform acceleration field. The problem is introduced as a linearization of the fundamental line element of general relativity. The linearized line element is a solution of Einstein's field equations. The equations of geodesics corresponding to this line element are solved and applied to the clock paradox problem.
Linear models: permutation methods
Cade, B.S.
2005-01-01
Permutation tests (see Permutation Based Inference) for the linear model have applications in behavioral studies when traditional parametric assumptions about the error term in a linear model are not tenable. Improved validity of Type I error rates can be achieved with properly constructed permutation tests. Perhaps more importantly, increased statistical power, improved robustness to effects of outliers, and detection of alternative distributional differences can be achieved by coupling permutation inference with alternative linear model estimators. For example, it is well-known that estimates of the mean in linear model are extremely sensitive to even a single outlying value of the dependent variable compared to estimates of the median [7, 19]. Traditionally, linear modeling focused on estimating changes in the center of distributions (means or medians). However, quantile regression allows distributional changes to be estimated in all or any selected part of a distribution or responses, providing a more complete statistical picture that has relevance to many biological questions [6]...
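A minimal sketch of a permutation test for a linear-model slope: permuting the response builds the null distribution of the statistic. The data are synthetic, and the robust estimators discussed above are not shown:

```python
# Permutation test for the slope of a simple linear model.
import numpy as np

rng = np.random.default_rng(1)
n = 40
x = np.linspace(0.0, 1.0, n)
y = 1.5 * x + rng.normal(0.0, 0.3, n)        # a real association is present

def slope(x, y):
    """Ordinary least-squares slope."""
    xc = x - x.mean()
    return (xc @ (y - y.mean())) / (xc @ xc)

obs = slope(x, y)
# Null distribution: recompute the slope after shuffling the response.
perm = np.array([slope(x, rng.permutation(y)) for _ in range(4999)])
# Two-sided p-value: fraction of permuted slopes at least as extreme.
p = (1 + np.sum(np.abs(perm) >= np.abs(obs))) / (1 + len(perm))
```

Coupling the same resampling scheme to an alternative estimator (e.g. a median-based slope) is how the robustness and power gains described above are obtained.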
NASA Technical Reports Server (NTRS)
Clancy, John P.
1988-01-01
The object of the invention is to provide a mechanical force actuator which is lightweight and manipulatable and utilizes linear motion for push or pull forces while maintaining a constant overall length. The mechanical force-producing mechanism comprises a linear actuator mechanism and a linear motion shaft mounted parallel to one another. The linear motion shaft is connected to a stationary or fixed housing and to a movable housing, where the movable housing is mechanically actuated through the actuator mechanism by either manual means or motor means. The housings are adapted to releasably receive a variety of jaw or pulling elements adapted for clamping or prying action. The stationary housing is adapted to be pivotally mounted to permit an angular position of the housing to allow the tool to adapt to skewed interfaces. The actuator mechanism is operated by a gear train to obtain linear motion of the actuator mechanism.
Gadgets, approximation, and linear programming
Trevisan, L.; Sudan, M.; Sorkin, G.B.; Williamson, D.P.
1996-12-31
We present a linear-programming based method for finding "gadgets", i.e., combinatorial structures reducing constraints of one optimization problem to constraints of another. A key step in this method is a simple observation which limits the search space to a finite one. Using this new method we present a number of new, computer-constructed gadgets for several different reductions. This method also answers a previously posed question on how to prove the optimality of gadgets: we show how LP duality gives such proofs. The new gadgets improve hardness results for MAX CUT and MAX DICUT, showing that approximating these problems to within factors of 60/61 and 44/45 respectively is NP-hard. We also use the gadgets to obtain an improved approximation algorithm for MAX 3SAT which guarantees an approximation ratio of .801. This improves upon the previous best bound of .7704.
A Linear Bicharacteristic FDTD Method
NASA Technical Reports Server (NTRS)
Beggs, John H.
2001-01-01
The linear bicharacteristic scheme (LBS) was originally developed to improve unsteady solutions in computational acoustics and aeroacoustics [1]-[7]. It is a classical leapfrog algorithm, but is combined with upwind bias in the spatial derivatives. This approach preserves the time-reversibility of the leapfrog algorithm, which results in no dissipation, and it permits more flexibility by the ability to adopt a characteristic based method. The use of characteristic variables allows the LBS to treat the outer computational boundaries naturally using the exact compatibility equations. The LBS offers a central storage approach with lower dispersion than the Yee algorithm, plus it generalizes much easier to nonuniform grids. It has previously been applied to two and three-dimensional freespace electromagnetic propagation and scattering problems [3], [6], [7]. This paper extends the LBS to model lossy dielectric and magnetic materials. Results are presented for several one-dimensional model problems, and the FDTD algorithm is chosen as a convenient reference for comparison.
Unified structural approach to linear flood routing
NASA Astrophysics Data System (ADS)
Kundzewicz, Zbigniew W.; Dooge, James C. I.
The structural theory of linear systems, which allows the non-homogeneous initial and boundary conditions to be expressed as part of a generalised system input, is applied to the problem of linear flood routing. The standardising functions needed to accomplish this are derived for three methods of lumped hydrologic flood routing (lag and route, Muskingum and Kalinin-Milyukov) and for three methods of distributed hydraulic flood routing (kinematic wave and two simplified forms of the linear St. Venant model). The appropriate Green's functions needed to complete the solution for these six cases are also presented.
Word Problems: A "Meme" for Our Times.
ERIC Educational Resources Information Center
Leamnson, Robert N.
1996-01-01
Discusses a novel approach to word problems that involves linear relationships between variables. Argues that working stepwise through intermediates is the way our minds actually work and therefore this should be used in solving word problems. (JRH)
NASA Technical Reports Server (NTRS)
Studer, P. A. (Inventor)
1983-01-01
A linear magnetic bearing system having electromagnetic vernier flux paths in shunt relation with permanent magnets, so that the vernier flux does not traverse the permanent magnet, is described. Novelty is believed to reside in providing a linear magnetic bearing having electromagnetic flux paths that bypass high reluctance permanent magnets. Particular novelty is believed to reside in providing a linear magnetic bearing with a pair of axially spaced elements having electromagnets for establishing vernier x and y axis control. The magnetic bearing system has possible use in connection with a long life reciprocating cryogenic refrigerator that may be used on the space shuttle.
A Technique for Determining Non-Linear Circuit Parameters from Ring Down Data
Romero, Louis; Dickey, Fred M.; Dison, Holly
2003-01-01
We present a technique for determining non-linear resistances, capacitances, and inductances from ring down data in a non-linear RLC circuit. Although the governing differential equations are non-linear, we are able to solve this problem using linear least squares without doing any sort of non-linear iteration.
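The key observation can be illustrated with a linear RLC ring down: the waveform depends nonlinearly on R, L, C, but the circuit ODE q'' + (R/L) q' + (1/(LC)) q = 0 is linear in its coefficients, so finite-differenced data can be fit by ordinary least squares. The sketch below uses invented parameter values and omits the paper's treatment of current-dependent elements:

```python
# Estimate R/L and 1/(LC) from simulated ring-down data by linear least
# squares on finite-differenced derivatives. Illustrative values only.
import numpy as np

R_L, inv_LC = 2.0, 100.0                 # "true" R/L and 1/(LC)
dt = 1e-3
t = np.arange(0.0, 3.0, dt)

# Simulate the ring down by semi-implicit Euler time stepping.
q = np.empty_like(t)
dq = np.empty_like(t)
q[0], dq[0] = 1.0, 0.0
for k in range(len(t) - 1):
    ddq = -R_L * dq[k] - inv_LC * q[k]
    dq[k + 1] = dq[k] + dt * ddq
    q[k + 1] = q[k] + dt * dq[k + 1]

# Finite-difference derivatives, then solve the linear LS problem
#   q'' = -a q' - b q   for a = R/L and b = 1/(LC).
qd = np.gradient(q, dt)
qdd = np.gradient(qd, dt)
A = np.column_stack([-qd, -q])
coef, *_ = np.linalg.lstsq(A, qdd, rcond=None)
a_est, b_est = coef
```

For current-dependent elements, extra regressor columns (e.g. powers of q and q') can be added while the problem stays linear in the unknown coefficients.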
... is the device most commonly used for external beam radiation treatments for patients with cancer. The linear ... shape of the patient's tumor and the customized beam is directed to the patient's tumor. The beam ...
Singh, Mangal; Awasthi, Ashutosh; Soni, Sumit K.; Singh, Rakshapal; Verma, Rajesh K.; Kalra, Alok
2015-01-01
An assessment of the roles of rhizospheric microbial diversity in plant growth is helpful in understanding plant-microbe interactions. Using random combinations of rhizospheric bacterial species at different richness levels, we analysed the contribution of species richness, composition, interactions and identity on soil microbial respiration and plant biomass. We showed that bacterial inoculation in the plant rhizosphere enhanced microbial respiration and plant biomass with complementary relationships among bacterial species. Plant growth was found to increase linearly with inoculation of rhizospheric bacterial communities with increasing levels of species or plant-growth-promoting trait diversity. However, inoculation of diverse bacterial communities having a single plant-growth-promoting trait, i.e., nitrogen fixation, could not enhance plant growth over inoculation of a single bacterium. Our results indicate that bacterial diversity in the rhizosphere affects ecosystem functioning through complementary relationships among plant-growth-promoting traits and may play significant roles in delivering microbial services to plants. PMID:26503744
NASA Technical Reports Server (NTRS)
Callier, Frank M.; Desoer, Charles A.
1991-01-01
The aim of this book is to provide a systematic and rigorous access to the main topics of linear state-space system theory in both the continuous-time case and the discrete-time case; and the I/O description of linear systems. The main thrusts of the work are the analysis of system descriptions and derivations of their properties, LQ-optimal control, state feedback and state estimation, and MIMO unity-feedback systems.
Shetty, Shricharith; Rao, Raghavendra; Kudva, R Ranjini; Subramanian, Kumudhini
2016-01-01
Alopecia areata (AA) of the scalp is known to present with various shapes and extents of hair loss. Typically it presents as circumscribed patches of alopecia with the underlying skin remaining normal. We describe a rare variant of AA presenting in a linear, band-like form. Only four cases of linear alopecia have been reported in the medical literature to date, all four having been diagnosed as lupus erythematosus profundus. PMID:27625568
NASA Technical Reports Server (NTRS)
Laughlin, Darren
1995-01-01
Inertial linear actuators developed to suppress residual accelerations of nominally stationary or steadily moving platforms. Function like long-stroke version of voice coil in conventional loudspeaker, with superimposed linear variable-differential transformer. Basic concept also applicable to suppression of vibrations of terrestrial platforms. For example, laboratory table equipped with such actuators plus suitable vibration sensors and control circuits made to vibrate much less in presence of seismic, vehicular, and other environmental vibrational disturbances.
Extended Decentralized Linear-Quadratic-Gaussian Control
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell
2000-01-01
A straightforward extension of a solution to the decentralized linear-quadratic-Gaussian problem is proposed that allows its use for commonly encountered classes of problems that are currently solved with the extended Kalman filter. This extension allows the system to be partitioned in such a way as to exclude the nonlinearities from the essential algebraic relationships that allow the estimation and control to be optimally decentralized.
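For orientation, the centralized building block beneath any LQG design is the Riccati recursion; a numpy sketch with invented system matrices is given below (the paper's decentralized partitioning and estimation side are not reproduced):

```python
# Discrete-time LQR gain via value iteration on the Riccati recursion.
# System matrices are illustrative (a double integrator, dt = 0.1).
import numpy as np

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)                  # state cost
R = np.array([[0.1]])          # control cost

P = Q.copy()
for _ in range(500):           # iterate P_{k+1} = Q + A'P(A - BK)
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

# The closed-loop system u = -K x should be stable (eigenvalues inside
# the unit circle).
eigs = np.linalg.eigvals(A - B @ K)
```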
Algorithmic Questions for Linear Algebraic Groups. II
NASA Astrophysics Data System (ADS)
Sarkisjan, R. A.
1982-04-01
It is proved that, given a linear algebraic group defined over an algebraic number field and satisfying certain conditions, there exists an algorithm which determines whether or not two double cosets of a special type coincide in its adele group, and which enumerates all such double cosets. This result is applied to the isomorphism problem for finitely generated nilpotent groups, and also to other problems. Bibliography: 18 titles.
A Problem on Optimal Transportation
ERIC Educational Resources Information Center
Cechlarova, Katarina
2005-01-01
Mathematical optimization problems are not typical in the classical curriculum of mathematics. In this paper we show how several generalizations of an easy problem on optimal transportation were solved by gifted secondary school pupils in a correspondence mathematical seminar, how they can be used in university courses of linear programming and…
... daily activities, get around, and exercise. Having a problem with walking can make daily life more difficult. ... walk is called your gait. A variety of problems can cause an abnormal gait and lead to ...
... re not getting enough air. Sometimes mild breathing problems are from a stuffy nose or hard exercise. ... emphysema or pneumonia cause breathing difficulties. So can problems with your trachea or bronchi, which are part ...
... cord injury In some cases, your emotions or relationship problems can lead to ED, such as: Poor ... you stressed, depressed, or anxious? Are you having relationship problems? You may have a number of different ...
... ankles and toes. Other types of arthritis include gout or pseudogout. Sometimes, there is a mechanical problem ... for more information on osteoarthritis, rheumatoid arthritis and gout. How Common are Joint Problems? Osteoarthritis, which affects ...
Caoili, Salvador Eugenio C
2016-01-01
Epitope-based design of vaccines, immunotherapeutics, and immunodiagnostics is complicated by structural changes that radically alter immunological outcomes. This is obscured by expressing redundancy among linear-epitope data as fractional sequence-alignment identity, which fails to account for potentially drastic loss of binding affinity due to single-residue substitutions even where these might be considered conservative in the context of classical sequence analysis. From the perspective of immune function based on molecular recognition of epitopes, functional redundancy of epitope data (FRED) thus may be defined in a biologically more meaningful way based on residue-level physicochemical similarity in the context of antigenic cross-reaction, with functional similarity between epitopes expressed as the Shannon information entropy for differential epitope binding. Such similarity may be estimated in terms of structural differences between an immunogen epitope and an antigen epitope with reference to an idealized binding site of high complementarity to the immunogen epitope, by analogy between protein folding and ligand-receptor binding; but this underestimates potential for cross-reactivity, suggesting that epitope-binding site complementarity is typically suboptimal as regards immunologic specificity. The apparently suboptimal complementarity may reflect a tradeoff to attain optimal immune function that favors generation of immune-system components each having potential for cross-reactivity with a variety of epitopes. PMID:27274725
Individualized Math Problems in Simple Equations. Oregon Vo-Tech Mathematics Problem Sets.
ERIC Educational Resources Information Center
Cosler, Norma, Ed.
This is one of eighteen sets of individualized mathematics problems developed by the Oregon Vo-Tech Math Project. Each of these problem packages is organized around a mathematical topic and contains problems related to diverse vocations. Solutions are provided for all problems. Problems in this volume require solution of linear equations, systems…
The 'hard problem' and the quantum physicists. Part 1: the first generation.
Smith, C U M
2006-07-01
All four of the most important figures in the early twentieth-century development of quantum physics (Niels Bohr, Erwin Schroedinger, Werner Heisenberg and Wolfgang Pauli) had strong interests in the traditional mind-brain, or 'hard,' problem. This paper reviews their approach to this problem, showing the influence of Bohr's complementarity thesis, the significance of Schroedinger's small book, 'What is life?,' the updated Platonism of Heisenberg and, perhaps most interesting of all, the interaction of Carl Jung and Wolfgang Pauli in the latter's search for a unification of mind and matter. PMID:16446022
A linear combination of modified Bessel functions
NASA Technical Reports Server (NTRS)
Shitzer, A.; Chato, J. C.
1971-01-01
A linear combination of modified Bessel functions is defined, discussed briefly, and tabulated. This combination was found to recur in the analysis of various heat transfer problems and in the analysis of the thermal behavior of living tissue when modeled by cylindrical shells.
Optical systolic solutions of linear algebraic equations
NASA Technical Reports Server (NTRS)
Neuman, C. P.; Casasent, D.
1984-01-01
The philosophy of, and the data encoding possible in, the systolic array optical processor (SAOP) are reviewed. The multitude of linear algebraic operations achievable on this architecture is examined. These operations include linear algebraic algorithms such as matrix decomposition, direct and indirect solutions, implicit and explicit methods for partial differential equations, eigenvalue and eigenvector calculations, and singular value decomposition. This architecture can be utilized to realize general techniques for solving matrix linear and nonlinear algebraic equations, least mean square error solutions, FIR filters, and nested-loop algorithms for control engineering applications. The data flow and pipelining of operations, design of parallel algorithms and flexible architectures, application of these architectures to computationally intensive physical problems, error source modeling of optical processors, and matching of the computational needs of practical engineering problems to the capabilities of optical processors are emphasized.
Radiation Hydrodynamics Test Problems with Linear Velocity Profiles
Hendon, Raymond C.; Ramsey, Scott D.
2012-08-22
As an extension of the works of Coggeshall and Ramsey, a class of analytic solutions to the radiation hydrodynamics equations is derived for code verification purposes. These solutions are valid under assumptions including diffusive radiation transport, a polytropic gas equation of state, constant conductivity, separable flow velocity proportional to the curvilinear radial coordinate, and divergence-free heat flux. In accordance with these assumptions, the derived solution class is mathematically invariant with respect to the presence of radiative heat conduction, and thus represents a solution to the compressible flow (Euler) equations with or without conduction terms included. With this solution class, a quantitative code verification study (using spatial convergence rates) is performed for the cell-centered, finite volume, Eulerian compressible flow code xRAGE developed at Los Alamos National Laboratory. Simulation results show near second order spatial convergence in all physical variables when using the hydrodynamics solver only, consistent with that solver's underlying order of accuracy. However, contrary to the mathematical properties of the solution class, when heat conduction algorithms are enabled the calculation does not converge to the analytic solution.
Linear optoacoustic underwater communication.
Blackmon, Fletcher; Estes, Lee; Fain, Gilbert
2005-06-20
The linear mechanism for optical-to-acoustic energy conversion is explored for optoacoustic communication from an in-air platform or surface vessel to a submerged vessel such as a submarine or unmanned undersea vehicle. The communication range that can be achieved is addressed. A number of conventional signals used in underwater acoustic telemetry applications are shown to be capable of being generated experimentally through the linear optoacoustic regime conversion process. These results are in agreement with simulation based on current theoretical models. A number of practical issues concerning linear optoacoustic communication are addressed that lead to a formulation of a linear-regime optoacoustic communication scheme. The use of oblique laser beam incidence at the air-water interface to obtain considerable in-air range from the laser source to the in-water receiver is addressed. Also, the effect of oblique incidence on in-water range is examined. Next, the optimum and suboptimum linear optoacoustic sound-generation techniques for selecting the optical wavelength and signaling frequency for optimizing in-water range are addressed and discussed. Optoacoustic communication techniques employing M-ary frequency shift keying and multifrequency shift keying are then compared with regard to communication parameters such as bandwidth, data rate, range coverage, and number of lasers employed. PMID:15989059
Lorentz Invariance Violation: the Latest Fermi Results and the GRB-AGN Complementarity
NASA Technical Reports Server (NTRS)
Bolmont, J.; Vasileiou, V.; Jacholkowska, A.; Piron, F.; Couturier, C.; Granot, J.; Stecker, F. W.; Cohen-Tanugi, J.; Longo, F.
2013-01-01
Because they are bright and distant, Gamma-ray Bursts (GRBs) have been used for more than a decade to test the propagation of photons and to constrain relevant Quantum Gravity (QG) models in which the velocity of photons in vacuum can depend on their energy. With its unprecedented sensitivity and energy coverage, the Fermi satellite has provided the most constraining results on the QG energy scale so far. In this talk, the latest results obtained from the analysis of four bright GRBs observed by the Large Area Telescope will be reviewed. These robust results, cross-checked using three different analysis techniques, set the limit on the QG energy scale at E(sub QG,1) greater than 7.6 times the Planck energy for linear dispersion and E(sub QG,2) greater than 1.3 x 10(exp 11) gigaelectron volts for quadratic dispersion (95% CL). After describing the data and the analysis techniques in use, the results will be discussed and compared with the latest constraints obtained with Active Galactic Nuclei.
Liu, Bitao; Li, Hongbo; Zhu, Biao; Koide, Roger T; Eissenstat, David M; Guo, Dali
2015-10-01
In most cases, both roots and mycorrhizal fungi are needed for plant nutrient foraging. Frequently, the colonization of roots by arbuscular mycorrhizal (AM) fungi seems to be greater in species with thick and sparsely branched roots than in species with thin and densely branched roots. Yet, whether a complementarity exists between roots and mycorrhizal fungi across these two types of root system remains unclear. We measured traits related to nutrient foraging (root morphology, architecture and proliferation, AM colonization and extramatrical hyphal length) across 14 coexisting AM subtropical tree species following root pruning and nutrient addition treatments. After root pruning, species with thinner roots showed more root growth, but lower mycorrhizal colonization, than species with thicker roots. Under multi-nutrient (NPK) addition, root growth increased, but mycorrhizal colonization decreased significantly, whereas no significant changes were found under nitrogen or phosphate additions. Moreover, root length proliferation was mainly achieved by altering root architecture, but not root morphology. Thin-root species seem to forage nutrients mainly via roots, whereas thick-root species rely more on mycorrhizal fungi. In addition, the reliance on mycorrhizal fungi was reduced by nutrient additions across all species. These findings highlight complementary strategies for nutrient foraging across coexisting species with contrasting root traits. PMID:25925733
Mehra, J.
1987-05-01
In this paper, the main outlines of the discussions between Niels Bohr and Albert Einstein, Werner Heisenberg, and Erwin Schroedinger during 1920-1927 are treated. From the formulation of quantum mechanics in 1925-1926 and wave mechanics in 1926, there emerged Born's statistical interpretation of the wave function in summer 1926, and on the basis of the quantum mechanical transformation theory, formulated in fall 1926 by Dirac, London, and Jordan, Heisenberg formulated the uncertainty principle in early 1927. At the Volta Conference in Como in September 1927 and at the fifth Solvay Conference in Brussels the following month, Bohr publicly enunciated his complementarity principle, which had been developing in his mind for several years. The Bohr-Einstein discussions about the consistency and completeness of quantum mechanics and of physical theory as such, formally begun in October 1927 at the fifth Solvay Conference and carried on at the sixth Solvay Conference in October 1930, were continued during the next decades. All these aspects are briefly summarized.
Durand, Stéphanie; Sancelme, Martine; Besse-Hoggan, Pascale; Combourieu, Bruno
2010-09-01
Enhanced knowledge of pesticide transformation products formed in the environment could lead to both accurate estimates of the overall effects of these compounds on environmental ecosystems and human health and improved removal processes. These compounds can present chemical and environmental behaviours completely different from the starting active ingredient. The difficulty lies in their identification and/or quantification due to the lack of analytical reference standards. In this context, ex situ Nuclear Magnetic Resonance (NMR) and Liquid Chromatography-NMR (LC-NMR) were used as complementary tools to LC-Mass Spectrometry (MS) to define the metabolic pathway of mesotrione, an emergent herbicide, by the bacterial strain Bacillus sp. 3B6. The complementarity of ex situ NMR and LC-NMR allowed us to unambiguously identify six metabolites, whereas the structures of only four metabolites were suggested by LC-MS. The presence of a new metabolic pathway was evidenced by NMR. These results demonstrate that NMR and LC-NMR spectroscopy provided unambiguous structural information for xenobiotic metabolic profiling, even at moderate magnetic field, and allowed direct absolute quantification despite the lack of the commercial or synthetic standards required for LC-MS techniques. PMID:20692682
Linear solvers on multiprocessor machines
Kalogerakis, M.A.
1986-01-01
Two new methods are introduced for the parallel solution of banded linear systems on multiprocessor machines. Moreover, some new techniques are obtained as variations of the two methods that are applicable to special instances of the problem. Comparisons with the best known methods are performed, from which it is concluded that the two methods are superior, while their variations for special instances are, in general, competitive and in some cases best. In the process, some new results on the parallel prefix problem are obtained and a new design for this problem is presented that is suitable for VLSI implementation. Furthermore, a general model is introduced for the analysis and classification of methods that are based on row transformations of matrices. It is seen that most known methods are included in this model. It is demonstrated that this model may be used as a basis for the analysis as well as the generation of important aspects of those methods, such as their arithmetic complexity and interprocessor communication requirements.
NASA Technical Reports Server (NTRS)
Leviton, Douglas B. (Inventor)
1993-01-01
A Linear Motion Encoding device for measuring the linear motion of a moving object is disclosed in which a light source is mounted on the moving object and a position sensitive detector such as an array photodetector is mounted on a nearby stationary object. The light source emits a light beam directed towards the array photodetector such that a light spot is created on the array. An analog-to-digital converter, connected to the array photodetector, is used for reading the position of the spot on the array photodetector. A microprocessor and memory are connected to the analog-to-digital converter to hold and manipulate data provided by the analog-to-digital converter on the position of the spot and to compute the linear displacement of the moving object based upon the data from the analog-to-digital converter.
Linear quantum feedback networks
NASA Astrophysics Data System (ADS)
Gough, J. E.; Gohm, R.; Yanagisawa, M.
2008-12-01
The mathematical theory of quantum feedback networks has recently been developed [J. Gough and M. R. James, e-print arXiv:0804.3442v2] for general open quantum dynamical systems interacting with bosonic input fields. In this article we show, for the special case of linear dynamical Markovian systems with instantaneous feedback connections, that the transfer functions can be deduced and agree with the algebraic rules obtained in the nonlinear case. Using these rules, we derive the transfer functions for linear quantum systems in series, in cascade, and in feedback arrangements mediated by beam splitter devices.
ERIC Educational Resources Information Center
Dobbs, David E.
2013-01-01
A direct method is given for solving first-order linear recurrences with constant coefficients. The limiting value of that solution is studied as "n" tends to infinity. This classroom note could serve as enrichment material for the typical introductory course on discrete mathematics that follows a calculus course.
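The note's direct method is not reproduced in the abstract. As a hedged illustration of the kind of result it describes, the closed form of the constant-coefficient recurrence x_{k+1} = a·x_k + b (names here are illustrative, not the note's notation) can be sketched as:

```python
def solve_recurrence(a, b, x0, n):
    """Closed-form solution of x_{k+1} = a*x_k + b with x_0 = x0.

    For a != 1: x_n = a**n * x0 + b*(a**n - 1)/(a - 1).
    For a == 1 the recurrence is arithmetic: x_n = x0 + n*b.
    """
    if a == 1.0:
        return x0 + n * b
    return a**n * x0 + b * (a**n - 1) / (a - 1)

# When |a| < 1 the solution converges to the fixed point b / (1 - a),
# the "limiting value as n tends to infinity" studied in the note.
limit = 3.0 / (1 - 0.5)  # fixed point for a = 0.5, b = 3
```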
Improved Electrohydraulic Linear Actuators
NASA Technical Reports Server (NTRS)
Hamtil, James
2004-01-01
A product line of improved electrohydraulic linear actuators has been developed. These actuators are designed especially for use in actuating valves in rocket-engine test facilities. They are also adaptable to many industrial uses, such as steam turbines, process control valves, dampers, motion control, etc. The advantageous features of the improved electrohydraulic linear actuators are best described with respect to shortcomings of prior electrohydraulic linear actuators that the improved ones are intended to supplant. The flow of hydraulic fluid to the two ports of the actuator cylinder is controlled by a servo valve that is controlled by a signal from a servo amplifier that, in turn, receives an analog position-command signal (a current having a value between 4 and 20 mA) from a supervisory control system of the facility. As the position command changes, the servo valve shifts, causing a greater flow of hydraulic fluid to one side of the cylinder and thereby causing the actuator piston to move to extend or retract a piston rod from the actuator body. A linear variable differential transformer (LVDT) directly linked to the piston provides a position-feedback signal, which is compared with the position-command signal in the servo amplifier. When the position-feedback and position-command signals match, the servo valve moves to its null position, in which it holds the actuator piston at a steady position.
Resistors Improve Ramp Linearity
NASA Technical Reports Server (NTRS)
Kleinberg, L. L.
1982-01-01
Simple modification to a bootstrap ramp generator gives more linear output over longer sweep times. The new circuit adds just two resistors, one of which is adjustable. The modification cancels nonlinearities due to variations in load on the charging capacitor and due to changes in charging current as the voltage across the capacitor increases.
Linear Classification Functions.
ERIC Educational Resources Information Center
Huberty, Carl J.; Smith, Jerry D.
Linear classification functions (LCFs) arise in a predictive discriminant analysis for the purpose of classifying experimental units into criterion groups. The relative contribution of the response variables to classification accuracy may be based on LCF-variable correlations for each group. It is proved that, if the raw response measures are…
NASA Technical Reports Server (NTRS)
Chandler, J. A. (Inventor)
1985-01-01
The linear motion valve is described. The valve spool employs magnetically permeable rings, spaced apart axially, which engage a sealing assembly having magnetically permeable pole pieces in magnetic relationship with a magnet. The gap between the ring and the pole pieces is sealed with a ferrofluid. Depletion of the ferrofluid is minimized.
PC Basic Linear Algebra Subroutines
Energy Science and Technology Software Center (ESTSC)
1992-03-09
PC-BLAS is a highly optimized version of the Basic Linear Algebra Subprograms (BLAS), a standardized set of thirty-eight routines that perform low-level operations on vectors of numbers in single and double-precision real and complex arithmetic. Routines are included to find the index of the largest component of a vector, apply a Givens or modified Givens rotation, multiply a vector by a constant, determine the Euclidean length, perform a dot product, swap and copy vectors, and find the norm of a vector. The BLAS have been carefully written to minimize numerical problems such as loss of precision and underflow and are designed so that the computation is independent of the interface with the calling program. This independence is achieved through judicious use of Assembly language macros. Interfaces are provided for Lahey Fortran 77, Microsoft Fortran 77, and Ryan-McFarland IBM Professional Fortran.
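PC-BLAS itself is Assembly-optimized, Fortran-callable code. Purely as an illustrative sketch of what some of the level-1 operations named above compute (routine names borrowed from the standard BLAS naming convention; real BLAS routines operate in place with stride arguments):

```python
import math

def daxpy(alpha, x, y):
    """y <- alpha*x + y: the classic level-1 BLAS vector update."""
    return [alpha * xi + yi for xi, yi in zip(x, y)]

def ddot(x, y):
    """Dot product of two vectors."""
    return sum(xi * yi for xi, yi in zip(x, y))

def dnrm2(x):
    """Euclidean length of a vector."""
    return math.sqrt(ddot(x, x))

def idamax(x):
    """Index of the component with largest magnitude."""
    return max(range(len(x)), key=lambda i: abs(x[i]))
```

(A production dnrm2 also scales to avoid the overflow/underflow the abstract mentions; that refinement is omitted here.)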
Linear regression in astronomy. I
NASA Technical Reports Server (NTRS)
Isobe, Takashi; Feigelson, Eric D.; Akritas, Michael G.; Babu, Gutti Jogesh
1990-01-01
Five methods for obtaining linear regression fits to bivariate data with unknown or insignificant measurement errors are discussed: ordinary least-squares (OLS) regression of Y on X, OLS regression of X on Y, the bisector of the two OLS lines, orthogonal regression, and 'reduced major-axis' regression. These methods have been used by various researchers in observational astronomy, most importantly in cosmic distance scale applications. Formulas for calculating the slope and intercept coefficients and their uncertainties are given for all the methods, including a new general form of the OLS variance estimates. The accuracy of the formulas was confirmed using numerical simulations. The applicability of the procedures is discussed with respect to their mathematical properties, the nature of the astronomical data under consideration, and the scientific purpose of the regression. It is found that, for problems needing symmetrical treatment of the variables, the OLS bisector performs significantly better than orthogonal or reduced major-axis regression.
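The OLS-bisector slope favored above for symmetrical problems can be sketched from the slope formulas of Isobe et al. (1990); the uncertainty estimates the abstract emphasizes are omitted in this minimal version:

```python
import numpy as np

def ols_bisector(x, y):
    """Slope and intercept of the bisector of the OLS(Y|X) and
    OLS(X|Y) lines, per Isobe et al. (1990); uncertainties omitted."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    xm, ym = x.mean(), y.mean()
    sxx = ((x - xm) ** 2).sum()
    syy = ((y - ym) ** 2).sum()
    sxy = ((x - xm) * (y - ym)).sum()
    b1 = sxy / sxx  # OLS(Y|X) slope
    b2 = syy / sxy  # OLS(X|Y) slope, expressed as a Y-on-X slope
    # Slope of the line bisecting the angle between the two OLS lines.
    b3 = (b1 * b2 - 1.0 + np.sqrt((1 + b1**2) * (1 + b2**2))) / (b1 + b2)
    return b3, ym - b3 * xm
```

On data lying exactly on a line, both OLS fits and hence the bisector recover that line.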
Character displacement and the evolution of niche complementarity in a model biofilm community.
Ellis, Crystal N; Traverse, Charles C; Mayo-Smith, Leslie; Buskirk, Sean W; Cooper, Vaughn S
2015-02-01
Colonization of vacant environments may catalyze adaptive diversification and be followed by competition within the nascent community. How these interactions ultimately stabilize and affect productivity are central problems in evolutionary ecology. Diversity can emerge by character displacement, in which selection favors phenotypes that exploit an alternative resource and reduce competition, or by facilitation, in which organisms change the environment and enable different genotypes or species to become established. We previously developed a model of long-term experimental evolution in which bacteria attach to a plastic bead, form a biofilm, and disperse to a new bead. Here, we focus on the evolution of coexisting mutants within a population of Burkholderia cenocepacia and how their interactions affected productivity. Adaptive mutants initially competed for space, but later competition declined, consistent with character displacement and the predicted effects of the evolved mutations. The community reached a stable equilibrium as each ecotype evolved to inhabit distinct, complementary regions of the biofilm. Interactions among ecotypes ultimately became facilitative and enhanced mixed productivity. Observing the succession of genotypes within niches illuminated changing selective forces within the community, including a fundamental role for genotypes producing small colony variants that underpin chronic infections caused by B. cenocepacia. PMID:25494960
Self-consistent linearization of non-linear BEM formulations with quadratic convergence
NASA Astrophysics Data System (ADS)
Fernandes, G. R.; de Souza Neto, E. A.
2013-11-01
In this work, a general technique to obtain the self-consistent linearization of non-linear formulations of the boundary element method (BEM) is presented. In the incremental-iterative procedure required to solve the non-linear problem the convergence is quadratic, the solution being obtained from the consistent tangent operator. This technique is applied to non-linear BEM formulations for plates, where two independent problems are discussed: the plate bending and the stretching problem. For both problems an equilibrium equation is written in terms of strains and internal forces, and then the consistent tangent operator is derived by applying the Newton-Raphson scheme. The von Mises criterion is adopted to govern the elasto-plastic material behaviour, checked at points along the plate thickness, although the presented formulations can be used with any non-linear model. Numerical examples are presented showing the accuracy of the results as well as the high convergence rate of the iterative procedure.
Application of linear programming techniques for controlling linear dynamic plants in real time
NASA Astrophysics Data System (ADS)
Gabasov, R.; Kirillova, F. M.; Ha, Vo Thi Thanh
2016-03-01
The problem of controlling a linear dynamic plant in real time given its nondeterministic model and imperfect measurements of the inputs and outputs is considered. The concepts of current distributions of the initial state and disturbance parameters are introduced. The method for the implementation of disclosable loop using the separation principle is described. The optimal control problem under uncertainty conditions is reduced to the problems of optimal observation, optimal identification, and optimal control of the deterministic system. To extend the domain where a solution to the optimal control problem under uncertainty exists, a two-stage optimal control method is proposed. Results are illustrated using a dynamic plant of the fourth order.
ERIC Educational Resources Information Center
Foster, Colin
2012-01-01
This is the story of a real problem, not a problem that is contrived, or invented for the convenience of the appropriate planning tool. This activity by a group of students, defined simply as "8FN", might be likened to an "end of term concert". If you just happened to be a delegate at the ATM Conference 2003 you might remember the analogy. Social…
SLAPP: A systolic linear algebra parallel processor
Drake, B.L.; Luk, F.T.; Speiser, J.M.; Symanski, J.J.
1987-07-01
Systolic array computer architectures provide a means for fast computation of the linear algebra algorithms that form the building blocks of many signal-processing algorithms, facilitating their real-time computation. For applications to signal processing, the systolic array operates on matrices, an inherently parallel view of the data, using numerical linear algebra algorithms that have been suitably parallelized to efficiently utilize the available hardware. This article describes work currently underway at the Naval Ocean Systems Center, San Diego, California, to build a two-dimensional systolic array, SLAPP, demonstrating efficient and modular parallelization of key matrix computations for real-time signal- and image-processing problems.
A program for identification of linear systems
NASA Technical Reports Server (NTRS)
Buell, J.; Kalaba, R.; Ruspini, E.; Yakush, A.
1971-01-01
A program has been written for the identification of parameters in certain linear systems. These systems appear in biomedical problems, particularly in compartmental models of pharmacokinetics. The method presented here assumes that some of the state variables are regularly modified by jump conditions. This simulates administration of drugs following some prescribed drug regime. Parameters are identified by a least-square fit of the linear differential system to a set of experimental observations. The method is especially suited when the interval of observation of the system is very long.
Feedback linearization application for LLRF control system
Kwon, S.; Regan, A.; Wang, Y.M.; Rohlev, T.
1999-06-01
The Low Energy Demonstration Accelerator (LEDA) being constructed at Los Alamos National Laboratory will serve as the prototype for the low energy section of Acceleration Production of Tritium (APT) accelerator. This paper addresses the problem of the LLRF control system for LEDA. The authors propose a control law which is based on exact feedback linearization coupled with gain scheduling which reduces the effect of the deterministic klystron cathode voltage ripple that is due to harmonics of the high voltage power supply and achieves tracking of desired set points. Also, they propose an estimator of the ripple and its time derivative and the estimates based feedback linearization controller.
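The paper's controller is built for the LEDA RF cavity model with a ripple estimator and gain scheduling, none of which is reproduced here. As a minimal sketch of the exact feedback-linearization idea alone, on an assumed scalar toy plant xdot = x**2 + u:

```python
def simulate(x0, k=2.0, dt=0.01, steps=1000):
    """Feedback linearization for the toy plant xdot = x**2 + u.

    The input u = -x**2 + v cancels the nonlinearity exactly, leaving
    the linear system xdot = v; the outer linear law v = -k*x then
    drives x to zero exponentially. Integration is forward Euler.
    """
    x = x0
    for _ in range(steps):
        v = -k * x              # linear outer-loop control
        u = -x**2 + v           # exact linearizing input
        x = x + dt * (x**2 + u) # plant step; x**2 cancels by construction
    return x
```

Because the cancellation is exact, the closed loop contracts by the factor (1 - k*dt) per step regardless of the initial state.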
Pattern Search Methods for Linearly Constrained Minimization
NASA Technical Reports Server (NTRS)
Lewis, Robert Michael; Torczon, Virginia
1998-01-01
We extend pattern search methods to linearly constrained minimization. We develop a general class of feasible point pattern search algorithms and prove global convergence to a Karush-Kuhn-Tucker point. As in the case of unconstrained minimization, pattern search methods for linearly constrained problems accomplish this without explicit recourse to the gradient or the directional derivative. Key to the analysis of the algorithms is the way in which the local search patterns conform to the geometry of the boundary of the feasible region.
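The paper treats linearly constrained problems, with search patterns conforming to the feasible boundary; that machinery is not reproduced here. The simplified unconstrained coordinate-search sketch below only illustrates the derivative-free poll-and-shrink idea common to pattern search methods:

```python
def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=10_000):
    """Minimal coordinate pattern search: poll +/- step along each
    axis; move to the first improving point, and halve the step when
    no polled point improves. No gradients or directional derivatives
    are used."""
    x = list(x0)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5          # refine the pattern
            if step < tol:
                break
    return x, fx
```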
Estimators for overdetermined linear Stokes parameters
NASA Astrophysics Data System (ADS)
Furey, John
2016-05-01
The mathematics of estimating overdetermined polarization parameters is worked out within the context of the inverse modeling of linearly polarized light, and as the primary new result the general solution is presented for estimators of the linear Stokes parameters from any number of measurements. The utility of the general solution is explored in several illustrative examples including the canonical case of two orthogonal pairs. In addition to the actual utility of these estimators in Stokes analysis, the pedagogical discussion illustrates many of the considerations involved in solving the ill-posed problem of overdetermined parameter estimation. Finally, suggestions are made for using a rapidly rotating polarizer for continuously updating polarization estimates.
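The paper's general closed-form estimators are not reproduced in the abstract. Assuming the standard linear-polarizer intensity model I(theta) = (S0 + S1 cos 2*theta + S2 sin 2*theta)/2, a least-squares estimator accepting any number of measurements can be sketched with NumPy:

```python
import numpy as np

def estimate_stokes(angles, intensities):
    """Least-squares estimate of the linear Stokes parameters
    (S0, S1, S2) from intensities measured through a linear polarizer
    at the given angles (radians). Overdetermined for > 3 angles."""
    t = np.asarray(angles, float)
    # Design matrix of the assumed model I = (S0 + S1 cos2t + S2 sin2t)/2.
    A = 0.5 * np.column_stack([np.ones_like(t), np.cos(2 * t), np.sin(2 * t)])
    s, *_ = np.linalg.lstsq(A, np.asarray(intensities, float), rcond=None)
    return s
```

With the canonical orthogonal pairs (0, 90) and (45, 135) degrees this reduces to the familiar difference formulas; extra angles simply tighten the fit.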
Are bilinear quadrilaterals better than linear triangles?
D'Azevedo, E.F.
1993-08-01
This paper compares the theoretical effectiveness of bilinear approximation over quadrilaterals with linear approximation over triangles. Anisotropic mesh transformation is used to generate asymptotically optimally efficient meshes for piecewise linear interpolation over triangles and bilinear interpolation over quadrilaterals. The theory and numerical results suggest triangles may have a slight advantage over quadrilaterals for interpolating convex data function but bilinear approximation may offer a higher order approximation for saddle-shaped functions on a well-designed mesh. This work is a basic study on optimal meshes with the intention of gaining insight into the more complex meshing problems in finite element analysis.
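A quick way to see the saddle-function point above: bilinear interpolation on the unit reference square reproduces the saddle f(x, y) = x*y exactly from its corner values, whereas splitting the square into two linear triangles gives only a piecewise-planar approximation. A minimal sketch (the paper's anisotropic mesh construction is not represented):

```python
def bilinear(f00, f10, f01, f11, s, t):
    """Bilinear interpolation on the unit square from the four corner
    values f(0,0), f(1,0), f(0,1), f(1,1), at local coordinates (s, t)."""
    return (f00 * (1 - s) * (1 - t) + f10 * s * (1 - t)
            + f01 * (1 - s) * t + f11 * s * t)

# For f(x, y) = x*y the corner values are (0, 0, 0, 1) and the
# interpolant is s*t: the saddle is captured exactly.
```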
Reset stabilisation of positive linear systems
NASA Astrophysics Data System (ADS)
Zhao, Xudong; Yin, Yunfei; Shen, Jun
2016-09-01
In this paper, the problems of reset stabilisation for positive linear systems (PLSs) are investigated. Some properties relating to reset control of PLSs are first revealed. It is shown that these properties are different from the corresponding ones of general linear systems. Second, a class of periodic reset scheme is designed to exponentially stabilise an unstable PLS with a prescribed decay rate. Then, for a given PLS with reset control, some discussions on the upper bound of its decay rate are presented. Meanwhile, the reset stabilisation for PLSs in a special case is probed as well. Finally, two numerical examples are used to demonstrate the correctness and effectiveness of the obtained theoretical results.
Reachability analysis of rational eigenvalue linear systems
NASA Astrophysics Data System (ADS)
Xu, Ming; Chen, Liangyu; Zeng, Zhenbing; Li, Zhi-bin
2010-12-01
One of the key problems in the safety analysis of control systems is the exact computation of reachable state spaces for continuous-time systems. Issues related to the controllability and observability of these systems are well studied in systems theory. However, there are not many results on reachability, even for general linear systems. In this study, we present a large class of linear systems with decidable reachable state spaces. This is approached by reducing the reachability analysis to real root isolation of exponential polynomials. Furthermore, we have implemented this method in a Maple package based on symbolic computation and applied it to several examples successfully.
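The paper's implementation uses symbolic computation in Maple; the numeric sketch below only illustrates the underlying reduction. For a diagonal system with rational eigenvalues, asking whether a target state is reachable comes down to isolating a common real root of exponential polynomials (the system and target here are invented for illustration):

```python
import math

# For x' = A x with A = diag(-1, -2) and x(0) = (1, 1), the trajectory is
# x(t) = (e^{-t}, e^{-2t}).  Reachability of the target (1/2, 1/4) reduces
# to a common real root t >= 0 of two exponential polynomials.
def g1(t):  # first component minus its target value
    return math.exp(-t) - 0.5

def g2(t):  # second component minus its target value
    return math.exp(-2.0 * t) - 0.25

def bisect(g, lo, hi, tol=1e-12):
    """Simple bisection root isolation on a sign-changing interval."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

t1 = bisect(g1, 0.0, 10.0)           # root of the first component
reachable = abs(g2(t1)) < 1e-9       # same time must zero the second component
print(t1, reachable)                 # t1 is ln 2 = 0.6931..., True
```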
Proteome research: complementarity and limitations with respect to the RNA and DNA worlds.
Humphery-Smith, I; Cordwell, S J; Blackstock, W P
1997-08-01
-products predicted from DNA sequence is a major contribution to genomic science. The workings of software engines necessary to achieve large-scale proteome analysis are outlined, along with trends towards miniaturisation, analyte concentration and protein detection independent of staining technologies. A challenge for proteome analysis into the future will be to reduce its dependence on two-dimensional (2-D) gel electrophoresis as the preferred method of separating complex mixtures of cellular proteins. Nonetheless, proteome analysis already represents a means of efficiently complementing differential display, high density expression arrays, expressed sequence tags, direct or subtractive hybridisation, chromosomal linkage studies and nucleic acid sequencing as a problem solving tool in molecular biology. PMID:9298643
ALPS - A LINEAR PROGRAM SOLVER
NASA Technical Reports Server (NTRS)
Viterna, L. A.
1994-01-01
Linear programming is a widely-used engineering and management tool. Scheduling, resource allocation, and production planning are all well-known applications of linear programs (LP's). Most LP's are too large to be solved by hand, so over the decades many computer codes for solving LP's have been developed. ALPS, A Linear Program Solver, is a full-featured LP analysis program. ALPS can solve plain linear programs as well as more complicated mixed integer and pure integer programs. ALPS also contains an efficient solution technique for pure binary (0-1 integer) programs. One of the many weaknesses of LP solvers is the lack of interaction with the user. ALPS is a menu-driven program with no special commands or keywords to learn. In addition, ALPS contains a full-screen editor to enter and maintain the LP formulation. These formulations can be written to and read from plain ASCII files for portability. For those less experienced in LP formulation, ALPS contains a problem "parser" which checks the formulation for errors. ALPS creates fully formatted, readable reports that can be sent to a printer or output file. ALPS is written entirely in IBM's APL2/PC product, Version 1.01. The APL2 workspace containing all the ALPS code can be run on any APL2/PC system (AT or 386). On a 32-bit system, this configuration can take advantage of all extended memory. The user can also examine and modify the ALPS code. The APL2 workspace has also been "packed" to be run on any DOS system (without APL2) as a stand-alone "EXE" file, but has limited memory capacity on a 640K system. A numeric coprocessor (80X87) is optional but recommended. The standard distribution medium for ALPS is a 5.25 inch 360K MS-DOS format diskette. IBM, IBM PC and IBM APL2 are registered trademarks of International Business Machines Corporation. MS-DOS is a registered trademark of Microsoft Corporation.
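ALPS itself is an APL2 workspace and is not reproduced here. As a hedged illustration of the kind of small LP such a solver handles, the sketch below finds the optimum by enumerating feasible vertices of the constraint polytope, the geometry that simplex-type solvers exploit (the objective and constraints are invented for illustration):

```python
import itertools
import numpy as np

# Illustrative LP:  maximize 3x + 2y  subject to
#   x + y <= 4,  x + 3y <= 6,  x >= 0,  y >= 0.
c = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0], [1.0, 3.0], [-1.0, 0.0], [0.0, -1.0]])  # A @ v <= b
b = np.array([4.0, 6.0, 0.0, 0.0])

# The optimum of an LP lies at a vertex: an intersection of two active
# constraints that satisfies all the others.
best_val, best_v = -np.inf, None
for i, j in itertools.combinations(range(len(A)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-12:
        continue                          # parallel constraints: no vertex
    v = np.linalg.solve(M, b[[i, j]])
    if np.all(A @ v <= b + 1e-9):         # keep only feasible intersections
        val = c @ v
        if val > best_val:
            best_val, best_v = val, v

print(best_v, best_val)                   # [4. 0.] 12.0
```

Vertex enumeration is exponential in general; real solvers such as ALPS pivot between vertices (simplex) or use interior-point methods instead.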
The International Linear Collider
NASA Astrophysics Data System (ADS)
List, Benno
2014-04-01
The International Linear Collider (ILC) is a proposed e+e- linear collider with a centre-of-mass energy of 200-500 GeV, based on superconducting RF cavities. The ILC would be an ideal machine for precision studies of a light Higgs boson and the top quark, and would have a discovery potential for new particles that is complementary to that of the LHC. The clean experimental conditions would allow the operation of detectors with extremely good performance; two such detectors, ILD and SiD, are currently being designed. Both make use of novel concepts for tracking and calorimetry. The Japanese High Energy Physics community has recently recommended that the ILC be built in Japan.
Banks, R.M.
1986-01-14
This patent describes a linear output nitinol engine consisting of a number of integrated communicating parts. The engine has an external support framework which is described in detail. The patent further describes a wire transport mechanism, a pair of linkage levers with a loom secured to them, a number of nitinol wires strung between the looms, and a power takeoff block secured to the linkage levers. A pulley positioned in a flip-flop supporting bracket and a power takeoff modality including a tension member connected to a power output cable in order to provide linear power output transmission is described. A method for biasing the timing and the mechanism for timing the synchronization of the throw over arms and the flip-flop of the pulley are also described.
NASA Technical Reports Server (NTRS)
Goldowsky, Michael P. (Inventor)
1987-01-01
A reciprocating linear motor is formed with a pair of ring-shaped permanent magnets having opposite radial polarizations, held axially apart by a nonmagnetic yoke, which serves as an axially displaceable armature assembly. A pair of annularly wound coils having axial lengths which differ from the axial lengths of the permanent magnets are serially coupled together in mutual opposition and positioned with an outer cylindrical core in axial symmetry about the armature assembly. One embodiment includes a second pair of annularly wound coils serially coupled together in mutual opposition and an inner cylindrical core positioned in axial symmetry inside the armature radially opposite to the first pair of coils. Application of a potential difference across a serial connection of the two pairs of coils creates a current flow perpendicular to the magnetic field created by the armature magnets, thereby causing limited linear displacement of the magnets relative to the coils.
General linear chirplet transform
NASA Astrophysics Data System (ADS)
Yu, Gang; Zhou, Yiqi
2016-03-01
Time-frequency (TF) analysis (TFA) is an effective tool for characterizing the time-varying features of a signal and has drawn attention over a long period. As TFA has developed, many advanced methods have been proposed that provide more precise TF results, but they inevitably introduce some restrictions. In this paper, we introduce a novel TFA method, termed the general linear chirplet transform (GLCT), which overcomes some limitations of current TFA methods. In numerical and experimental validations, comparison with current TFA methods demonstrates several advantages of GLCT: it characterizes well multi-component signals with distinct non-linear features, is independent of the mathematical model and the initial TFA method, allows reconstruction of the component of interest, and is insensitive to noise.
Eberly, Lynn E
2007-01-01
This chapter describes multiple linear regression, a statistical approach used to describe the simultaneous associations of several variables with one continuous outcome. Important steps in using this approach include estimation and inference, variable selection in model building, and assessing model fit. The special cases of regression with interactions among the variables, polynomial regression, regressions with categorical (grouping) variables, and separate slopes models are also covered. Examples in microbiology are used throughout. PMID:18450050
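A minimal sketch of the estimation and fit-assessment steps the chapter describes, using ordinary least squares; the predictor names and synthetic data below are illustrative inventions, not the chapter's microbiology examples:

```python
import numpy as np

# Multiple linear regression: one continuous outcome y modeled as
# y = b0 + b1*x1 + b2*x2 (predictor names are hypothetical).
rng = np.random.default_rng(0)
n = 50
x1 = rng.uniform(20, 40, n)              # e.g. "temperature"
x2 = rng.uniform(5, 8, n)                # e.g. "pH"
beta_true = np.array([1.0, 0.5, -2.0])
y = beta_true[0] + beta_true[1] * x1 + beta_true[2] * x2  # noiseless outcome

X = np.column_stack([np.ones(n), x1, x2])        # design matrix with intercept
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None) # least-squares estimation

# Model fit: R^2 = 1 - SS_res / SS_tot.
resid = y - X @ beta_hat
r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
print(beta_hat, r2)     # coefficients recovered; R^2 = 1 for noiseless data
```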
NASA Technical Reports Server (NTRS)
Johnston, D. D.
1972-01-01
An evaluation of the precise linear sun sensor relating to future mission applications was performed. The test procedures, data, and results of the dual-axis, solid-state system are included. Brief descriptions of the sensing head and of the system's operational characteristics are presented. A unique feature of the system is that multiple sensor heads with various fields of view may be used with the same electronics.
Relativistic Linear Restoring Force
ERIC Educational Resources Information Center
Clark, D.; Franklin, J.; Mann, N.
2012-01-01
We consider two different forms for a relativistic version of a linear restoring force. The pair comes from taking Hooke's law to be the force appearing on the right-hand side of the relativistic expressions dp/dt or dp/dτ. Either formulation recovers Hooke's law in the non-relativistic limit. In addition to these two forces, we…
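A hedged numerical sketch of the first formulation, dp/dt = -kx with relativistic momentum p = γmv; units and parameter values are chosen for illustration. In the non-relativistic regime the ordinary simple-harmonic period 2π√(m/k) should be recovered:

```python
import math

m, k, c = 1.0, 1.0, 1000.0   # illustrative units; c large so that v << c
x, p = 1e-3, 0.0             # small amplitude, released from rest
dt, t = 1e-5, 0.0

quarter_period = None
while t < 10.0:              # safety bound on integration time
    p -= k * x * dt                                    # dp/dt = -k x
    v = p / (m * math.sqrt(1.0 + (p / (m * c)) ** 2))  # invert p = gamma*m*v
    x_new = x + v * dt
    t += dt
    if x > 0.0 >= x_new:     # first downward zero crossing: quarter period
        quarter_period = t
        break
    x = x_new

print(quarter_period)        # close to pi/2 = 1.5707..., i.e. period ~ 2*pi
```

Increasing the amplitude so that the peak speed approaches c makes the period grow, which is where the two relativistic formulations begin to differ.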
Buttram, M.T.; Ginn, J.W.
1988-06-21
A linear induction accelerator includes a plurality of adder cavities arranged in a series and provided in a structure which is evacuated so that a vacuum inductance is provided between each adder cavity and the structure. An energy storage system for the adder cavities includes a pulsed current source and a respective plurality of bipolar converting networks connected thereto. The bipolar high-voltage, high-repetition-rate square pulse train sets and resets the cavities. 4 figs.
Combustion powered linear actuator
Fischer, Gary J.
2007-09-04
The present invention provides robotic vehicles having wheeled and hopping mobilities that are capable of traversing (e.g. by hopping over) obstacles that are large relative to the robot and are capable of operation in unpredictable terrain over long range. The present invention further provides combustion powered linear actuators, which can include latching mechanisms to facilitate pressurized fueling of the actuators, as can be used to provide wheeled vehicles with a hopping mobility.
Villante, F. L.; Ricci, B.
2010-05-01
We present a new approach to studying the properties of the Sun. We consider small variations of the physical and chemical properties of the Sun with respect to standard solar model predictions and we linearize the structure equations to relate them to the properties of the solar plasma. By assuming that the (variation of) present solar composition can be estimated from the (variation of) nuclear reaction rates and elemental diffusion efficiency in the present Sun, we obtain a linear system of ordinary differential equations which can be used to calculate the response of the Sun to an arbitrary modification of the input parameters (opacity, cross sections, etc.). This new approach is intended to be a complement to the traditional methods for solar model (SM) calculation and allows us to investigate in a more efficient and transparent way the role of parameters and assumptions in SM construction. We verify that these linear solar models recover the predictions of the traditional SMs with a high level of accuracy.
NASA Astrophysics Data System (ADS)
Uhlmann, Armin
2016-03-01
This is an introduction to antilinear operators. Following Wigner, the term "antilinear" is used, as is standard in physics; mathematicians prefer "conjugate linear". By restricting to finite-dimensional complex-linear spaces, the exposition becomes elementary in the functional-analytic sense. Nevertheless, it shows the striking differences from the linear case. The basics of antilinearity are explained in sects. 2, 3, 4, 7 and in sect. 1.2: spectrum, canonical Hermitian form, antilinear rank-one and rank-two operators, the Hermitian adjoint, classification of antilinear normal operators, (skew) conjugations, involutions, and acq-lines, the antilinear counterparts of one-parameter operator groups. Applications include the representation of the Lagrangian Grassmannian by conjugations and its covering by acq-lines, as well as results on equivalence relations. After recalling elementary Tomita-Takesaki theory, antilinear maps associated to a vector of a two-partite quantum system are defined. By allowing modular objects to be written as twisted products of pairs of them, these maps open some new ways to express EPR and teleportation tasks. The appendix presents a look at the rich structure of antilinear operator spaces.
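In finite dimension, every antilinear operator is a linear map composed with entrywise complex conjugation, which makes the defining properties easy to check numerically (a small illustrative sketch, not an example from the text):

```python
import numpy as np

# An antilinear map: Theta(v) = M @ conj(v) for some complex matrix M.
rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
Theta = lambda v: M @ np.conj(v)

u = rng.standard_normal(3) + 1j * rng.standard_normal(3)
v = rng.standard_normal(3) + 1j * rng.standard_normal(3)
lam = 2.0 + 3.0j

# Additive, but homogeneous only up to complex conjugation of the scalar:
print(np.allclose(Theta(u + v), Theta(u) + Theta(v)))        # True
print(np.allclose(Theta(lam * v), np.conj(lam) * Theta(v)))  # True
print(np.allclose(Theta(lam * v), lam * Theta(v)))           # False: not linear
```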
Linear Scaling Electronic Structure Methods with Periodic Boundary Conditions
Gustavo E. Scuseria
2008-02-08
The methodological development and computational implementation of linear scaling quantum chemistry methods for the accurate calculation of electronic structure and properties of periodic systems (solids, surfaces, and polymers) and their application to chemical problems of DOE relevance.
A Linear Theory for Inflatable Plates of Arbitrary Shape
NASA Technical Reports Server (NTRS)
McComb, Harvey G., Jr.
1961-01-01
A linear small-deflection theory is developed for the elastic behavior of inflatable plates of which Airmat is an example. Included in the theory are the effects of a small linear taper in the depth of the plate. Solutions are presented for some simple problems in the lateral deflection and vibration of constant-depth rectangular inflatable plates.
Linear equations in general purpose codes for stiff ODEs
Shampine, L. F.
1980-02-01
It is noted that it is possible to improve significantly the handling of linear problems in a general-purpose code with very little trouble to the user or change to the code. In such situations analytical evaluation of the Jacobian is a lot cheaper than numerical differencing. A slight change in the point at which the Jacobian is evaluated results in a more accurate Jacobian in linear problems. (RWR)
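The point about linear problems can be made concrete: for f(y) = Ay + b the Jacobian is exactly the constant matrix A, so supplying it analytically avoids both the cost and the truncation error of numerical differencing (the stiff test matrix below is an invented example):

```python
import numpy as np

# Stiff linear test problem y' = A y + b: the analytical Jacobian is just A.
A = np.array([[-100.0, 1.0], [0.0, -0.5]])
b = np.array([1.0, 0.0])
f = lambda y: A @ y + b

def numerical_jacobian(f, y, eps=1e-7):
    """One-sided finite-difference Jacobian, built column by column."""
    n = len(y)
    J = np.empty((n, n))
    fy = f(y)
    for j in range(n):
        y_pert = y.copy()
        y_pert[j] += eps
        J[:, j] = (f(y_pert) - fy) / eps
    return J

y0 = np.array([1.0, 2.0])
J_num = numerical_jacobian(f, y0)       # costs n extra f-evaluations
print(np.max(np.abs(J_num - A)))        # tiny rounding-level discrepancy
```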
SUBOPT: A CAD program for suboptimal linear regulators
NASA Technical Reports Server (NTRS)
Fleming, P. J.
1985-01-01
An interactive software package which provides design solutions for both standard linear quadratic regulator (LQR) and suboptimal linear regulator problems is described. Intended for time-invariant continuous systems, the package is easily modified to include sampled-data systems. LQR designs are obtained by established techniques while the large class of suboptimal problems containing controller and/or performance index options is solved using a robust gradient minimization technique. Numerical examples demonstrate features of the package and recent developments are described.
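SUBOPT itself is not available here; as a hedged sketch of the standard LQR design it provides as a baseline, the algebraic Riccati equation can be solved by the classical Hamiltonian eigenvector method (the double-integrator plant and unit weights are illustrative choices):

```python
import numpy as np

# LQR: minimize the integral of x'Qx + u'Ru subject to x' = Ax + Bu.
A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator (illustrative plant)
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
Rinv = np.linalg.inv(R)

# The stable invariant subspace of the Hamiltonian matrix yields the
# Riccati solution P.
H = np.block([[A, -B @ Rinv @ B.T],
              [-Q, -A.T]])
eigvals, eigvecs = np.linalg.eig(H)
stable = eigvecs[:, eigvals.real < 0]     # eigenvectors of stable eigenvalues
X1, X2 = stable[:2, :], stable[2:, :]
P = np.real(X2 @ np.linalg.inv(X1))       # algebraic Riccati solution

K = Rinv @ B.T @ P                        # optimal state feedback u = -K x
print(np.round(K, 6))                     # about [[1.0, 1.732051]] here
print(np.all(np.linalg.eigvals(A - B @ K).real < 0))  # closed loop stable: True
```

The suboptimal problems SUBOPT targets then constrain the structure of K (fixed form) and minimize the same index by gradient descent rather than solving the Riccati equation.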
Minimum-variance fixed-form compensation of linear systems
NASA Technical Reports Server (NTRS)
Johnson, T. L.
1979-01-01
The problem of determining the linear time-invariant compensator of a specified dimension that minimizes the asymptotic expected value of a quadratic form in the state variables of a linear stochastic system of arbitrary order is considered. It is shown that, under appropriate assumptions, the solution of this problem can be interpreted as a minimum-order observer-based or dual-observer-based compensator for an optimally aggregated model of the plant.
Ensemble control of linear systems with parameter uncertainties
NASA Astrophysics Data System (ADS)
Kou, Kit Ian; Liu, Yang; Zhang, Dandan; Tu, Yanshuai
2016-07-01
In this paper, we study the optimal control problem for a class of four-dimensional linear systems based on quaternionic and Fourier analysis. When the control is unconstrained, the optimal ensemble controller for these linear ensemble control systems is given in terms of prolate spheroidal wave functions. For the constrained convex optimisation problem of such systems, quadratic programming is presented to obtain the optimal control laws. Simulations are given to verify the effectiveness of the proposed theory.
Linearly exact parallel closures for slab geometry
Ji, Jeong-Young; Held, Eric D.; Jhang, Hogun
2013-08-15
Parallel closures are obtained by solving a linearized kinetic equation with a model collision operator using the Fourier transform method. The closures expressed in wave number space are exact for time-dependent linear problems to within the limits of the model collision operator. In the adiabatic, collisionless limit, an inverse Fourier transform is performed to obtain integral (nonlocal) parallel closures in real space; parallel heat flow and viscosity closures for density, temperature, and flow velocity equations replace Braginskii's parallel closure relations, and parallel flow velocity and heat flow closures for density and temperature equations replace Spitzer's parallel transport relations. It is verified that the closures reproduce the exact linear response function of Hammett and Perkins [Phys. Rev. Lett. 64, 3019 (1990)] for Landau damping given a temperature gradient. In contrast to their approximate closures where the vanishing viscosity coefficient numerically gives an exact response, our closures relate the heat flow and nonvanishing viscosity to temperature and flow velocity (gradients).
Richter, B.; Bell, R.A.; Brown, K.L.
1980-06-01
The SLAC Linear Collider is designed to achieve an energy of 100 GeV in the electron-positron center-of-mass system by accelerating intense bunches of particles in the SLAC linac and transporting the electron and positron bunches in a special magnet system to a point where they are focused to a radius of about 2 microns and made to collide head on. The rationale for this new type of colliding beam system is discussed, the project is described, some of the novel accelerator physics issues involved are discussed, and some of the critical technical components are described.
Ultrasonic linear measurement system
NASA Technical Reports Server (NTRS)
Marshall, Scot H. (Inventor)
1991-01-01
An ultrasonic linear measurement system uses the travel time of surface waves along the perimeter of a three-dimensional curvilinear body to determine the perimeter of the curvilinear body. The system can also be used piece-wise to measure distances along plane surfaces. The system can be used to measure perimeters where use of laser light, optical means or steel tape would be extremely difficult, time consuming or impossible. It can also be used to determine discontinuities in surfaces of known perimeter or dimension.
NASA Technical Reports Server (NTRS)
Perkins, Gerald S. (Inventor)
1980-01-01
A linear actuator which can apply high forces is described. It includes a reciprocating rod having a threaded portion engaged by a nut that is directly coupled to the rotor of an electric motor. The nut is connected to the rotor in a manner that minimizes loading on the rotor, by the use of a coupling that transmits torque to the nut but permits it to shift axially and radially with respect to the rotor. The nut has a threaded hydrostatic bearing for engaging the threaded rod portion, with an oil-carrying groove in the nut being interrupted.
Linear iterative solvers for implicit ODE methods
NASA Technical Reports Server (NTRS)
Saylor, Paul E.; Skeel, Robert D.
1990-01-01
The numerical solution of stiff initial value problems, which leads to the problem of solving large systems of mildly nonlinear equations, is considered. For many problems derived from engineering and science, a solution is possible only with methods derived from iterative linear equation solvers. A common approach to solving the nonlinear equations is to employ an approximate solution obtained from an explicit method. The error is examined to determine how it is distributed among the stiff and non-stiff components, which bears on the choice of an iterative method. The conclusion is that the error is (roughly) uniformly distributed, a fact that suggests the Chebyshev method (and the accompanying Manteuffel adaptive parameter algorithm). This method is described, with comments also on Richardson's method and its advantages for large problems. Richardson's method and the Chebyshev method with the Manteuffel algorithm are applied to the solution of the nonlinear equations by Newton's method.
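A minimal sketch of Richardson's method for a linear system Ax = b, using the classical optimal fixed parameter ω = 2/(λ_min + λ_max) for a symmetric positive definite matrix (the matrix below is an invented example; the Chebyshev method replaces the fixed ω with a cycling sequence of parameters):

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite example
b = np.array([1.0, 2.0])

# Optimal fixed parameter for SPD A: omega = 2 / (lambda_min + lambda_max),
# which makes the spectral radius of I - omega*A as small as possible.
lam = np.linalg.eigvalsh(A)
omega = 2.0 / (lam[0] + lam[-1])

x = np.zeros(2)
for _ in range(200):
    x = x + omega * (b - A @ x)          # Richardson update: step along residual

print(np.allclose(A @ x, b))             # True: the iteration has converged
```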
Cryle, Max J; Hayes, Patricia Y; De Voss, James J
2012-12-01
The products of cytochrome P450(BM3)-catalysed oxidation of cyclopropyl-containing dodecanoic acids are consistent with the presence of a cationic reaction intermediate, which results in efficient dehydrogenation of the rearranged probes by the enzyme. These results highlight the importance of enzyme-substrate complementarity, with a cationic intermediate occurring only when the probes used begin to diverge from ideal substrates for this enzyme. This also aids in reconciling literature reports supporting the presence of cationic intermediates with certain cytochrome P450 enzyme/substrate pairs. PMID:23109039